On the integration of reinforcement learning and approximate reasoning for control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
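As a concrete illustration of the TD algorithm mentioned above, here is a minimal tabular TD(0) value-estimation sketch in Python; the random-walk environment, learning rate, and discount factor are invented stand-ins for illustration, not the ARIC controller or its plant.

import random

n_states, alpha, gamma = 5, 0.1, 0.95
V = [0.0] * n_states  # value estimates, one per state

def step(s):
    # Hypothetical environment: a random walk with a reward at the right end.
    s2 = min(max(s + random.choice([-1, 1]), 0), n_states - 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

s = n_states // 2
for _ in range(10000):
    s2, r = step(s)
    # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s')
    V[s] += alpha * (r + gamma * V[s2] - V[s])
    s = n_states // 2 if s2 == n_states - 1 else s2  # restart after the goal

print([round(v, 2) for v in V])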
ERIC Educational Resources Information Center
CAPOBIANCO, RUDOLPH J.; AND OTHERS
A study was made to establish and analyze the methods of solving inductive reasoning problems by mentally retarded children. The major objectives were: (1) to explore and describe reasoning in mentally retarded children, (2) to compare their methods with those utilized by normal children of approximately the same mental age, (3) to explore the…
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.
Probabilistic Reasoning for Plan Robustness
NASA Technical Reports Server (NTRS)
Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.
2005-01-01
A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Common approximation methods and novel methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.
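To make the parametric-distribution idea concrete, the sketch below projects the completion time of a serial chain of activities whose durations are modeled as independent Gaussians, then scores a deadline constraint in closed form. The activities, their moments, and the deadline are illustrative assumptions, not the paper's planning domain.

from math import erf, sqrt

activities = [(10.0, 2.0), (5.0, 1.0), (8.0, 3.0)]  # (mean, std) durations

mu = sum(m for m, s in activities)
var = sum(s * s for m, s in activities)  # variances add for independent terms

def p_violation(deadline):
    # P(end time > deadline) under the Gaussian sum approximation.
    z = (deadline - mu) / sqrt(var)
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(p_violation(30.0))  # probability the chain misses a deadline of 30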
Mean-field approximation for spacing distribution functions in classical systems
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
Application of plausible reasoning to AI-based control systems
NASA Technical Reports Server (NTRS)
Berenji, Hamid; Lum, Henry, Jr.
1987-01-01
Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.
Incorporation of varying types of temporal data in a neural network
NASA Technical Reports Server (NTRS)
Cohen, M. E.; Hudson, D. L.
1992-01-01
Most neural network models do not specifically deal with temporal data. Handling of these variables is complicated by the different uses to which temporal data are put, depending on the application. Even within the same application, temporal variables are often used in a number of different ways. In this paper, types of temporal data are discussed, along with their implications for approximate reasoning. Methods for integrating approximate temporal reasoning into existing neural network structures are presented. These methods are illustrated in a medical application for diagnosis of graft-versus-host disease which requires the use of several types of temporal data.
Energy conservation - A test for scattering approximations
NASA Technical Reports Server (NTRS)
Acquista, C.; Holland, A. C.
1980-01-01
The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for ensembles of nonspherical particles reveals additional problems with that method.
Overview of psychiatric ethics IV: the method of casuistry.
Robertson, Michael; Ryan, Christopher; Walter, Garry
2007-08-01
The aim of this paper is to describe the method of ethical analysis known as casuistry and consider its merits as a basis of ethical deliberation in psychiatry. Casuistry approximates the legal arguments of common law. It examines ethical dilemmas by adopting a taxonomic approach to 'paradigm' cases, using a technique akin to that of normative analogical reasoning. Casuistry offers a useful method of ethical reasoning by providing a practical means of evaluating the merits of a particular course of action in a particular clinical situation. As a method of ethical reasoning in psychiatry, casuistry suffers from a paucity of paradigm cases and from its failure to fully contextualize ethical dilemmas, since it relies on common morality theory as its basis.
Detecting Edges in Images by Use of Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve
2003-01-01
A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that to perceive unfamiliar objects, or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
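The abstract does not spell out its rule base, so the following is only a toy sketch of fuzzy-reasoning edge scoring: normalized gradient magnitude is fuzzified with "low" and "high" memberships, and a two-rule base is defuzzified into an edge score. The membership breakpoints are assumptions for illustration, not the NASA method's values.

import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet at a and c, peak at b.
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fuzzy_edges(img):
    gr, gc = np.gradient(img.astype(float))
    g = np.hypot(gr, gc)
    g = g / (g.max() + 1e-9)          # normalize gradient magnitude to [0, 1]
    low = tri(g, -0.2, 0.0, 0.4)      # membership in "low gradient"
    high = tri(g, 0.2, 1.0, 1.6)      # membership in "high gradient"
    # Rule base: IF gradient is high THEN edge; IF gradient is low THEN no edge.
    # Defuzzify with a weighted average of the rule outputs (1 and 0).
    return high / (high + low + 1e-9)

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # toy step image
print(np.round(fuzzy_edges(img), 2))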
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
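As an illustration of the kind of approximate linguistic description involved, the toy sketch below represents distances as triangular fuzzy numbers ("about 2 m", "about 5 m") and composes them along a path with the extension principle (sup-min over sums). The membership shapes are invented, not the paper's model.

import numpy as np

d = np.linspace(0.0, 12.0, 241)   # uniform distance grid, step 0.05

def about(c, spread=1.0):
    # Triangular fuzzy number centered at c with the given half-width.
    return np.maximum(0.0, 1.0 - np.abs(d - c) / spread)

a, b = about(2.0), about(5.0)     # "about 2 m" and "about 5 m"

# Extension principle for the sum: mu(z) = sup over x+y=z of min(mu_a(x), mu_b(y)).
# On a uniform grid starting at 0, d[i] + d[j] = d[i + j].
total = np.zeros_like(d)
for i in range(len(d)):
    for j in range(len(d) - i):
        total[i + j] = max(total[i + j], min(a[i], b[j]))

print(d[total.argmax()])          # peaks near 7.0, i.e., "about 7 m"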
Coherent Anomaly Method Calculation on the Cluster Variation Method. II.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
1991-10-01
The critical exponents of the bond percolation model are calculated in the D(= 2,3,…)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is reasonable to try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
NASA Technical Reports Server (NTRS)
Hoebel, Louis J.
1993-01-01
The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM involves receiving data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and whether execution should continue. Only the spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need to be used; these are shown to be useful for the anytime nature of PEM.
Advanced Methods of Approximate Reasoning
1990-11-30
[Indexed excerpt from the report's reference list and acknowledgments, citing N.J. Nilsson, Probabilistic Logic, Artificial Intelligence 13:81-132, 1980, and R. Reiter, On Closed World Data Bases (in H. Gallaire and J. Minker, eds., Logic and Data Bases), and thanking Dr. Abraham Waksman of the Air Force Office of Scientific Research and Dr. David Hislop of the Army Research Office; no abstract is available.]
Advanced Concepts and Methods of Approximate Reasoning
1989-12-01
[Indexed excerpt from the report's acknowledgments, naming Nadal Battle, Hamid Berenji, Piero Bonissone, Bernadette Bouchon-Meunier, Miguel Delgado, Claudi Alsina, Didier Dubois, Francesc Esteva, Oscar Firschein, Marty Fischler, Pascal Fua, and others; no abstract is available.]
A Summary of Research in Science Education--1984.
ERIC Educational Resources Information Center
Lawson, Anton E.; And Others
This review covers approximately 300 studies, including journal articles, dissertations, and papers presented at conferences. The studies are organized under these major headings: status surveys; scientific reasoning; elementary school science (student achievement, student conceptions/misconceptions, student curiosity/attitudes, teaching methods,…
Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model
NASA Astrophysics Data System (ADS)
Chen, Huaizhen; Zhang, Guangzhi
2018-03-01
Natural fractures play an important role in migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, which is implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and the reasonableness of its results. Synthetic tests confirm that the method makes stable estimates of fracture compliances from seismic data with a moderate signal-to-noise ratio for Gaussian noise, and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
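For readers unfamiliar with the CHO, the sketch below computes a channelized Hotelling observer statistic and its detectability index on synthetic white-noise images; the channel matrix, signal profile, and image statistics are invented stand-ins rather than the paper's MAP-reconstruction theory.

import numpy as np

rng = np.random.default_rng(0)
npix, nchan, n = 64, 4, 500

U = rng.standard_normal((npix, nchan))          # stand-in channel matrix
signal = np.zeros(npix); signal[30:34] = 1.0    # known signal profile

# Simulated signal-absent / signal-present images on a white-noise background.
g0 = rng.standard_normal((n, npix))
g1 = rng.standard_normal((n, npix)) + signal

v0, v1 = g0 @ U, g1 @ U                         # channel outputs
S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))         # pooled channel covariance
w = np.linalg.solve(S, v1.mean(0) - v0.mean(0)) # Hotelling template
t0, t1 = v0 @ w, v1 @ w                         # observer test statistics

# Detectability index d' from the statistic's class-conditional moments.
d = (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))
print(round(float(d), 2))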
NASA Astrophysics Data System (ADS)
Liu, Jian; Ren, Zhongzhou; Xu, Chang
2018-07-01
Combining the modified Skyrme-like model and the local density approximation model, the slope parameter L of the symmetry energy is extracted from the properties of finite nuclei with an improved iterative method. The calculations of the iterative method are performed within the framework of spherical symmetry. By choosing 200 neutron-rich nuclei on 25 isotopic chains as candidates, the slope parameter is constrained to 50 MeV < L < 62 MeV. The validity of this method is examined against the properties of finite nuclei. Results show that reasonable descriptions of the properties of finite nuclei and nuclear matter can be obtained simultaneously.
NASA Astrophysics Data System (ADS)
Khatami, Ehsan; Macridin, Alexandru; Jarrell, Mark
2008-03-01
Recently, several authors have employed the "glue" approximation for the cuprates, in which the full pairing vertex is approximated by the spin susceptibility. We study this approximation using quantum Monte Carlo dynamical cluster approximation methods on a 2D Hubbard model. By considering a reasonable finite value for the next-nearest-neighbor hopping, we find that this "glue" approximation, in its current form, does not capture the correct pairing symmetry: d-wave is not the leading pairing symmetry, while it is the dominant symmetry in the "exact" QMC results. We argue that the sensitivity of this approximation to band structure changes leads to this inconsistency and that this form of interaction may not be the appropriate description of the pairing mechanism in the cuprates. We suggest improvements to this approximation which help to capture the essential features of the QMC data.
Conservative Analytical Collision Probabilities for Orbital Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2004-01-01
The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
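The abstract does not state its formula, so the following sketch illustrates one generic conservative bound of this flavor: the probability mass inside the hard-body circle can never exceed the circle's area times the peak of the relative-position density. That inequality always holds, but it is an assumption that it matches the paper's approximation; the numbers are illustrative.

from math import pi

def pc_upper_bound(r_hb, sx, sy):
    # Upper bound on 2D collision probability for hard-body radius r_hb and
    # uncorrelated relative-position standard deviations sx, sy:
    # P(inside circle) <= (circle area) * (peak Gaussian density).
    peak_density = 1.0 / (2.0 * pi * sx * sy)
    return min(1.0, pi * r_hb**2 * peak_density)

print(pc_upper_bound(r_hb=0.01, sx=0.5, sy=0.2))  # km units, illustrative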
DOT National Transportation Integrated Search
1992-08-26
This document provides the basic information needed to estimate a general probability of collision in Low Earth Orbit (LEO). Although the method described in this primer is a first order approximation, its results are reasonable. Furthermore, t...
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
2012-12-01
...One begins with the Eikonal equation for the acoustic phase function S(t,x) as derived from the geometric acoustics (high-frequency) approximation to... zb(x) is smooth and reasonably approximated as piecewise linear. The time-domain ray (characteristic) equations for the Eikonal equation are ẋ(t) = c... travel time is affected, which is more physically relevant than global error in φ since it provides the phase information for the Eikonal equation (2.1)...
Accurate and Efficient Approximation to the Optimized Effective Potential for Exchange
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Kananenka, Alexei A.; Staroverov, Viktor N.
2013-07-01
We devise an efficient practical method for computing the Kohn-Sham exchange-correlation potential corresponding to a Hartree-Fock electron density. This potential is almost indistinguishable from the exact-exchange optimized effective potential (OEP) and, when used as an approximation to the OEP, is vastly better than all existing models. Using our method one can obtain unambiguous, nearly exact OEPs for any reasonable finite one-electron basis set at the same low cost as the Krieger-Li-Iafrate and Becke-Johnson potentials. For all practical purposes, this solves the long-standing problem of black-box construction of OEPs in exact-exchange calculations.
AVCS Simulator Test Plan and Design Guide
NASA Technical Reports Server (NTRS)
Shelden, Stephen
2001-01-01
Internal document for communication of AVCS direction and documentation of simulator functionality. Discusses methods for AVCS simulator evaluation of pilot functions and an implementation strategy for varying the functional representation of pilot tasks (by instantiating a base AVCS to reasonably approximate the interfaces of various vehicles, e.g., Altair, GlobalHawk).
Monte Carlo simulations of medical imaging modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, G.P.
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Volterra integral equation-factorisation method and nucleus-nucleus elastic scattering
NASA Astrophysics Data System (ADS)
Laha, U.; Majumder, M.; Bhoi, J.
2018-04-01
An approximate solution for the nuclear Hulthén plus atomic Hulthén potentials is constructed by solving the associated Volterra integral equation by the series substitution method. Within the framework of the supersymmetry-inspired factorisation method, this solution is exploited to construct higher partial wave interactions. The merit of our approach is examined by computing elastic scattering phases of the α-α system by judicious use of the phase function method. Reasonable agreement in phase shifts is obtained with standard data.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and the Wilks method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
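One easy special case shows why a closed form is available: under the rare-event approximation, an AND gate's top-event probability is a product of lognormal basic-event probabilities and is therefore exactly lognormal, with log-space parameters that simply add. The sketch below checks this against Monte Carlo; the gate structure and parameters are illustrative, not the article's model.

import numpy as np

rng = np.random.default_rng(1)
mus = np.array([-6.0, -5.0, -7.0])   # log-space means of basic-event probabilities
sigs = np.array([0.5, 0.8, 0.6])     # log-space standard deviations

# Closed form: a product of lognormals is lognormal(sum of mus, sqrt of summed variances).
mu_top, sig_top = mus.sum(), np.sqrt((sigs**2).sum())
median_closed = np.exp(mu_top)

# Monte Carlo check of the AND-gate (product) top event.
p = np.exp(mus + sigs * rng.standard_normal((100000, 3)))
median_mc = np.median(p.prod(axis=1))

print(median_closed, median_mc)      # the two medians should agree closely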
A diffusion approximation for ocean wave scatterings by randomly distributed ice floes
NASA Astrophysics Data System (ADS)
Zhao, Xin; Shen, Hayley
2016-11-01
This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.
The frozen nucleon approximation in two-particle two-hole response functions
Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...
2017-07-10
Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the "frozen" meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting "frozen nucleon approximation", the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
Calculation of wing response to gusts and blast waves with vortex lift effect
NASA Technical Reports Server (NTRS)
Chao, D. C.; Lan, C. E.
1983-01-01
A numerical study of the response of aircraft wings to atmospheric gusts and to nuclear explosions when flying at subsonic speeds is presented. The method is based upon the unsteady quasi-vortex-lattice method, the unsteady suction analogy, and Padé approximants. The calculated results, showing the vortex lag effect, yield reasonable agreement with experimental data for incremental lift on wings in gust penetration and due to nuclear blast waves.
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Zaky, M. A.
2015-01-01
In this paper, we propose and analyze an efficient operational formulation of spectral tau method for multi-term time-space fractional differential equation with Dirichlet boundary conditions. The shifted Jacobi operational matrices of Riemann-Liouville fractional integral, left-sided and right-sided Caputo fractional derivatives are presented. By using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problem. Furthermore, the error is estimated and the proposed method has reasonable convergence rates in spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm if we suitably choose the corresponding special cases of Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic marked graphs are a concurrent, decision-free formalism provided with a powerful synchronization mechanism generalizing conventional fork-join queueing networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide-and-conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency, and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs; e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems, where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively, leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that the response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.
The Torsion of Members Having Sections Common in Aircraft Construction
NASA Technical Reports Server (NTRS)
Trayer, George W; March, H W
1930-01-01
Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine, by experiment and theoretical investigation, how accurate the more common of these formulas are and on what assumptions they are founded, and, if none of the proposed methods proved to be reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of the known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods, with formulas both rigorous and approximate.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinlvas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
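As a numerical illustration of the matrix measure idea, the sketch below computes the 2-norm matrix measure (logarithmic norm) and applies the classical delay-independent sufficient condition mu2(A) + ||B|| < 0 for x'(t) = A x(t) + B x(t - tau). This is the standard textbook test; it is an assumption that it mirrors the paper's margin formula, and the matrices are invented.

import numpy as np

def mu2(A):
    # 2-norm matrix measure: largest eigenvalue of the symmetric part of A.
    return float(np.linalg.eigvalsh(0.5 * (A + A.T)).max())

A = np.array([[-3.0, 1.0], [0.0, -2.0]])   # nominal closed-loop dynamics
B = np.array([[0.5, 0.0], [0.1, 0.4]])     # delayed feedback term

test = mu2(A) + np.linalg.norm(B, 2)
print(test, "stable for all delays" if test < 0 else "inconclusive")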
NASA Astrophysics Data System (ADS)
Cheng, Rongjun; Sun, Fengxin; Wei, Qi; Wang, Jufeng
2018-02-01
The space-fractional advection-dispersion equation (SFADE) can describe particle transport in a variety of fields more accurately than classical integer-order models. Because of the nonlocal property of the integro-differential operator of the space-fractional derivative, it is very challenging to deal with fractional models, and few have been reported in the literature. In this paper, a numerical analysis of the two-dimensional SFADE is carried out by the element-free Galerkin (EFG) method. The trial functions for the SFADE are constructed by moving least-squares (MLS) approximation. The energy functional is formulated from the Galerkin weak form, and minimizing it yields the final system of algebraic equations. The Riemann-Liouville operator is discretized by the Grünwald formula. With the central difference method, the EFG method, and the Grünwald formula, fully discrete approximation schemes for the SFADE are established. The computed approximate solutions are presented in tables and graphs and compared with exact results and with available results from other well-known methods. The presented results demonstrate the validity, efficiency, and accuracy of the proposed techniques. Furthermore, the error is computed, and the proposed method has reasonable convergence rates in spatial and temporal discretizations.
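The Grünwald formula mentioned above is easy to state: the weights obey w_0 = 1, w_k = w_{k-1}(1 - (α + 1)/k), and the left-sided fractional derivative of order α at x_i is approximated by h^(-α) Σ_k w_k f(x_{i-k}). A small self-contained check against the known Riemann-Liouville derivative of x² follows; the grid and test function are illustrative.

import numpy as np
from math import gamma

def gl_weights(alpha, n):
    w = np.empty(n + 1); w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(f_vals, alpha, h):
    # Left-sided Grünwald-Letnikov approximation on a uniform grid.
    n = len(f_vals)
    w = gl_weights(alpha, n)
    return np.array([np.dot(w[:i + 1], f_vals[i::-1]) for i in range(n)]) / h**alpha

x = np.linspace(0.0, 1.0, 101)
d = gl_derivative(x**2, alpha=0.5, h=x[1] - x[0])
# Exact Riemann-Liouville derivative of x^2 of order 1/2 is 2*x**1.5/Gamma(2.5).
print(abs(d[-1] - 2 * 1.0**1.5 / gamma(2.5)))   # small first-order error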
Fast Simulations of Gas Sloshing and Cold Front Formation
NASA Technical Reports Server (NTRS)
Roediger, E.; ZuHone, J. A.
2011-01-01
We present a simplified and fast method for simulating minor mergers between galaxy clusters. Instead of following the evolution of the dark matter halos directly by the N-body method, we employ a rigid potential approximation for both clusters. The simulations are run in the rest frame of the more massive cluster and account for the resulting inertial accelerations in an optimised way. We test the reliability of this method for studies of minor merger induced gas sloshing by performing a one-to-one comparison between our simulations and hydro+N-body ones. We find that the rigid potential approximation reproduces the sloshing-related features well except for two artefacts: the temperature just outside the cold fronts is slightly over-predicted, and the outward motion of the cold fronts is delayed by typically 200 Myr. We discuss reasons for both artefacts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is comprised of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code and the deployment package for installation on client machines.
Electronic properties of excess Cr at Fe site in FeCr{sub 0.02}Se alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Sandeep, E-mail: sandeepk.iitb@gmail.com; Singh, Prabhakar P.
2015-06-24
We have studied the effect of substituting excess transition-metal chromium (Cr) on the Fe sub-lattice on the electronic structure of the iron-selenide alloy FeCr{sub 0.02}Se. In our calculations, we used the Korringa-Kohn-Rostoker coherent potential approximation method in the atomic sphere approximation (KKR-ASA-CPA). We obtained a band structure for this alloy different from that of the parent FeSe, which may be the reason for the change in its superconducting properties. We performed spin-unpolarized calculations of the density of states (DOS) and Fermi surfaces for the FeCr{sub 0.02}Se alloy. The local density approximation (LDA) is used for the exchange-correlation potential.
Approximate Model Checking of PCTL Involving Unbounded Path Properties
NASA Astrophysics Data System (ADS)
Basu, Samik; Ghosh, Arka P.; He, Ru
We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas…
Discovering relevance knowledge in data: a growing cell structures approach.
Azuaje, F; Dubitzky, W; Black, N; Adamson, K
2000-01-01
Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar, or of how they should be indexed, is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight into and understanding of the underlying mechanisms of the proposed model, a detailed empirical analysis of the model's structural and behavioral properties is also provided.
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at symmetry, symplecticity, and approximation of invariants of the proposed methods, which allows integration to long times with reasonable accuracy. Computational efficiency is also our aim. We therefore make numerical computations in order to compare the methods considered, and we conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.
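A minimal sketch of what such a scheme can look like, assuming a second-order explicit Lawson (integrating-factor Heun) method for a cubic NLS u_t = i u_xx + i|u|²u on a periodic interval, with the solution projected back onto the norm sphere after each step as the abstract recommends. The equation normalization, grid, and initial data are illustrative, not the authors' exact setup.

import numpy as np

n, L_dom, h, steps = 256, 2 * np.pi, 1e-3, 2000
x = np.linspace(0, L_dom, n, endpoint=False)
k = np.fft.fftfreq(n, d=L_dom / n) * 2 * np.pi
expL = np.exp(-1j * k**2 * h)             # exact flow of the linear part u_t = i u_xx

def N(u):                                  # nonlinear part i |u|^2 u
    return 1j * np.abs(u)**2 * u

u = np.exp(1j * x) / np.sqrt(L_dom)        # illustrative initial data
norm0 = np.linalg.norm(u)

for _ in range(steps):
    k1 = N(u)
    u_pred = np.fft.ifft(expL * np.fft.fft(u + h * k1))      # Lawson-Euler predictor
    k2 = N(u_pred)
    u = np.fft.ifft(expL * np.fft.fft(u + 0.5 * h * k1)) + 0.5 * h * k2
    u *= norm0 / np.linalg.norm(u)         # project back onto the norm sphere

print(np.linalg.norm(u))                   # norm conserved by construction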
Estimating ice particle scattering properties using a modified Rayleigh-Gans approximation
NASA Astrophysics Data System (ADS)
Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Verlinde, Johannes
2014-09-01
A modification to the Rayleigh-Gans approximation is made that includes self-interactions between different parts of an ice crystal, which both improves the accuracy of the Rayleigh-Gans approximation and extends its applicability to polarization-dependent parameters. This modified Rayleigh-Gans approximation is both efficient and reasonably accurate for particles with at least one dimension much smaller than the wavelength (e.g., dendrites at millimeter or longer wavelengths) or particles with sparse structures (e.g., low-density aggregates). Relative to the Generalized Multiparticle Mie method, backscattering reflectivities at horizontal transmit and receive polarization (HH) (ZHH) computed with this modified Rayleigh-Gans approach are about 3 dB more accurate than with the traditional Rayleigh-Gans approximation. For realistic particle size distributions and pristine ice crystals the modified Rayleigh-Gans approach agrees with the Generalized Multiparticle Mie method to within 0.5 dB for ZHH whereas for the polarimetric radar observables differential reflectivity (ZDR) and specific differential phase (KDP) agreement is generally within 0.7 dB and 13%, respectively. Compared to the A-DDA code, the modified Rayleigh-Gans approximation is several to tens of times faster if scattering properties for different incident angles and particle orientations are calculated. These accuracies and computational efficiencies are sufficient to make this modified Rayleigh-Gans approach a viable alternative to the Rayleigh-Gans approximation in some applications such as millimeter to centimeter wavelength radars and to other methods that assume simpler, less accurate shapes for ice crystals. This method should not be used on materials with dielectric properties much different from ice and on compact particles much larger than the wavelength.
Transport of phase space densities through tetrahedral meshes using discrete flow mapping
NASA Astrophysics Data System (ADS)
Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor
2017-01-01
Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics, and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.
Iterative CT reconstruction using coordinate descent with ordered subsets of data
NASA Astrophysics Data System (ADS)
Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.
2016-04-01
Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
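The abstract leaves the algorithmic details to the paper, so the following is a loose sketch of the combination it names, applied to a generic penalized weighted least-squares problem: coordinate descent sweeps use gradient and curvature terms computed from one ordered subset of the data at a time, scaled by the number of subsets. The random test problem is a stand-in, not a CT system model.

import numpy as np

rng = np.random.default_rng(2)
m, n, n_subsets, beta = 120, 30, 4, 0.1
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.05 * rng.standard_normal(m)
w = np.ones(m)                          # statistical weights
subsets = np.array_split(np.arange(m), n_subsets)

x = np.zeros(n)                         # constant (zero) initial image
for it in range(10):
    for S in subsets:                   # one ordered subset per sub-iteration
        rS = A[S] @ x - b[S]            # residual on this subset only
        for j in range(n):              # coordinate descent sweep
            aj = A[S, j]
            g = n_subsets * (aj @ (w[S] * rS)) + beta * x[j]   # scaled gradient
            H = n_subsets * (aj @ (w[S] * aj)) + beta          # scaled curvature
            step = g / H
            x[j] -= step
            rS -= step * aj             # keep the subset residual consistent

print(np.linalg.norm(x - x_true))       # should shrink toward the noise floor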
Population genetics inference for longitudinally-sampled mutants under strong selection.
Lacerda, Miguel; Seoighe, Cathal
2014-11-01
Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
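For reference, the discrete Wright-Fisher model with selection that the authors recommend for large selection coefficients is only a few lines: a deterministic selection step followed by binomial resampling of 2N gene copies. The population size, selection coefficient, and initial frequency below are illustrative.

import numpy as np

rng = np.random.default_rng(3)
N, s, gens, p = 1000, 0.1, 50, 0.05   # population size, selection, generations, start

traj = [p]
for _ in range(gens):
    # Deterministic selection step, then binomial genetic drift.
    p = p * (1 + s) / (p * (1 + s) + (1 - p))
    p = rng.binomial(2 * N, p) / (2 * N)
    traj.append(p)

print([round(q, 3) for q in traj[::10]])   # mutant frequency every 10 generations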
Elastic scattering of low-energy electrons by nitromethane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopes, A. R.; D'A Sanchez, S.; Bettega, M. H. F.
2011-06-15
In this work, we present integral, differential, and momentum transfer cross sections for elastic scattering of low-energy electrons by nitromethane, for energies up to 10 eV. We calculated the cross sections using the Schwinger multichannel method with pseudopotentials, in the static-exchange and in the static-exchange plus polarization approximations. The computed integral cross sections show a {pi}* shape resonance at 0.70 eV in the static-exchange-polarization approximation, which is in reasonable agreement with experimental data. We also found a {sigma}* shape resonance at 4.8 eV in the static-exchange-polarization approximation, which has not been previously characterized by experiment. We also discuss how these resonances may play a role in the dissociation process of this molecule.
Multimodal far-field acoustic radiation pattern: An approximate equation
NASA Technical Reports Server (NTRS)
Rice, E. J.
1977-01-01
The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.
NASA Astrophysics Data System (ADS)
Chen, Liping; Zheng, Renhui; Shi, Qiang; Yan, YiJing
2010-01-01
We extend our previous study of absorption line shapes of molecular aggregates using the Liouville space hierarchical equations of motion (HEOM) method [L. P. Chen, R. H. Zheng, Q. Shi, and Y. J. Yan, J. Chem. Phys. 131, 094502 (2009)] to calculate third order optical response functions and two-dimensional electronic spectra of model dimers. As in our previous work, we have focused on the applicability of several approximate methods related to the HEOM method. We show that while the second order perturbative quantum master equations are generally inaccurate in describing the peak shapes and solvation dynamics, they can give reasonable peak amplitude evolution even in the intermediate coupling regime. The stochastic Liouville equation results in good peak shapes, but does not properly describe the excited state dynamics due to the lack of detailed balance. A modified version of the high temperature approximation to the HEOM gives the best agreement with the exact result.
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik
2015-11-01
This study was designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and 1 year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal actor-partner interdependence model) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. © 2015 The British Psychological Society.
Reasons for low influenza vaccination coverage – a cross-sectional survey in Poland
Kardas, Przemyslaw; Zasowska, Anna; Dec, Joanna; Stachurska, Magdalena
2011-01-01
Aim To assess the reasons for low influenza vaccination coverage in Poland, including knowledge of influenza and attitudes toward influenza vaccination. Methods This was a cross-sectional, anonymous, self-administered survey of primary care patients in Lodzkie voivodship (central Poland). The study participants were adults who visited their primary care physicians for various reasons from January 1 to April 30, 2007. Results Six hundred and forty participants completed the survey. In the 12 months before the study, 20.8% of participants had received influenza vaccination. The most common reasons listed by those who had not been vaccinated were good health (27.6%), lack of trust in vaccination effectiveness (16.8%), and the cost of vaccination (9.7%). The most common source of information about influenza vaccination was primary care physicians (46.6%). Despite reasonably good knowledge of influenza, approximately 20% of participants could not point out any differences between influenza and other viral respiratory tract infections. Conclusions The main reasons for low influenza vaccination coverage in Poland were patients’ misconceptions and the cost of vaccination. Therefore, free-of-charge vaccination and more effective informational campaigns are needed, with special focus on high-risk groups. PMID:21495194
Overstory cohort survival in an Appalachian hardwood deferment cutting: 35-year results
John P. Brown; Melissa A. Thomas-Van Gundy; Thomas M. Schuler
2018-01-01
Deferment cutting is a two-aged regeneration method in which the majority of the stand is harvested and a dispersed component of overstory trees (approximately 15-20% of the basal area) is retained for at least one-half rotation and up to one full rotation for reasons other than regeneration. Careful consideration of residual trees, in both characteristics and harvesting,...
NASA Astrophysics Data System (ADS)
Vámos, Tibor
The gist of the paper is the fundamentally uncertain nature of all kinds of uncertainty and, consequently, a critical epistemic review of historical and recent approaches, computational methods, and algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic views, medieval nominalism, and the influential pioneering metaphors of ancient India and Persia, to the birth of modern mathematical disciplinary reasoning. Discussing models of uncertainty, e.g., their statistical, other physical, and psychological backgrounds, we reach a pragmatic, model-related estimation perspective, a balanced application orientation for different problem areas. Data mining, game theories, and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.
Measurement of Antenna Bore-Sight Gain
NASA Technical Reports Server (NTRS)
Fortinberry, Jarrod; Shumpert, Thomas
2016-01-01
The absolute or free-field gain of a simple antenna can be approximated using standard antenna-theory formulae; for a more accurate prediction, numerical methods may be employed to solve for antenna parameters, including gain. Both of these methods result in reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple, low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed, using the Brewster angle relationship.
Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity
NASA Technical Reports Server (NTRS)
Jacquotte, Olivier P.; Oden, J. Tinsley
1994-01-01
Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D.; Wilson, Paul P. H.
In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and 9 ± 5 × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.
Fuzzy Logic for Incidence Geometry
2016-01-01
The paper presents a mathematical framework for approximate geometric reasoning with extended objects in the context of Geography, in which all entities and their relationships are described by human language. These entities could be labelled by commonly used names of landmarks, water areas, and so forth. Unlike single points that are given in Cartesian coordinates, these geographic entities are extended in space and often loosely defined, but people easily perform spatial reasoning with extended geographic objects “as if they were points.” Unfortunately, to date, geographic information systems (GIS) lack the capability of geometric reasoning with extended objects. The aim of the paper is to present a mathematical apparatus for approximate geometric reasoning with extended objects that is usable in GIS. In the paper we discuss fuzzy logic (Aliev and Tserkovny, 2011) as a reasoning system for the geometry of extended objects, as well as a basis for fuzzification of the axioms of incidence geometry. The same fuzzy logic was used for fuzzification of Euclid's first postulate. The fuzzy equivalence relation “extended lines sameness” is introduced. For its approximation we also utilize a fuzzy conditional inference, which is based on the proposed fuzzy “degree of indiscernibility” and “discernibility measure” of extended points. PMID:27689133
Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria
2016-03-01
The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate, and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulence-radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities, and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments, whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and the grey medium approximation. These approximations significantly affect the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produce on the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models properly describing self-absorption should be considered at above-atmospheric pressure.
Sengupta, Aritra; Foster, Scott D.; Patterson, Toby A.; Bravington, Mark
2012-01-01
Data assimilation is a crucial aspect of modern oceanography. It allows the forecasting and backward smoothing of the ocean state from noisy observations. Statistical methods are employed to perform these tasks and are often based on or related to the Kalman filter. Typically, Kalman filters assume that the locations associated with observations are known with certainty. This is reasonable for typical oceanographic measurement methods. Recently, however, an alternative and abundant source of data comes from the deployment of ocean sensors on marine animals. This source of data has some attractive properties: unlike traditional oceanographic collection platforms, it is relatively cheap to collect, plentiful, has multiple scientific uses and users, and samples areas of the ocean that are often difficult or costly to sample. However, inherent uncertainty in the location of the observations is a barrier to full utilisation of animal-borne sensor data in data-assimilation schemes. In this article we examine this issue and suggest a simple approximation to explicitly incorporate the location uncertainty, while staying within the scope of Kalman-filter-like methods. The approximation stems from a Taylor-series approximation to elements of the updating equation. PMID:22900005
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
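As a concrete illustration of the likelihood-ratio alternative discussed above, here is a minimal sketch in Python using statsmodels rather than lme4; the data frame, its columns, and the simulated effect sizes are all hypothetical:

    # Hedged sketch: likelihood-ratio test for a fixed effect in a mixed model.
    # Uses statsmodels (not lme4); 'y', 'x', 'subject' are invented columns.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subj, n_obs = 20, 10
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_subj), n_obs),
        "x": rng.normal(size=n_subj * n_obs),
    })
    u = rng.normal(scale=0.5, size=n_subj)           # random intercepts
    df["y"] = 0.3 * df["x"] + u[df["subject"]] + rng.normal(size=len(df))

    # An LRT requires ML (not REML) fits of nested models.
    full = smf.mixedlm("y ~ x", df, groups=df["subject"]).fit(reml=False)
    null = smf.mixedlm("y ~ 1", df, groups=df["subject"]).fit(reml=False)
    lr = 2 * (full.llf - null.llf)
    p = stats.chi2.sf(lr, df=1)
    print(f"LR = {lr:.2f}, p = {p:.4f}")

Note that this reproduces only the likelihood-ratio procedure the paper evaluates; the Kenward-Roger and Satterthwaite corrections it recommends are implemented in R packages such as pbkrtest and lmerTest rather than here.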
On the unreasonable effectiveness of the post-Newtonian approximation in gravitational physics
Will, Clifford M.
2011-01-01
The post-Newtonian approximation is a method for solving Einstein’s field equations for physical systems in which motions are slow compared to the speed of light and where gravitational fields are weak. Yet it has proven to be remarkably effective in describing certain strong-field, fast-motion systems, including binary pulsars containing dense neutron stars and binary black hole systems inspiraling toward a final merger. The reasons for this effectiveness are largely unknown. When carried to high orders in the post-Newtonian sequence, predictions for the gravitational-wave signal from inspiraling compact binaries will play a key role in gravitational-wave detection by laser-interferometric observatories. PMID:21447714
NASA Technical Reports Server (NTRS)
Oran, W. A.; Reiss, D. A.; Berge, L. H.; Parker, H. W.
1979-01-01
The acoustic fields and levitation forces produced along the axis of a single-axis resonance system were measured. The system consisted of a St. Clair generator and a planar reflector. The levitation force was measured for bodies of various sizes and geometries (i.e., spheres, cylinders, and discs). The force was found to be roughly proportional to the volume of the body until the characteristic body radius reached approximately 2/k (k = wave number). The acoustic pressures along the axis were modeled using Huygens principle and a method of imaging to approximate multiple reflections. The modeled pressures were found to be in reasonable agreement with those measured with a calibrated microphone.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
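A minimal sketch of the core idea, a parametric likelihood approximated from stochastic simulations inside a Metropolis-Hastings sampler, is given below; the toy simulator and its summary statistics are stand-ins, not the FORMIND model:

    # Hedged sketch of a parametric (synthetic) likelihood inside
    # Metropolis-Hastings; the "simulator" and summaries are invented toys.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulator(theta, n=200):
        return rng.normal(theta, 1.0, size=n)      # toy stochastic model

    def summaries(x):
        return np.array([x.mean(), x.std()])

    obs = summaries(simulator(2.0))                 # "observed" summaries

    def synthetic_loglik(theta, n_rep=50):
        # Fit a Gaussian to simulated summaries, evaluate the observed ones.
        S = np.array([summaries(simulator(theta)) for _ in range(n_rep)])
        mu, cov = S.mean(axis=0), np.cov(S.T) + 1e-9 * np.eye(2)
        d = obs - mu
        return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))

    theta, ll = 0.0, synthetic_loglik(0.0)
    chain = []
    for _ in range(2000):                           # Metropolis-Hastings
        prop = theta + rng.normal(scale=0.2)
        ll_prop = synthetic_loglik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(theta)
    print("posterior mean ~", np.mean(chain[500:]))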
Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.
Campos, Daniel G
2018-02-01
This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing: the theorem that, historically, founded the field of mathematical probability. It aims to contribute a perspective on the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique for making experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Proportional Reasoning and the Visually Impaired
ERIC Educational Resources Information Center
Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia
2012-01-01
Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Formanek, Martin; Vana, Martin; Houfek, Karel
2010-09-30
We compare the efficiency of two methods for the numerical solution of the time-dependent Schroedinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicholson method. As a testing system, the free propagation of a particle in one dimension is used. The space discretization is based on high-order finite differences to approximate accurately the kinetic energy operator in the Hamiltonian. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e., on the choice of the time step.
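For readers unfamiliar with the scheme being benchmarked, the following is a minimal sketch of Crank-Nicholson propagation of a free 1-D wave packet; it uses a three-point finite-difference Laplacian rather than the paper's high-order differences, and all grid parameters are illustrative:

    # Hedged sketch: basic Crank-Nicholson step for a free particle in 1-D.
    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import splu

    hbar = m = 1.0
    N, L, dt = 512, 100.0, 0.05
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]

    # H = -(hbar^2 / 2m) d2/dx2 via three-point finite differences
    lap = diags([1, -2, 1], [-1, 0, 1], shape=(N, N)) / dx**2
    H = -(hbar**2) / (2 * m) * lap

    A = (identity(N) + 0.5j * dt / hbar * H).tocsc()   # implicit half step
    B = (identity(N) - 0.5j * dt / hbar * H).tocsc()   # explicit half step
    solve = splu(A).solve

    psi = np.exp(-((x + 10) ** 2) / 4 + 2j * x)        # Gaussian packet, k0 = 2
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
    for _ in range(200):
        psi = solve(B @ psi)                            # one CN time step
    print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)

The unitary character of the scheme shows up directly: the printed norm stays at 1 to rounding error, which is the usual reason Crank-Nicholson-type propagators are attractive despite requiring a linear solve per step.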
Meta-regression approximations to reduce publication selection bias.
Stanley, T D; Doucouliagos, Hristos
2014-03-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
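A minimal sketch of the PEESE idea, WLS regression of effect sizes on their squared standard errors with the intercept read off as the selection-corrected estimate, follows; the simulated studies and the selection rule are invented for illustration:

    # Hedged sketch of a PEESE-style estimator on toy meta-analysis data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    k = 100
    se = rng.uniform(0.05, 0.5, size=k)           # study standard errors
    eff = 0.2 + rng.normal(scale=se)              # true effect = 0.2
    published = (eff / se > 1.0) | (rng.uniform(size=k) < 0.3)  # selection
    e, s = eff[published], se[published]

    X = sm.add_constant(s ** 2)                   # PEESE: effect ~ 1 + SE^2
    peese = sm.WLS(e, X, weights=1.0 / s ** 2).fit()
    print("naive mean:", e.mean().round(3),
          "| PEESE intercept:", peese.params[0].round(3))

On data like these the naive mean is inflated by the selection step while the PEESE intercept sits much closer to the true 0.2, which is the behavior the abstract reports.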
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
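The flavor of such scoping approximations can be conveyed with the standard textbook mass model for a power-limited electric vehicle, in which the payload fraction is exp(-Δv/c) minus a powerplant fraction proportional to c²; the sketch below scans specific impulse numerically, with all mission numbers illustrative rather than taken from the paper:

    # Hedged sketch: scoping-level optimum specific impulse for a low-thrust
    # electric vehicle under the standard mass model; numbers are invented.
    import numpy as np

    dv = 8.0e3             # mission delta-v, m/s (illustrative)
    t = 300 * 86400.0      # thrust time, s
    alpha = 0.025          # powerplant specific mass, kg/W
    g0 = 9.81

    def payload_fraction(isp, eta):
        c = g0 * isp
        fp = 1.0 - np.exp(-dv / c)                # propellant fraction
        fw = alpha * c**2 * fp / (2.0 * eta * t)  # powerplant fraction
        return np.exp(-dv / c) - fw

    isp = np.linspace(1000, 10000, 500)
    eta = 0.7              # thruster efficiency (in the paper it varies with Isp)
    f = payload_fraction(isp, eta)
    print(f"optimum Isp ~ {isp[np.argmax(f)]:.0f} s, "
          f"payload fraction ~ {f.max():.2f}")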
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations.
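The following sketch shows the finite-difference idea on the much simpler one-locus, drift-only Wright-Fisher diffusion (the paper's two-locus problem is substantially harder and uses its own discretization); grid sizes and the initial density are illustrative:

    # Hedged sketch: explicit finite differences for the 1-D Wright-Fisher
    # forward equation, pure drift: dp/dt = (1/(4N)) d2/dx2 [x(1-x) p].
    import numpy as np

    N_e = 100                        # effective population size
    nx, dt, steps = 201, 0.01, 2000
    x = np.linspace(0, 1, nx)
    dx = x[1] - x[0]

    p = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)   # density peaked at x = 0.5
    p /= np.trapz(p, x)

    a = x * (1 - x) / (4 * N_e)                  # diffusion coefficient
    for _ in range(steps):
        q = a * p
        p[1:-1] += dt * (q[2:] - 2 * q[1:-1] + q[:-2]) / dx**2
        p[0] = p[-1] = 0.0           # absorbing boundaries (loss / fixation)
    print("mass still segregating:", np.trapz(p, x).round(3))

The explicit scheme is chosen here only for brevity; its time step is stability-limited, which is one reason implicit discretizations are preferred for production use.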
Vipsita, Swati; Rath, Santanu Kumar
2015-01-01
Protein superfamily classification deals with the problem of predicting the family membership of a newly discovered amino acid sequence. Although many conventional alignment methods have already been developed by previous researchers, the present trend demands the application of computational intelligence techniques. As biological databases grow exponentially in size, retrieval and inference of essential knowledge in the biological domain become a very cumbersome task. This problem can be handled using intelligent techniques due to their tolerance for imprecision, uncertainty, approximate reasoning, and partial truth. This paper discusses the various global and local features extracted from the full-length protein sequence which are used for the approximation and generalisation of the classifier. The various parameters used for evaluating the performance of the classifiers are also discussed. Therefore, this review article can point present researchers in the right direction for improving upon the existing methods.
Medicare Part D Claims Rejections for Nursing Home Residents, 2006 to 2010
Stevenson, David G.; Keohane, Laura M.; Mitchell, Susan L.; Zarowitz, Barbara J.; Huskamp, Haiden A.
2013-01-01
Objectives Much has been written about trends in Medicare Part D formulary design and consumers’ choice of plans, but little is known about the magnitude of claims rejections or their clinical and administrative implications. Our objective was to study the overall rate at which Part D claims are rejected, whether these rates differ across plans, drugs, and medication classes, and how these rejection rates and reasons have evolved over time. Study Design and Methods We performed descriptive analyses of data on paid and rejected Part D claims submitted by 1 large national long-term care pharmacy from 2006 to 2010. In each of the 5 study years, data included approximately 450,000 Medicare beneficiaries living in long-term care settings with approximately 4 million Part D drug claims. Claims rejection rates and reasons for rejection are tabulated for each study year at the plan, drug, and class levels. Results Nearly 1 in 6 drug claims was rejected during the first 5 years of the Medicare Part D program, and this rate has increased over time. Rejection rates and reasons for rejection varied substantially across drug products and Part D plans. Moreover, the reasons for denials evolved over our study period. Coverage has become less of a factor in claims rejections than it was initially and other formulary tools such as drug utilization review, quantity-related coverage limits, and prior authorization are increasingly used to deny claims. Conclusions Examining claims rejection rates can provide important supplemental information to assess plans’ generosity of coverage and to identify potential areas of concern. PMID:23145808
Toward Webscale, Rule-Based Inference on the Semantic Web Via Data Parallelism
2013-02-01
Another work distinct from its peers is the work on approximate reasoning by Rudolph et al. [34] in which multiple inference systems were combined not...Workshop Scalable Semantic Web Knowledge Base Systems, 2010, pp. 17–31. [34] S. Rudolph, T. Tserendorj, and P. Hitzler, “What is approximate reasoning...2013] [55] M. Duerst and M. Suignard. (2005, Jan.). RFC 3987 – internationalized resource identifiers (IRIs). IETF. [Online]. Available: http
Testing actinide fission yield treatment in CINDER90 for use in MCNP6 burnup calculations
Fensin, Michael Lorne; Umbel, Marissa
2015-09-18
Most of the development of the MCNPX/6 burnup capability focused on features that were applied to the Boltzmann transport or used to prepare coefficients for use in CINDER90, with little change to CINDER90 or the CINDER90 data. Though a scheme exists for best solving the coupled Boltzmann and Bateman equations, the most significant approximation is that the employed nuclear data are correct and complete. The CINDER90 library file contains 60 different actinide fission yields encompassing 36 fissionable actinides (thermal, fast, high energy, and spontaneous fission). Fission reaction data exist for more than 60 actinides and, as a result, fission yield data must be approximated for actinides that do not possess fission yield information. Several types of approximations are used for estimating fission yields for actinides which do not possess explicit fission yield data. The objective of this study is to test whether or not certain approximations of fission yield selection have any impact on the predictability of major actinides and fission products. Further, we assess which other fission products, available in MCNP6 Tier 3, result in the largest difference in production. Because the CINDER90 library file is in ASCII format and therefore easily amendable, we assess reasons for choosing among, and compare actinide and major fission product prediction for the H. B. Robinson benchmark for, three separate fission yield selection methods: (1) the current CINDER90 library file method (Base); (2) the element method (Element); and (3) the isobar method (Isobar). Results show that the three methods tested result in similar prediction of major actinides, Tc-99, and Cs-137; however, certain fission products resulted in significantly different production depending on the method of choice.
Measuring Distance of Fuzzy Numbers by Trapezoidal Fuzzy Numbers
NASA Astrophysics Data System (ADS)
Hajjari, Tayebeh
2010-11-01
Fuzzy numbers, and more generally linguistic values, are approximate assessments, given by experts and accepted by decision-makers when obtaining a more accurate value is impossible or unnecessary. Distance between two fuzzy numbers plays an important role in linguistic decision-making. It is reasonable to define a fuzzy distance between fuzzy objects. To achieve this aim, the researcher presents a new distance measure for fuzzy numbers by means of an improved centroid distance method. The metric properties are also studied. The advantage is that the calculation of the proposed method is far simpler than that of previous approaches.
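For orientation, a plain (non-improved) centroid distance between trapezoidal fuzzy numbers can be sketched as follows; this shows only the generic idea of comparing fuzzy numbers through their centroids, not the paper's improved method, and the example numbers are arbitrary:

    # Hedged sketch: generic centroid distance for trapezoidal fuzzy numbers
    # (a, b, c, d); centroids are computed by numerical integration.
    import numpy as np

    def centroid(a, b, c, d, n=10001):
        x = np.linspace(a, d, n)
        mu = np.interp(x, [a, b, c, d], [0, 1, 1, 0])    # trapezoidal membership
        cx = np.trapz(x * mu, x) / np.trapz(mu, x)        # x-centroid
        cy = np.trapz(0.5 * mu**2, x) / np.trapz(mu, x)   # y-centroid of region
        return np.array([cx, cy])

    A = centroid(1, 2, 3, 4)
    B = centroid(2, 2.5, 3.5, 5)
    print("centroid distance:", np.linalg.norm(A - B).round(4))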
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Fletcher, Logan; Carruthers, Peter
2012-01-01
This article considers the cognitive architecture of human meta-reasoning: that is, metacognition concerning one's own reasoning and decision-making. The view we defend is that meta-reasoning is a cobbled-together skill comprising diverse self-management strategies acquired through individual and cultural learning. These approximate the monitoring-and-control functions of a postulated adaptive system for metacognition by recruiting mechanisms that were designed for quite other purposes. PMID:22492753
38 CFR 3.102 - Reasonable doubt.
Code of Federal Regulations, 2010 CFR
2010-07-01
... degree of disability, or any other point, such doubt will be resolved in favor of the claimant. By reasonable doubt is meant one which exists because of an approximate balance of positive and negative...
DFT calculations of electronic and optical properties of SrS with LDA, GGA and mGGA functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Shatendra, E-mail: shatendra@gmai.com; Sharma, Jyotsna; Sharma, Yogita
2016-05-06
The theoretical investigation of the electronic and optical properties of SrS is made using first-principles DFT calculations. The calculations are performed with the local-density approximation (LDA), the generalized gradient approximation (GGA), and an alternative form of GGA, i.e., metaGGA, for both the rock salt (B1, Fm3m) and cesium chloride (B2, Pm3m) structures. The band structure, density of states, and optical spectra are calculated with the various available functionals. The calculations with the LDA and GGA functionals underestimate the band gap values; however, the values with mGGA show reasonably good agreement with experimental values and with those calculated by using other methods.
Generation of tunable laser sidebands in the far-infrared region
NASA Technical Reports Server (NTRS)
Farhoomand, J.; Frerking, M. A.; Pickett, H. M.; Blake, G. A.
1985-01-01
In recent years, several techniques have been developed for the generation of tunable coherent radiation at submillimeter and far-infrared (FIR) wavelengths. The harmonic generation of conventional microwave sources has made it possible to produce spectrometers capable of continuous operation to above 1000 GHz. However, the sensitivity of such instruments drops rapidly with frequency. For this reason, a great deal of attention is given to laser-based methods, which could cover the entire FIR region. Tunable FIR radiation (approximately 100 nW) has been produced by mixing FIR molecular lasers and conventional microwave sources in both open and closed mixer mounts. The present investigation is concerned with improvements in this approach. These improvements provide approximately thirty times more output power than previous results.
NASA Technical Reports Server (NTRS)
Wheatley, John B
1935-01-01
This report presents an extension of the autogiro theory of Glauert and Lock in which the influence of a pitch varying with the blade radius is evaluated and methods of approximating the effect of blade tip losses and the influence of reversed velocities on the retreating blades are developed. A comparison of calculated and experimental results showed that most of the rotor characteristics could be calculated with reasonable accuracy, and that the type of induced flow assumed has a secondary effect upon the net rotor forces, although the flapping motion is influenced appreciably. An approximate evaluation of the effect of parasite drag on the rotor blades established the importance of including this factor in the analysis.
Ripple, Dean C; Montgomery, Christopher B; Hu, Zhishang
2015-02-01
Accurate counting and sizing of protein particles has been limited by discrepancies of counts obtained by different methods. To understand the bias and repeatability of techniques in common use in the biopharmaceutical community, the National Institute of Standards and Technology has conducted an interlaboratory comparison for sizing and counting subvisible particles from 1 to 25 μm. Twenty-three laboratories from industry, government, and academic institutions participated. The circulated samples consisted of a polydisperse suspension of abraded ethylene tetrafluoroethylene particles, which closely mimic the optical contrast and morphology of protein particles. For restricted data sets, agreement between data sets was reasonably good: relative standard deviations (RSDs) of approximately 25% for light obscuration counts with lower diameter limits from 1 to 5 μm, and approximately 30% for flow imaging with specified manufacturer and instrument setting. RSDs of the reported counts for unrestricted data sets were approximately 50% for both light obscuration and flow imaging. Differences between instrument manufacturers were not statistically significant for light obscuration but were significant for flow imaging. We also report a method for accounting for differences in the reported diameter for flow imaging and electrical sensing zone techniques; the method worked well for diameters greater than 15 μm. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Laser induced heat source distribution in bio-tissues
NASA Astrophysics Data System (ADS)
Li, Xiaoxia; Fan, Shifu; Zhao, Youquan
2006-09-01
During numerical simulation of laser and tissue thermal interaction, the light fluence rate distribution should be formulated and constitutes the source term in the heat transfer equation. Usually the solution of the light radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or dominant scattering (diffusion approximation). But in specific conditions, these solutions will induce different errors. The commonly used Monte Carlo simulation (MCS) is more universal and exact but has difficulty dealing with dynamic parameters and fast simulation. Its area partition pattern is also limiting when the finite element method (FEM) is applied to solve the bio-heat transfer partial differential equation. Laser heat-source plots from the above methods differ considerably from MCS. To solve this problem, by analyzing the effects of different optical processes such as reflection, scattering, and absorption on laser-induced heat generation in bio-tissue, a new approach was worked out which combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat-source distribution than past methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.
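The contrast between the attenuation models mentioned above can be made concrete with a short axial heat-source sketch, where the diffusion-theory effective coefficient is mu_eff = sqrt(3 mu_a (mu_a + mu_s')); the tissue coefficients are illustrative, not the paper's:

    # Hedged sketch: axial heat source S(z) = mu_a * fluence(z) under two
    # attenuation models; coefficients are generic soft-tissue values.
    import numpy as np

    mu_a = 0.3         # absorption, 1/cm
    mu_s = 100.0       # scattering, 1/cm
    g = 0.9            # anisotropy factor
    mu_s_red = mu_s * (1 - g)                       # reduced scattering
    mu_t = mu_a + mu_s                               # total attenuation (Lambert-Beer)
    mu_eff = np.sqrt(3 * mu_a * (mu_a + mu_s_red))   # diffusion-theory effective

    phi0 = 1.0                                       # surface fluence, W/cm^2
    for z in np.linspace(0, 1.0, 6):                 # depth, cm
        s_beer = mu_a * phi0 * np.exp(-mu_t * z)
        s_diff = mu_a * phi0 * np.exp(-mu_eff * z)
        print(f"z={z:.1f} cm  Lambert-Beer={s_beer:.3e}  diffusion={s_diff:.3e}")

Because mu_t greatly exceeds mu_eff in scattering-dominated tissue, the Lambert-Beer source collapses within a fraction of a millimeter while the diffusion-based source penetrates much deeper, which is exactly the discrepancy the abstract is addressing.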
A method for calculating aerodynamic heating on sounding rocket tangent ogive noses.
NASA Technical Reports Server (NTRS)
Wing, L. D.
1973-01-01
A method is presented for calculating the aerodynamic heating and shear stresses at the wall for tangent ogive noses that are slender enough to maintain an attached nose shock through that portion of flight during which heat transfer from the boundary layer to the wall is significant. The lower entropy of the attached nose shock combined with the inclusion of the streamwise pressure gradient yields a reasonable estimate of the actual flow conditions. Both laminar and turbulent boundary layers are examined and an approximation of the effects of (up to) moderate angles-of-attack is included in the analysis. The analytical method has been programmed in FORTRAN IV for an IBM 360/91 computer.
Discrimination of Mixed Taste Solutions using Ultrasonic Wave and Soft Computing
NASA Astrophysics Data System (ADS)
Kojima, Yohichiro; Kimura, Futoshi; Mikami, Tsuyoshi; Kitama, Masataka
In this study, the ultrasonic acoustic properties of mixed taste solutions were investigated, and the possibility of taste sensing based on the acoustic properties obtained was examined. In previous studies, properties of solutions were discriminated based on the sound velocity, amplitude, and frequency characteristics of ultrasonic waves propagating through the five basic taste solutions and marketed beverages. However, to make this method applicable to beverages that contain many taste substances, further studies are required. In this paper, the waveform of an ultrasonic wave with a frequency of approximately 5 MHz propagating through mixed solutions composed of sweet and salty substances was measured. As a result, differences among solutions were clearly observed as differences in their properties. Furthermore, these mixed solutions were discriminated by a self-organizing neural network, and the mixing ratios of the solutions were estimated by a distance-type fuzzy reasoning method. The possibility of taste sensing was thus demonstrated using ultrasonic acoustic properties and soft computing, namely the self-organizing neural network and the distance-type fuzzy reasoning method.
A comparative study of an ABC and an artificial absorber for truncating finite element meshes
NASA Technical Reports Server (NTRS)
Oezdemir, T.; Volakis, John L.
1993-01-01
The type of mesh termination used in the context of finite element formulations plays a major role on the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.
Heuristic errors in clinical reasoning.
Rylander, Melanie; Guerrasio, Jeannette
2016-08-01
Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed among third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.
A variable vertical resolution weather model with an explicitly resolved planetary boundary layer
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1981-01-01
A version of the fourth-order weather model incorporating surface wind stress data from SEASAT A scatterometer observations is presented. The Monin-Obukhov similarity theory is used to relate winds at the top of the surface layer to surface wind stress. A reasonable approximation of the surface fluxes of heat, moisture, and momentum is obtainable using this method. A Richardson number adjustment scheme based on the ideas of Chang is used to allow for turbulence effects.
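In the neutral limit, the Monin-Obukhov relation between the surface-layer wind and the stress reduces to the logarithmic profile; a minimal sketch (ignoring the stability corrections the full theory adds, with illustrative constants) is:

    # Hedged sketch: neutral log-law inversion of wind for friction velocity
    # and surface stress; z0 and the reference height are illustrative.
    import numpy as np

    kappa, rho = 0.4, 1.2        # von Karman constant; air density, kg/m^3
    z, z0 = 10.0, 1.5e-4         # reference height and roughness length, m

    def u_star(u_z):
        # invert u(z) = (u*/kappa) ln(z/z0) for the friction velocity u*
        return kappa * u_z / np.log(z / z0)

    for u in (5.0, 10.0, 20.0):
        us = u_star(u)
        print(f"u({z:.0f} m)={u:5.1f} m/s  u*={us:.3f} m/s  "
              f"stress={rho * us**2:.3f} N/m^2")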
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-01-27
Here, we estimate the capacity value of concentrating solar power (CSP) plants without thermal energy storage in the southwestern U.S. Our results show that CSP plants have capacity values that are between 45% and 95% of maximum capacity, depending on their location and configuration. We also examine the sensitivity of the capacity value of CSP to a number of factors and show that capacity factor-based methods can provide reasonable approximations of reliability-based estimates.
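A common capacity-factor-based approximation of this kind averages plant output over the highest-load hours of the year; the sketch below uses synthetic load and output series, not the paper's data or its reliability-based benchmark:

    # Hedged sketch: top-load-hour capacity-factor approximation of capacity
    # value; the load and CSP output series are synthetic toys.
    import numpy as np

    rng = np.random.default_rng(3)
    hours = 8760
    load = 1000 + 300 * rng.random(hours)          # system load, MW (toy)
    csp_output = 100 * np.clip(rng.normal(0.6, 0.3, hours), 0, 1)  # 100 MW plant

    top = np.argsort(load)[-100:]                  # 100 highest-load hours
    capacity_value = csp_output[top].mean() / 100.0
    print(f"approximate capacity value: {capacity_value:.0%} of nameplate")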
Survey of HEPA filter experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, E.H.
1982-07-01
A survey of high efficiency particulate air (HEPA) filter applications and experience at Department of Energy (DOE) sites was conducted to provide an overview of the reasons for and magnitude of HEPA filter changeouts and failures. Results indicated that approximately 58% of the filters surveyed were changed out in the three-year study period, and some 18% of all filters were changed out more than once. Most changeouts (63%) were due to the existence of a high pressure drop across the filter, indicative of filter plugging. Other reasons for changeout included leak-test failure (15%), preventive maintenance service life limit (13%), suspected damage (5%), and radiation buildup (4%). Filter failures occurred with approximately 12% of all installed filters. Of these failures, most (64%) occurred for unknown or unreported reasons. Handling or installation damage accounted for an additional 19% of reported failures. Media ruptures, filter-frame failures, and seal failures each accounted for approximately 5 to 6% of the reported failures.
Bifurcations in models of a society of reasonable contrarians and conformists
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Rechtman, Raúl
2015-10-01
We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.
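A heavily simplified toy map (not the authors' exact equations) reproduces the qualitative scenario: with mostly conformists the mean opinion settles at a polarized fixed point, while a contrarian majority yields coherent oscillations. Here contrarians flip back to conformity beyond an assumed majority threshold m0:

    # Hedged toy reconstruction of a conformist/contrarian mean-field map;
    # the functional forms, beta, and m0 are invented for illustration.
    import numpy as np

    def step(m, q, beta=4.0, m0=0.8):
        conf = np.tanh(beta * m)                     # conformists follow the mean
        contr = np.where(abs(m) < m0, -conf, conf)   # "reasonable" contrarians
        return (1 - q) * conf + q * contr

    for q in (0.2, 0.8):                             # fraction of contrarians
        m = 0.1
        for _ in range(1000):                        # discard transient
            m = step(m, q)
        traj = []
        for _ in range(8):
            m = step(m, q)
            traj.append(round(float(m), 3))
        print(f"q={q}: {traj}")

Running this prints a constant polarized value for q=0.2 and a period-2 alternation for q=0.8, mirroring the pitchfork versus oscillatory regimes described above.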
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
... under Export Control Classification Number (``ECCN'') 0A982, controlled for Crime Control reasons, and..., classified under ECCN 0A982, controlled for Crime Control reasons, and valued at approximately $112, from the... kit, items classified under ECCN 0A982, controlled for Crime Control reasons, and valued at...
Taking stock of medication wastage: Unused medications in US households.
Law, Anandi V; Sakharkar, Prashant; Zargarzadeh, Amir; Tai, Bik Wai Bilvick; Hess, Karl; Hata, Micah; Mireles, Rudolph; Ha, Carolyn; Park, Tony J
2015-01-01
Despite the potential deleterious impact on patient safety, environmental safety, and health care expenditures, the extent of unused prescription medications in US households and the reasons for nonuse remain unknown. To estimate the extent, type, and cost of unused medications and the reasons for their nonuse among US households, a cross-sectional, observational, two-phase study was conducted using a convenience sample in Southern California. A web-based survey (Phase I, n = 238) at one health sciences institution and a paper-based survey (Phase II, n = 68) at planned drug take-back events at three community pharmacies were conducted. The extent, type, and cost of unused medications and the reasons for their nonuse were collected. Approximately 2 of 3 prescription medications were reported unused; disease/condition improved (42.4%), forgetfulness (5.8%), and side effects (6.5%) were the reasons cited for nonuse. Throwing medications in the trash was found to be the most common method of disposal (63%). In Phase I, pain medications (23.3%) and antibiotics (18%) were most commonly reported as unused, whereas in Phase II, 17% of medications for chronic conditions (hypertension, diabetes, cholesterol, heart disease) and 8.3% for mental health problems were commonly reported as unused. Phase II participants indicated the pharmacy as a preferred location for drug disposal. The total estimated cost of unused medications was approximately $59,264.20 (average retail Rx price) to $152,014.89 (AWP) across both phases, borne largely by private health insurance. Extrapolated to a national level, this is approximately $2.4B for the elderly taking five prescription medications to $5.4B for the 52% of US adults who take one prescription medication daily. Two out of three dispensed medications were unused, with nationally projected costs ranging from $2.4B to $5.4B. This wastage raises concerns about adherence, cost, and safety; additionally, it points to the need for public awareness and policy to reduce wastage. Pharmacists can play an important role by educating patients both on appropriate medication use and on disposal. Copyright © 2015 Elsevier Inc. All rights reserved.
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
Theory, method, and application of Method R for the estimation of (co)variance components were reviewed in order to promote the method's appropriate use. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
Silvestrelli, Pier Luigi; Ambrosetti, Alberto
2014-03-28
The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X=Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.
Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model
NASA Astrophysics Data System (ADS)
Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott
2017-08-01
One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model; these methods include Padé approximation and the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are the speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be more than a factor of 3800.
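The Padé-type reduction mentioned above can be illustrated generically (this is not the cell model's actual transfer function): the sketch below builds a [2/2] Padé approximant of a pure delay exp(-s) from its Taylor coefficients using SciPy.

```python
import math

import numpy as np
from scipy.interpolate import pade

# Padé reduction illustrated on G(s) = exp(-s), a pure delay: take Taylor
# coefficients about s = 0, then form a [2/2] rational approximant.
# Reduced-order battery models apply the same idea to electrochemical
# transfer functions, only with different series coefficients.
taylor = [(-1.0) ** k / math.factorial(k) for k in range(5)]
p, q = pade(taylor, 2)              # numerator and denominator as poly1d

s = 0.3
print(np.exp(-s), p(s) / q(s))      # close agreement near s = 0
```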
Quasiparticle self-consistent GW method for the spectral properties of complex materials.
Bruneval, Fabien; Gatti, Matteo
2014-01-01
The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: this is the so-called G0W0 scheme. However, the G0W0 approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G0W0 at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.
Logo recognition in video by line profile classification
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Hanjalic, Alan
2003-12-01
We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.
NASA Astrophysics Data System (ADS)
Tao, Guohua
2017-07-01
A general theoretical framework is derived for the recently developed multi-state trajectory (MST) approach from the time-dependent Schrödinger equation, resulting in equations of motion for coupled nuclear-electronic dynamics equivalent to Hamilton dynamics or the Heisenberg equation based on a new multi-state Meyer-Miller (MM) model. The derived MST formalism incorporates both diabatic and adiabatic representations as limiting cases and reduces to Ehrenfest or Born-Oppenheimer dynamics in the mean-field or single-state limits, respectively. In the general multi-state formalism, nuclear dynamics is represented in terms of a set of individual state-specific trajectories, while in the active state trajectory (AST) approximation, only a single nuclear trajectory on the active state is propagated, with its augmented images running on all other states. The AST approximation combines the advantages of the consistent nuclear-coupled electronic dynamics of the MM model and the single nuclear trajectory of the trajectory surface hopping (TSH) treatment, and therefore may provide a potential alternative to both the Ehrenfest and TSH methods. The resulting algorithm features a consistent description of coupled electronic-nuclear dynamics and excellent numerical stability. Application of the MST approach to several benchmark systems involving multiple nonadiabatic transitions and conical intersections shows reasonably good agreement with exact quantum calculations, and the results in both representations are similar in accuracy. The AST treatment also reproduces the exact results reasonably, sometimes even quantitatively well, with better performance in the adiabatic representation.
Automatic Detection of Driver Fatigue Using Driving Operation Information for Transportation Safety
Li, Zuojin; Chen, Liukui; Peng, Jun; Wu, Ying
2017-01-01
Fatigued driving is a major cause of road accidents. For this reason, the method in this paper uses steering wheel angle (SWA) and yaw angle (YA) information collected under real driving conditions to detect drivers' fatigue levels. It analyzes the operation features of SWA and YA under different fatigue statuses, then calculates the approximate entropy (ApEn) features over a short sliding window on the time series. Using the nonlinear feature construction theory of dynamic time series, a "2-6-6-3" multi-level back-propagation (BP) neural network classifier is designed, with the fatigue features as input, to realize fatigue detection. An approximately 15-h experiment was carried out on a real road, and the data retrieved were segmented and labeled with three fatigue levels after expert evaluation, namely "awake", "drowsy" and "very drowsy". An average accuracy of 88.02% in fatigue identification was achieved in the experiment, endorsing the value of the proposed method for engineering applications. PMID:28587072
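For readers unfamiliar with ApEn, the sketch below is a minimal, self-contained implementation of approximate entropy for a single window of a time series; the paper's sliding-window setup and SWA/YA preprocessing are not reproduced, and m = 2 with r = 0.2·SD are common defaults assumed here for illustration.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)          # common rule of thumb for the tolerance

    def phi(m):
        # all length-m template vectors
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]),
                      axis=2)
        # fraction of templates within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Regular signals give low ApEn; irregular ones give higher values.
t = np.linspace(0, 10 * np.pi, 600)
print(approximate_entropy(np.sin(t)))                                   # low
print(approximate_entropy(np.random.default_rng(1).normal(size=600)))   # high
```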
Counterfactual reasoning: From childhood to adulthood
Rafetseder, Eva; Schwitalla, Maria; Perner, Josef
2013-01-01
The objective of this study was to describe the developmental progression of counterfactual reasoning from childhood to adulthood. In contrast to the traditional view, it was recently reported by Rafetseder and colleagues that even a majority of 6-year-old children do not engage in counterfactual reasoning when asked counterfactual questions (Child Development, 2010, Vol. 81, pp. 376–389). By continuing to use the same method, the main result of the current Study 1 was that performance of the 9- to 11-year-olds was comparable to that of the 6-year-olds, whereas the 12- to 14-year-olds approximated adult performance. Study 2, using an intuitively simpler task based on Harris and colleagues (Cognition, 1996, Vol. 61, pp. 233–259), resulted in a similar conclusion, specifically that the ability to apply counterfactual reasoning is not fully developed in all children before 12 years of age. We conclude that children who failed our tasks seem to lack an understanding of what needs to be changed (events that are causally dependent on the counterfactual assumption) and what needs to be left unchanged and so needs to be kept as it actually happened. Alternative explanations, particularly executive functioning, are discussed in detail. PMID:23219156
NASA Astrophysics Data System (ADS)
Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.
2018-03-01
The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.
NASA Astrophysics Data System (ADS)
Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao
2017-05-01
To describe the complicated nonlinear evolution behavior of fatigue short cracks, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated from a replica fatigue short crack test on nine smooth funnel-shaped specimens, with the replica films observed according to the effective short fatigue crack principle. Owing to the fast decay and nonlinear approximation ability of wavelet analysis, the self-learning ability of neural networks, and the macroscopic searching and global optimization of genetic algorithms, a genetic wavelet neural network can capture the implicit, complex nonlinear relationship when multiple influencing factors are considered together. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared using the genetic wavelet neural network. The simulation results show that the genetic wavelet neural network is a reasonable and effective method for studying the evolution of the fatigue short crack propagation rate. Meanwhile, a traditional data-fitting method for a short crack growth model is also used to fit the test data; it, too, is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the predictions of these two methods is interpreted.
Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity
NASA Astrophysics Data System (ADS)
Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad
2017-12-01
A second-order ordinary differential equation involving an anti-symmetric quadratic nonlinearity changes sign with the displacement, so an oscillator with such a nonlinearity is assumed to behave differently in the positive and negative directions. For this reason, the Harmonic Balance Method (HBM) cannot be applied directly. The main purpose of the present paper is to propose an analytical approximation technique based on the HBM for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with an anti-symmetric quadratic nonlinearity. After applying the HBM, a set of complicated nonlinear algebraic equations is found, and a purely analytical approach is not always fruitful for solving them. In this article, two small parameters are found for which a power series solution produces the desired results. Moreover, the amplitude-frequency relationship is determined in a novel analytical way. The presented technique gives excellent results compared with the corresponding numerical results and is better than the existing ones.
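As a concrete instance of this class of oscillators (using a plain first-order harmonic balance, not the paper's two-small-parameter power-series technique), the sketch below treats x'' + x|x| = 0: balancing the fundamental harmonic gives ω² = 8A/(3π), which can be checked against direct numerical integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order harmonic balance for  x'' + x|x| = 0  (anti-symmetric quadratic).
# With x = A*cos(w*t), the fundamental Fourier component of x|x| is
# (8/(3*pi))*A**2*cos(w*t), which balances -A*w**2*cos(w*t) when
# w**2 = 8*A/(3*pi).
A = 1.0
w_hbm = np.sqrt(8.0 * A / (3.0 * np.pi))
T_hbm = 2.0 * np.pi / w_hbm

# Numerical reference period from direct integration.
def rhs(t, y):
    x, v = y
    return [v, -x * abs(x)]

sol = solve_ivp(rhs, [0.0, 4.0 * T_hbm], [A, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 4.0 * T_hbm, 40001)
x = sol.sol(t)[0]
peaks = t[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]  # local maxima
T_num = np.mean(np.diff(peaks))
print(f"HBM period {T_hbm:.4f}  vs numerical {T_num:.4f}")  # within ~1%
```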
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical and experimental performance is contingent upon the proper selection of a blading-loss parameter.
1989-10-31
[Report ADA240409; documentation page garbled.] …AI (circumscription, non-monotonic reasoning, and default reasoning), our approach is based on fuzzy logic and, more specifically, on the theory of…
Fully decoupled monolithic projection method for natural convection problems
NASA Astrophysics Data System (ADS)
Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il
2017-04-01
To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term with incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally employed to a global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation is required to be solved, and preserves the second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts a three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.
An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1994-01-01
This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The plan of this work is to first and primarily focus on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and to then briefly explore some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.
NASA Astrophysics Data System (ADS)
Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.
2018-02-01
Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of quantiles of the distance measure as a function of input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior; other regions are then immediately rejected. This procedure is repeated as more simulations become available. We apply it to the practical problem of estimation of the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as the basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
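The qABC algorithm itself is not spelled out in the abstract; the toy sketch below shows the general idea under simplifying assumptions (one parameter, a Gaussian forward model, and a single pilot stage rather than the paper's iterative scheme), using a gradient-boosted quantile regressor to rule out prior regions whose predicted distance quantile already exceeds the ABC tolerance.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy qABC sketch: infer the mean theta of a Gaussian from its sample mean.
rng = np.random.default_rng(0)
theta_true = 2.0
obs = rng.normal(theta_true, 1.0, size=100).mean()

def simulate(theta):                  # forward model -> summary statistic
    return rng.normal(theta, 1.0, size=100).mean()

def distance(s):                      # ABC distance metric
    return abs(s - obs)

# Stage 1: small pilot run to train a quantile model of the distance.
pilot_theta = rng.uniform(-5.0, 5.0, size=200)
pilot_dist = np.array([distance(simulate(t)) for t in pilot_theta])
q10 = GradientBoostingRegressor(loss="quantile", alpha=0.10)
q10.fit(pilot_theta.reshape(-1, 1), pilot_dist)

# Stage 2: only simulate where even the predicted 10% distance quantile
# is below the ABC tolerance; reject all other regions immediately.
eps = 0.05
cand = rng.uniform(-5.0, 5.0, size=5000)
promising = cand[q10.predict(cand.reshape(-1, 1)) < eps]
posterior = [t for t in promising if distance(simulate(t)) < eps]
print(f"{len(promising)} of 5000 candidates simulated; "
      f"posterior mean ~ {np.mean(posterior):.2f}")
```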
An approximation function for frequency constrained structural optimization
NASA Technical Reports Server (NTRS)
Canfield, R. A.
1989-01-01
The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations for frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency and perhaps subject to other constraints such as stress, displacement, and gage size as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy requirements of an aircraft or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations or others may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
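The report's exact constraint function is not reproduced here, but the idea behind a Rayleigh-quotient approximation can be sketched as follows: freeze the mode shape at the current design and let the stiffness and mass matrices carry the design dependence, so the frequency constraint becomes an explicit function of the design variables x (notation assumed for illustration):

```latex
\[
  \omega^2(x) \;\approx\; R(x)
  = \frac{\phi_0^{\mathsf{T}}\, K(x)\, \phi_0}{\phi_0^{\mathsf{T}}\, M(x)\, \phi_0},
  \qquad \text{subject to } \omega^2(x) \ge \omega_{\min}^2 ,
\]
```

where \(\phi_0\) is the mode shape at the current design point and \(K(x)\), \(M(x)\) are the design-dependent stiffness and mass matrices.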
Magnetic probing of the solar interior
NASA Technical Reports Server (NTRS)
Benton, E. R.; Estes, R. H.
1985-01-01
The magnetic field patterns in the region beneath the solar photosphere are determined. An approximate method for the downward extrapolation of line-of-sight magnetic field measurements taken at the solar photosphere was developed. It utilizes the mean field theory of electromagnetism in a form thought to be appropriate for the solar convection zone, and a way to test that theory is proposed. The straightforward application of the lowest order theory with the complete model fit to these data does not indicate the existence of any reasonable depth at which flux conservation is achieved.
Scattering by ensembles of small particles experiment, theory and application
NASA Technical Reports Server (NTRS)
Gustafson, B. A. S.
1980-01-01
A hypothetical self-consistent picture of the evolution of prestellar interstellar dust through a comet phase leads to predictions about the composition of the circum-solar dust cloud. Scattering properties of the resulting conglomerates, which have a bird's-nest type of structure, are investigated using a microwave analogue technique. Approximate theoretical methods of general interest are developed which compare favorably with the experimental results. The principal features of the scattering of visible radiation by zodiacal light particles are reasonably reproduced. A component which is suggestive of α-meteoroids is also predicted.
NASA Technical Reports Server (NTRS)
Loane, J. T.; Bowhill, S. A.; Mayes, P. E.
1982-01-01
The effects of atmospheric turbulence and the basis for the coherent scatter radar technique are discussed. The reasons are given for upgrading the radar system to a larger steerable array. Phased-array theory pertinent to the system design is reviewed, along with approximations for maximum directive gain and blind angles due to mutual coupling. The methods and construction techniques employed in the UHF model study are explained. The antenna range is described, with a block diagram for the mode of operation used.
NASA Technical Reports Server (NTRS)
Hartung, Lin C.
1991-01-01
A method for predicting radiation absorption and emission coefficients in thermochemical nonequilibrium flows is developed. The method is called the Langley optimized radiative nonequilibrium code (LORAN). It applies the smeared band approximation for molecular radiation to produce moderately detailed results and is intended to fill the gap between detailed but costly prediction methods and very fast but highly approximate methods. The optimization of the method to provide efficient solutions allowing coupling to flowfield solvers is discussed. Representative results are obtained and compared to previous nonequilibrium radiation methods, as well as to ground- and flight-measured data. Reasonable agreement is found in all cases. A multidimensional radiative transport method is also developed for axisymmetric flows. Its predictions for wall radiative flux are 20 to 25 percent lower than those of the tangent slab transport method, as expected, though additional investigation of the symmetry and outflow boundary conditions is indicated. The method was applied to the peak heating condition of the aeroassist flight experiment (AFE) trajectory, with results comparable to predictions from other methods. The LORAN method was also applied in conjunction with the computational fluid dynamics (CFD) code LAURA to study the sensitivity of the radiative heating prediction to various models used in nonequilibrium CFD. This study suggests that radiation measurements can provide diagnostic information about the detailed processes occurring in a nonequilibrium flowfield because radiation phenomena are very sensitive to these processes.
An experiment-based comparative study of fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing
1989-01-01
An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cartpole balancing problem in real-time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.
SCF-Xα-SW electron densities with the overlapping sphere approximation
NASA Astrophysics Data System (ADS)
McMaster, Blair N.; Smith, Vedene H., Jr.; Salahub, Dennis R.
Self consistent field-Xα-scattered wave (SCF-Xα-SW) calculations have been performed for a series of eight first and second row homonuclear diatomic molecules using both the touching (TS) and 25 per cent overlapping sphere (OS) versions. The OS deformation density maps exhibit much better quantitative agreement with those from other Xα methods, which do not employ the spherical muffin-tin (MT) potential approximation, than do the TS maps. The OS version thus compensates very effectively for the errors involved in the MT approximation in computing electron densities. A detailed comparison between the TS- and OS-Xα-SW orbitals reveals that the reasons for this improvement are surprisingly specific. The dominant effect of the OS approximation is to increase substantially the electron density near the midpoint of bonding σ orbitals, with a consequent reduction of the density behind the atoms. A similar effect occurs for the bonding π orbitals but is less pronounced. These effects are due to a change in hybridization of the orbitals, with the OS approximation increasing the proportion of the subdominant partial waves and hence changing the shapes of the orbitals. It is this increased orbital polarization which so effectively compensates for the lack of (non-spherically symmetric) polarization components in the MT potential, when overlapping spheres are used.
A reinforcement learning-based architecture for fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1992-01-01
This paper introduces a new method for learning to refine a rule-based fuzzy logic controller. A reinforcement learning technique is used in conjunction with a multilayer neural network model of a fuzzy controller. The approximate reasoning based intelligent control (ARIC) architecture proposed here learns by updating its prediction of the physical system's behavior and fine tunes a control knowledge base. Its theory is related to Sutton's temporal difference (TD) method. Because ARIC has the advantage of using the control knowledge of an experienced operator and fine tuning it through the process of learning, it learns faster than systems that train networks from scratch. The approach is applied to a cart-pole balancing system.
A new approach to estimate parameters of speciation models with application to apes.
Becquet, Celine; Przeworski, Molly
2007-10-01
How populations diverge and give rise to distinct species remains a fundamental question in evolutionary biology, with important implications for a wide range of fields, from conservation genetics to human evolution. A promising approach is to estimate parameters of simple speciation models using polymorphism data from multiple loci. Existing methods, however, make a number of assumptions that severely limit their applicability, notably, no gene flow after the populations split and no intralocus recombination. To overcome these limitations, we developed a new Markov chain Monte Carlo method to estimate parameters of an isolation-migration model. The approach uses summaries of polymorphism data at multiple loci surveyed in a pair of diverging populations or closely related species and, importantly, allows for intralocus recombination. To illustrate its potential, we applied it to extensive polymorphism data from populations and species of apes, whose demographic histories are largely unknown. The isolation-migration model appears to provide a reasonable fit to the data. It suggests that the two chimpanzee species became reproductively isolated in allopatry approximately 850 Kya, while Western and Central chimpanzee populations split approximately 440 Kya but continued to exchange migrants. Similarly, Eastern and Western gorillas and Sumatran and Bornean orangutans appear to have experienced gene flow since their splits approximately 90 and over 250 Kya, respectively.
Sternal approximation for bilateral anterolateral transsternal thoracotomy for lung transplantation.
McGiffin, David C; Alonso, Jorge E; Zorn, George L; Kirklin, James K; Young, K Randall; Wille, Keith M; Leon, Kevin; Hart, Katherine
2005-02-01
The traditional incision for bilateral sequential lung transplantation is the bilateral anterolateral transsternal thoracotomy with approximation of the sternal fragments by interrupted stainless steel wire loops; this technique may be associated with an unacceptable incidence of postoperative sternal disruption causing chronic pain and deformity. Approximation of the sternal ends was instead achieved with peristernal cables passed behind the sternum two intercostal spaces above and below the sternal division; the cables were then passed through metal sleeves in front of the sternum, tensioned, and the sleeves crimped. Forty-seven patients underwent sternal closure with this method, and satisfactory bone union occurred in all patients. Six patients underwent removal of the peristernal cables: 1 for infection (with satisfactory bone union after removal of the cables), 3 for cosmetic reasons, 1 during the performance of a median sternotomy for an aortic valve replacement, and 1 at the patient's request before commencing participation in football. This technique of peristernal cable approximation of the sternal ends has successfully eliminated the problem of sternal disruption associated with this incision and is a useful alternative for preventing this complication after bilateral lung transplantation.
Attitude of the Saudi community towards heart donation, transplantation, and artificial hearts.
AlHabeeb, Waleed; AlAyoubi, Fakhr; Tash, Adel; AlAhmari, Leenah; AlHabib, Khalid F
2017-07-01
To understand the attitudes of the Saudi population towards heart donation and transplantation. Methods: A survey using a questionnaire addressing attitudes towards organ transplantation and donation was conducted across 18 cities in Saudi Arabia between September 2015 and March 2016. Results: A total of 1250 respondents participated in the survey. Of these, approximately 91% agree with the concept of organ transplantation, but approximately 17% do not agree with the concept of heart transplantation; 42.4% of the latter reject heart transplants for religious reasons. Only 43.6% of respondents expressed a willingness to donate their heart, and approximately 58% would consent to the donation of a relative's organ after death. A total of 59.7% of respondents believe that organ donation is regulated, and 31.8% fear that doctors will not try hard enough to save their lives if they consent to organ donation. Approximately 77% believe the heart is removed while the donor is alive, although the same proportion of respondents thought they knew what brain death meant. Conclusion: In general, the Saudi population seems to accept the concept of transplantation and is willing to donate, but still holds some reservations towards heart donation.
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system so as to attain the specified system reliability. For large systems, the allocation is often performed at different stages of system design, beginning at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each.
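As a rough sketch of the weighting-factor category (an ARINC-style allocation for a series system under an exponential failure model; the report's actual formulas and the component names below are assumptions for illustration), the system failure-rate budget is split in proportion to normalized weights:

```python
import math

def allocate_reliability(r_system_target, weights, mission_hours=1000.0):
    """Weighting-factor allocation for a series system.

    Converts the system reliability target into a failure-rate budget,
    splits it according to the normalized weights, and returns the
    implied per-component reliability requirements.
    """
    total = sum(weights.values())
    lam_sys = -math.log(r_system_target) / mission_hours  # exponential model
    out = {}
    for name, w in weights.items():
        lam_i = (w / total) * lam_sys
        out[name] = math.exp(-lam_i * mission_hours)
    return out

# Hypothetical box: weights reflect predicted relative failure rates.
targets = allocate_reliability(0.95, {"power": 3, "sensor": 2, "cpu": 1})
print(targets)   # product of the allocated reliabilities equals 0.95
```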
Weeks, James L
2006-06-01
The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust only when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. The policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process, and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit; it is contrary to the precautionary principle, it is not a fair sharing of the burden of uncertainty, and it employs an inappropriate standard of proof. MSHA's own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measured exposure that exceeds the exposure limit.
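To make the statistical point concrete, the sketch below computes a one-tailed 95% criterion threshold under an assumed normal measurement-error model; both the exposure limit and the coefficient of variation are illustrative numbers, not MSHA's actual values or derivation.

```python
from statistics import NormalDist

# Illustrative only: a one-tailed 95% criterion threshold value (CTV) under
# an assumed normal measurement-error model. MSHA's actual derivation may
# differ; the limit and CV below are hypothetical.
limit = 2.0          # mg/m^3 respirable dust standard (assumed)
cv = 0.13            # assumed coefficient of variation of the sampling method
z95 = NormalDist().inv_cdf(0.95)          # ~1.645

ctv = limit * (1.0 + z95 * cv)
print(f"CTV = {ctv:.2f} mg/m^3")
# A measurement between the limit and the CTV exceeds the standard but would
# not be cited -- the effective raising of the limit criticized in the paper.
```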
NASA Astrophysics Data System (ADS)
Kruis, Nathanael J. F.
Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
Missing value imputation strategies for metabolomics data.
Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral
2015-12-01
Missing values can arise for different reasons, and depending on their origin they should be considered and dealt with in different ways. In this research, four methods of imputation have been compared with respect to their effects on the normality and variance of data, on statistical significance, and on the approximation of a suitable threshold for accepting missing data as truly missing. Additionally, the effects of different strategies for controlling the familywise error rate or false discovery, and how they interact with the different strategies for missing value imputation, have been evaluated. Missing values were found to affect the normality and variance of data, and k-means nearest neighbour imputation was the best method tested for restoring them. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as low as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area," and a strategy has therefore been proposed that balances the optimal imputation strategy (k-means nearest neighbour) against the best approximation of positioning real zeros. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
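As a minimal illustration of nearest-neighbour-style imputation (using scikit-learn's KNNImputer as a stand-in; the paper's own implementation, data, and parameter choices are not reproduced here):

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy metabolomics-style matrix: rows = samples, columns = metabolite
# intensities, with np.nan marking missing peaks.
X = np.array([
    [1.0, 2.0, np.nan, 4.0],
    [1.1, np.nan, 3.1, 4.2],
    [0.9, 2.1, 2.9, np.nan],
    [1.2, 1.9, 3.0, 4.1],
])

imputer = KNNImputer(n_neighbors=2)   # average the 2 most similar samples
X_filled = imputer.fit_transform(X)
print(np.round(X_filled, 2))
```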
Uncertainty management by relaxation of conflicting constraints in production process scheduling
NASA Technical Reports Server (NTRS)
Dorn, Juergen; Slany, Wolfgang; Stary, Christian
1992-01-01
Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
Comparison of Direct Solar Energy to Resistance Heating for Carbothermal Reduction of Regolith
NASA Technical Reports Server (NTRS)
Muscatello, Anthony C.; Gustafson, Robert J.
2011-01-01
A comparison of two methods of delivering thermal energy to regolith for the carbothermal reduction process has been performed. The comparison concludes that electrical resistance heating is superior to direct solar energy via solar concentrators for the following reasons: (1) the resistance heating method can process approximately 12 times as much regolith using the same amount of thermal energy as the direct solar energy method because of superior thermal insulation; (2) the resistance heating method is more adaptable to nearer-term robotic exploration precursor missions because it does not require a solar concentrator system; (3) crucible-based methods are more easily adapted to separation of iron metal and glass by-products than direct solar energy because the melt can be poured directly after processing instead of being remelted; and (4) even with projected improvements in the mass of solar concentrators, projected photovoltaic system masses are expected to be even lower.
A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R.; Buenrostro-Mariscal, Raymundo
2017-01-01
There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. PMID:28391241
Errors Using Observational Methods for Ergonomics Assessment in Real Practice.
Diego-Mas, Jose-Antonio; Alcaide-Marzal, Jorge; Poveda-Bautista, Rocio
2017-12-01
The degree to which practitioners correctly use observational methods for musculoskeletal disorder risk assessment was evaluated. Ergonomics assessment is a key issue for the prevention and reduction of work-related musculoskeletal disorders in workplaces. Observational assessment methods appear to be better matched to the needs of practitioners than direct measurement methods, and for this reason they are the most widely used techniques in real work situations. Despite the simplicity of observational methods, those responsible for assessing risks using these techniques should have some experience and know-how in order to be able to use them correctly. We analyzed 442 risk assessments of actual jobs carried out by 290 professionals from 20 countries to determine their reliability. The results show that approximately 30% of the assessments performed by practitioners had errors; in 13% of the assessments, the errors were severe and completely invalidated the results of the evaluation. Thus, despite the simplicity of observational methods, approximately 1 out of 3 assessments conducted by practitioners in actual work situations does not adequately evaluate the level of potential musculoskeletal disorder risk. This study reveals a problem that suggests greater effort is needed to ensure that practitioners possess better knowledge of the techniques used to assess work-related musculoskeletal disorder risks, and that laws and regulations should be stricter as regards the qualifications and skills required of professionals.
An n -material thresholding method for improving integerness of solutions in topology optimization
Watts, Seth; Tortorelli, Daniel A.
2016-04-10
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
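The paper's n-material scheme is not reproduced here; as a sketch of the two-material special case it generalizes, the snippet below implements a standard smooth (tanh-based) threshold projection of volume fractions, together with the derivative a gradient-based optimizer would need (the β and η values are illustrative):

```python
import numpy as np

def threshold(rho, beta=8.0, eta=0.5):
    """Smooth tanh-based projection pushing volume fractions toward 0/1.

    A common two-material projection; the paper's n-material scheme
    generalizes this idea via barycentric coordinates on a simplex.
    """
    num = np.tanh(beta * eta) + np.tanh(beta * (rho - eta))
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return num / den

def threshold_grad(rho, beta=8.0, eta=0.5):
    """Derivative d(threshold)/d(rho), needed for sensitivity analysis."""
    den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
    return beta * (1.0 - np.tanh(beta * (rho - eta)) ** 2) / den

rho = np.linspace(0.0, 1.0, 5)
print(threshold(rho))      # intermediate fractions move toward 0 or 1
```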
Elliptical optical solitary waves in a finite nematic liquid crystal cell
NASA Astrophysics Data System (ADS)
Minzoni, Antonmaria A.; Sciberras, Luke W.; Smyth, Noel F.; Worthy, Annette L.
2015-05-01
The addition of orbital angular momentum has been previously shown to stabilise beams of elliptic cross-section. In this article the evolution of such elliptical beams is explored through the use of an approximate methodology based on modulation theory. An approximate method is used as the equations that govern the optical system have no known exact solitary wave solution. This study brings to light two distinct phases in the evolution of a beam carrying orbital angular momentum. The two phases are determined by the shedding of radiation in the form of mass loss and angular momentum loss. The first phase is dominated by the shedding of angular momentum loss through spiral waves. The second phase is dominated by diffractive radiation loss which drives the elliptical solitary wave to a steady state. In addition to modulation theory, the "chirp" variational method is also used to study this evolution. Due to the significant role radiation loss plays in the evolution of an elliptical solitary wave, an attempt is made to couple radiation loss to the chirp variational method. This attempt furthers understanding as to why radiation loss cannot be coupled to the chirp method. The basic reason for this is that there is no consistent manner to match the chirp trial function to the generated radiating waves which is uniformly valid in time. Finally, full numerical solutions of the governing equations are compared with solutions obtained using the various variational approximations, with the best agreement achieved with modulation theory due to its ability to include both mass and angular momentum losses to shed diffractive radiation.
Polynomial Approximation of Functions: Historical Perspective and New Tools
ERIC Educational Resources Information Center
Kidron, Ivy
2003-01-01
This paper examines the effect of applying symbolic computation and graphics to enhance students' ability to move from a visual interpretation of mathematical concepts to formal reasoning. The mathematics topics involved, Approximation and Interpolation, were taught according to their historical development, and the students tried to follow the…
Module Extraction for Efficient Object Queries over Ontologies with Large ABoxes
Xu, Jia; Shironoshita, Patrick; Visser, Ubbo; John, Nigel; Kabuka, Mansur
2015-01-01
The extraction of logically-independent fragments out of an ontology ABox can be useful for solving the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module, such that it guarantees complete preservation of facts about a given set of individuals, and thus can be reasoned independently w.r.t. the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such an ABox module, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and the time for ontology reasoning based on ABox modules can be improved significantly. PMID:26848490
Metrics for Labeled Markov Systems
NASA Technical Reports Server (NTRS)
Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash
1999-01-01
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The present results are as follows: we develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes; we show that processes at distance zero are bisimilar; we describe a decision procedure to compute the distance between two processes; we show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance; and we introduce an asymptotic metric to capture asymptotic properties of Markov chains and show that parallel composition does not increase asymptotic distance.
Automatic segmentation of relevant structures in DCE MR mammograms
NASA Astrophysics Data System (ADS)
Koenig, Matthias; Laue, Hendrik; Boehler, Tobias; Peitgen, Heinz-Otto
2007-03-01
The automatic segmentation of relevant structures such as the skin edge, chest wall, or nipple in dynamic contrast-enhanced MR imaging (DCE MRI) of the breast provides additional information for computer-aided diagnosis (CAD) systems. Automatic reporting using BI-RADS criteria benefits from information about the location of those structures: lesion positions can be automatically described relative to such reference structures for reporting purposes. Furthermore, this information can assist data reduction for computationally expensive preprocessing such as registration, or visualization of only the segments of current interest. In this paper, a novel automatic method for determining the air-breast boundary (skin edge), approximating the chest wall, and locating the nipples is presented. The method consists of several steps which build on each other: automatic threshold computation leads to the air-breast boundary, which is then analyzed to determine the location of the nipple; finally, the results of both steps are the starting point for the approximation of the chest wall. The proposed process was evaluated on a large data set of DCE MRI recorded by T1 sequences and yielded reasonable results in all cases.
Designing of skull defect implants using C1 rational cubic Bezier and offset curves
NASA Astrophysics Data System (ADS)
Mohamed, Najihah; Majid, Ahmad Abd; Piah, Abd Rahni Mt; Rajion, Zainul Ahmad
2015-05-01
Some of the reasons to construct a skull implant are head trauma after an accident or injury, infection, tumor invasion, or cases where autogenous bone is not suitable for replacement after a decompressive craniectomy (DC). The main objective of our study is to develop a simple method to redesign missing parts of the skull. The procedure begins with segmentation, data approximation, and estimation of the outer wall by a C1 continuous curve; its offset curve is used to generate the inner wall. Harmony search (HS) is a derivative-free, real-parameter metaheuristic optimization algorithm inspired by the musical improvisation process of searching for a perfect state of harmony. In this study, data approximation by a rational cubic Bézier function uses HS to optimize the positions of the middle control points and the values of the weights. All of these phases contribute significantly to making our proposed technique automatic. Graphical examples of several postoperative skulls are displayed to show the effectiveness of our proposed method.
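Since the abstract only names the optimizer, the sketch below is a generic, minimal harmony search (not the authors' implementation); in their setting the objective would be the data-approximation error of the rational cubic Bézier curve as a function of the middle control points and weights, while a toy quadratic is used here:

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Minimal harmony search: minimize f over box bounds.

    hms  = harmony memory size, hmcr = memory-consideration rate,
    par  = pitch-adjustment rate (standard HS control parameters).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    mem = rng.uniform(lo, hi, size=(hms, dim))       # harmony memory
    cost = np.array([f(x) for x in mem])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            if rng.random() < hmcr:                  # pick from memory
                new[d] = mem[rng.integers(hms), d]
                if rng.random() < par:               # pitch adjustment
                    new[d] += rng.normal(0.0, 0.05 * (hi[d] - lo[d]))
            else:                                    # random improvisation
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = int(np.argmax(cost))
        if c < cost[worst]:                          # replace worst harmony
            mem[worst], cost[worst] = new, c
    best = int(np.argmin(cost))
    return mem[best], cost[best]

# Toy objective standing in for the curve-fitting error:
x, fx = harmony_search(lambda x: np.sum((x - 0.3) ** 2), [(0, 1), (0, 1)])
print(x, fx)
```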
Comparison of heaving buoy and oscillating flap wave energy converters
NASA Astrophysics Data System (ADS)
Abu Bakar, Mohd Aftar; Green, David A.; Metcalfe, Andrew V.; Najafian, G.
2013-04-01
Waves offer an attractive source of renewable energy, with relatively low environmental impact, for communities reasonably close to the sea. Two types of simple wave energy converters (WEC), the heaving buoy WEC and the oscillating flap WEC, are studied. Both WECs are considered simple energy converters because they can be modelled, to a first approximation, as single-degree-of-freedom linear dynamic systems. In this study, we estimate the response of both WECs to typical wave inputs (wave height for the buoy and the corresponding wave surge for the flap) using spectral methods. A nonlinear model of the oscillating flap WEC that includes the drag force, modelled by the Morison equation, is also considered. The response to a surge input is estimated by discrete time simulation (DTS), using central difference approximations to derivatives. This is compared with the response of the linear model obtained by DTS and also validated using the spectral method. Bendat's nonlinear system identification (BNLSI) technique was used to analyze the nonlinear dynamic system, since spectral analysis is only suitable for linear dynamic systems. The effects of including the nonlinear term are quantified.
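A hedged sketch of the linear single-degree-of-freedom model and the central-difference DTS, with a frequency-domain check against the analytic transfer function (all parameter values are illustrative, not taken from the paper):

    import numpy as np

    # single-degree-of-freedom WEC model: m*x'' + c*x' + k*x = F(t)
    m, c, k = 1.0e5, 2.0e4, 4.0e5           # illustrative values
    dt, n = 0.01, 60000
    t = np.arange(n) * dt
    omega = 0.8                              # regular-wave frequency (rad/s)
    F = 1.0e4 * np.sin(omega * t)            # wave excitation force

    # DTS with central differences:
    # x'' ~ (x[i+1]-2x[i]+x[i-1])/dt^2, x' ~ (x[i+1]-x[i-1])/(2dt)
    x = np.zeros(n)
    for i in range(1, n - 1):
        x[i+1] = (F[i] - k*x[i]
                  + m*(2*x[i] - x[i-1])/dt**2
                  + c*x[i-1]/(2*dt)) / (m/dt**2 + c/(2*dt))

    # spectral-method check: |H(w)| = 1/sqrt((k - m w^2)^2 + (c w)^2)
    H = 1.0 / np.sqrt((k - m*omega**2)**2 + (c*omega)**2)
    print(np.max(np.abs(x[n//2:])), 1.0e4 * H)   # steady-state amplitude vs theory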
NASA Astrophysics Data System (ADS)
Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.
2016-09-01
A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect, understanding numerical error and the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds-averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Graves, R. A., Jr.
1973-01-01
A method for the rapid calculation of the inviscid shock layer about blunt axisymmetric bodies at an angle of attack of 0 deg has been developed. The procedure is of an inverse nature; that is, a shock wave is assumed and calculations proceed along rays normal to the shock. The solution is iterated until the given body is computed. The flow field solution procedure is programmed at the Langley Research Center for the Control Data 6600 computer. The geometries specified in the program are spheres, ellipsoids, paraboloids, and hyperboloids, which may have conical afterbodies. The normal momentum equation is replaced with an approximate algebraic expression. This simplification significantly reduces machine computation time. Comparisons of the present results with shock shapes and surface pressure distributions obtained by more exact methods indicate that the program provides reasonably accurate results for smooth bodies in axisymmetric flow. However, further research is required to establish the proper approximate form of the normal momentum equation for the two-dimensional case.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the calculated response sensitivities may be of significantly higher accuracy.
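FPI itself is a specialized fast integration algorithm; as a hedged stand-in for the idea of computing response probabilities from a simplified model, a plain Monte Carlo estimate of a stress-exceedance probability might look like this (the stress model and distributions are hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200000
    # simplified beam-stress model sigma = M*c/I with random load and section props
    M = rng.normal(1.0e4, 1.5e3, n)      # bending moment (illustrative distribution)
    c_ = rng.normal(0.05, 0.002, n)      # distance to neutral axis
    I = rng.normal(2.0e-5, 1.0e-6, n)    # second moment of area
    sigma = M * c_ / I
    limit = 40.0e6                       # allowable stress
    print(np.mean(sigma > limit))        # estimated exceedance probability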
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g., numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are of limited value, CFD calculations including uncertainty modeling but omitting error bounds are also of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
Ando, David; Singh, Jahnavi; Keasling, Jay D.; García Martín, Héctor
2018-01-01
Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as providing substantially lower flux bounds for fluxes into the core than previous methods. We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore. PMID:29300340
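A minimal sketch of step (1), assuming a toy stoichiometric network and scipy's linear programming: minimize the total flux entering the core subject to steady-state mass balance and measured exchange fluxes (the network, bounds, and measured values are invented for illustration; the released limitfluxtocore package is the authors' actual implementation):

    import numpy as np
    from scipy.optimize import linprog

    # toy stoichiometric matrix S (metabolites x reactions); S v = 0 at steady state
    # reactions: r0 uptake -> A, r1 A -> B (core), r2 peripheral -> B, r3 B -> biomass
    S = np.array([[ 1, -1,  0,  0],     # metabolite A
                  [ 0,  1,  1, -1]])    # metabolite B
    into_core = np.array([0, 0, 1, 0])  # r2 is the peripheral-to-core flux

    growth = 1.2                        # measured biomass flux (illustrative)
    uptake = 1.0                        # measured uptake flux
    bounds = [(uptake, uptake), (0, 10), (0, 10), (growth, growth)]

    res = linprog(c=into_core, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)    # minimal-influx flux distribution consistent with the data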
Development of a polysilicon process based on chemical vapor deposition, phase 1 and phase 2
NASA Technical Reports Server (NTRS)
Plahutnik, F.; Arvidson, A.; Sawyer, D.; Sharp, K.
1982-01-01
High-purity polycrystalline silicon was produced in experimental, intermediate, and advanced CVD reactors. Data from the intermediate and advanced reactors confirmed earlier results obtained in the experimental reactor. Solar cells fabricated by Westinghouse Electric and Applied Solar Research Corporation met or exceeded baseline cell efficiencies. Feedstocks containing trichlorosilane or silicon tetrachloride are not viable as etch promoters to reduce silicon deposition on bell jars, nor are they capable of meeting program goals for the 1000 MT/yr plant. A post-run HCl etch was found to be a reasonably effective method of reducing silicon deposition on bell jars. Using dichlorosilane as feedstock met the low-cost solar array deposition goal (2.0 g h^-1 cm^-1); however, conversion efficiency was approximately 10% lower than the targeted value of 40 mole percent (32 to 36% achieved), and power consumption was approximately 20 kWh/kg over target at the reactor.
Approximating local observables on projected entangled pair states
NASA Astrophysics Data System (ADS)
Schwarz, M.; Buerschaper, O.; Eisert, J.
2017-06-01
Tensor network states are for good reasons believed to capture ground states of gapped local Hamiltonians arising in the condensed matter context, states which are in turn expected to satisfy an entanglement area law. However, the computational hardness of contracting projected entangled pair states in two- and higher-dimensional systems is often seen as a significant obstacle when devising higher-dimensional variants of the density-matrix renormalization group method. In this work, we show that for those projected entangled pair states that are expected to provide good approximations of such ground states of local Hamiltonians, one can compute local expectation values in quasipolynomial time. We therefore provide a complexity-theoretic justification of why state-of-the-art numerical tools work so well in practice. We finally turn to the computation of local expectation values on quantum computers, providing a meaningful application for a small-scale quantum computer.
Modelling default and likelihood reasoning as probabilistic reasoning
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a qualitative default probabilistic logic (QDP) and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barna, B.A.; Ginn, R.F.
1985-05-01
In computer programs which perform shortcut calculations for multicomponent distillation, the Gilliland correlation continues to be used even though errors of up to 60% (compared with rigorous plate-to-plate calculations) were shown by Erbar and Maddox. Average absolute differences were approximately 30% for Gilliland's correlation versus 4% for the Erbar-Maddox method. The reason the Gilliland correlation continues to be used appears to be the availability of an equation by Eduljee which facilitates the correlation's use in computer programs. A new equation is presented in this paper that represents the Erbar-Maddox correlation of trays with reflux for multicomponent distillation. At low reflux ratios, results show more trays are needed than would be estimated by Gilliland's method.
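The paper's new Erbar-Maddox equation is not reproduced in the abstract; for context, the Eduljee fit of the Gilliland correlation that it discusses can be coded directly, using Y = 0.75(1 - X^0.5668) with X = (R - Rmin)/(R + 1) and Y = (N - Nmin)/(N + 1):

    def gilliland_eduljee(R, Rmin, Nmin):
        # Eduljee's curve fit of the Gilliland correlation
        X = (R - Rmin) / (R + 1.0)
        Y = 0.75 * (1.0 - X**0.5668)
        return (Nmin + Y) / (1.0 - Y)    # solve Y = (N - Nmin)/(N + 1) for N

    print(gilliland_eduljee(R=3.0, Rmin=2.0, Nmin=10))   # estimated actual tray count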
The possible modifications of the HISSE model for pure LANDSAT agricultural data
NASA Technical Reports Server (NTRS)
Peters, C.
1981-01-01
A method for relaxing the assumption of class conditional independence of LANDSAT spectral measurements within the same patch (field) is discussed. Theoretical arguments are given which show that any significant refinement of the model beyond this proposal will not allow the reduction, essential to HISSE, of the pure data to patch summary statistics. A slight alteration of the new model is shown to be a reasonable approximation to the model which describes pure data elements from the same patch as jointly Gaussian with a covariance function which exhibits exponential decay with respect to spatial separation.
Refining fuzzy logic controllers with machine learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1994-01-01
In this paper, we describe the GARIC (Generalized Approximate Reasoning-Based Intelligent Control) architecture, which learns from its past performance and modifies the labels in the fuzzy rules to improve performance. It uses fuzzy reinforcement learning which is a hybrid method of fuzzy logic and reinforcement learning. This technology can simplify and automate the application of fuzzy logic control to a variety of systems. GARIC has been applied in simulation studies of the Space Shuttle rendezvous and docking experiments. It has the potential of being applied in other aerospace systems as well as in consumer products such as appliances, cameras, and cars.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performance. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization, and performance analysis using commercial codes or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using an approximation model (response surface). The Response Surface Method (RSM) is generally used to predict system performance in engineering research, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of the response surface, and the optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using a Backpropagation Artificial Neural Network (BPANN), an approach referred to as the Neuro-Response Surface Method (NRSM). The optimization is done on the generated response surface by the non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of a marine system and a ship structure (the substructure of a floating offshore wind turbine, considering hydrodynamic performance, and bulk carrier bottom stiffened panels, considering structural performance), we have confirmed the applicability of the proposed method to multi-objective side-constraint optimization problems.
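A minimal sketch of the NRSM idea, assuming a one-hidden-layer backpropagation network trained on samples of an expensive response (the response function, network size, and learning rate are illustrative; the study's BPANN and the NSGA-II step are not reproduced):

    import numpy as np

    rng = np.random.default_rng(1)
    # training data from an expensive analysis (stand-in: a nonlinear response)
    X = rng.uniform(-1, 1, (200, 2))
    y = np.sin(3*X[:, 0]) * np.cos(2*X[:, 1])

    # one-hidden-layer backpropagation network as the neuro-response surface
    n_h = 20
    W1 = rng.normal(0, 0.5, (2, n_h)); b1 = np.zeros(n_h)
    W2 = rng.normal(0, 0.5, (n_h, 1)); b2 = np.zeros(1)
    lr = 0.05
    for epoch in range(5000):
        H = np.tanh(X @ W1 + b1)                 # hidden layer
        pred = (H @ W2 + b2).ravel()
        err = pred - y
        # backpropagation of the squared-error gradient
        gW2 = H.T @ err[:, None] / len(y)
        gb2 = err.mean(keepdims=True)
        dH = (err[:, None] @ W2.T) * (1 - H**2)
        gW1 = X.T @ dH / len(y); gb1 = dH.mean(axis=0)
        W2 -= lr*gW2; b2 -= lr*gb2; W1 -= lr*gW1; b1 -= lr*gb1

    print(np.mean(err**2))   # the cheap surrogate can now stand in for the solver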
Plasma measurement by optical visualization and triple probe method under high-speed impact
NASA Astrophysics Data System (ADS)
Sakai, T.; Umeda, K.; Kinoshita, S.; Watanabe, K.
2017-02-01
High-speed impacts on spacecraft by space debris pose a threat. When a high-speed projectile collides with a target, it is conceivable that the heat created by the impact causes severe damage at the impact point. Investigation of the temperature is necessary for elucidation of high-speed impact phenomena. However, it is very difficult to measure the temperature with standard methods for two main reasons. One reason is that a thermometer placed on the target is instantaneously destroyed upon impact. The other reason is that there is not enough time resolution to measure the transient temperature changes. In this study, the measurement of plasma induced by high-speed impact was investigated to estimate temperature changes near the impact point. High-speed impact experiments were performed with a vertical gas gun. The projectile speed was approximately 700 m/s, and the target material was A5052. The experimental data to calculate the plasma parameters of electron temperature and electron density were measured by the triple probe method. In addition, the diffusion behavior of the plasma was observed by an optical visualization technique using a high-speed camera. The frame rate and the exposure time were 260 kfps and 1.0 μs, respectively. These images are considered to be one proof of the validity of the plasma measurement. The experimental results showed that plasma signals were detected for around 70 μs, and the rising phase of the waveform was in good agreement with the timing of the optical visualization image when the plasma arrived at the tip of the triple probe.
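For context, one common form of the ideal triple-probe relation (assuming equal probe areas and Maxwellian electrons) can be solved numerically for the electron temperature; whether the authors used exactly this form is an assumption:

    import numpy as np
    from scipy.optimize import brentq

    def electron_temperature(Vd2, Vd3):
        # ideal triple-probe balance: (1-exp(-Vd2/Te))/(1-exp(-Vd3/Te)) = 1/2, Te in eV
        f = lambda Te: (1 - np.exp(-Vd2/Te)) / (1 - np.exp(-Vd3/Te)) - 0.5
        return brentq(f, 0.01, 100.0)

    Te = electron_temperature(Vd2=3.0, Vd3=30.0)   # illustrative probe voltages
    print(Te, 3.0/np.log(2))                       # large-bias limit: Te ~ Vd2/ln 2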
A new treatment of nonlocality in scattering process
NASA Astrophysics Data System (ADS)
Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.
2018-01-01
Nonlocality in the scattering potential leads to an integro-differential equation, in which nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the (r, r′)-dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term to a homogeneous term. The effective local potential in this equation turns out to be energy independent, but has relative angular momentum dependence. This method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off 12C, 56Fe and 100Mo nuclei are calculated with this method in the low-energy region (up to 10 MeV) and are found to be in reasonable accord with experiment.
A cross-sectional study of well water arsenic and child IQ in Maine schoolchildren
2014-01-01
Background In recent studies in Bangladesh and elsewhere, exposure to arsenic (As) via drinking water is negatively associated with performance-related aspects of child intelligence (e.g., Perceptual Reasoning, Working Memory) after adjustment for social factors. Because findings are not easily generalizable to the US, we examine this relation in a US population. Methods In 272 children in grades 3–5 from three Maine school districts, we examine associations between drinking water As (WAs) and intelligence (WISC-IV). Results On average, children had resided in their current home for 7.3 years (approximately 75% of their lives). In unadjusted analyses, household well WAs is associated with decreased scores on most WISC-IV Indices. With adjustment for maternal IQ and education, HOME environment, school district and number of siblings, WAs remains significantly negatively associated with Full Scale IQ and Perceptual Reasoning, Working Memory and Verbal Comprehension scores. Compared to those with WAs < 5 μg/L, exposure to WAs ≥ 5 μg/L was associated with reductions of approximately 5–6 points in both Full Scale IQ (p < 0.01) and most Index scores (Perceptual Reasoning, Working Memory, Verbal Comprehension, all p’s < 0.05). Both maternal IQ and education were associated with lower levels of WAs, possibly reflecting behaviors (e.g., water filters, residential choice) limiting exposure. Both WAs and maternal measures were associated with school district. Conclusions The magnitude of the association between WAs and child IQ raises the possibility that levels of WAs ≥ 5 μg/L, levels that are not uncommon in the United States, pose a threat to child development. PMID:24684736
Accuracy of theory for calculating electron impact ionization of molecules
NASA Astrophysics Data System (ADS)
Chaluvadi, Hari Hara Kumar
The study of electron impact single ionization of atoms and molecules has provided valuable information about fundamental collisions. The most detailed information is obtained from triple differential cross sections (TDCS) in which the energy and momentum of all three final state particles are determined. These cross sections are much more difficult for theory since the detailed kinematics of the experiment become important. There are many theoretical approximations for ionization of molecules. One of the successful methods is the molecular 3-body distorted wave (M3DW) approximation. One of the strengths of the DW approximation is that it can be applied for any energy and any size molecule. One of the approximations that has been made to significantly reduce the required computer time is the OAMO (orientation averaged molecular orbital) approximation. In this dissertation, the accuracy of the M3DW-OAMO is tested for different molecules. Surprisingly, the M3DW-OAMO approximation yields reasonably good agreement with experiment for ionization of H2 and N2. On the other hand, the M3DW-OAMO results for ionization of CH4, NH3 and DNA derivative molecules did not agree very well with experiment. Consequently, we proposed the M3DW with a proper average (PA) calculation. In this dissertation, it is shown that the M3DW-PA calculations for CH4 and SF6 are in much better agreement with experimental data than the M3DW-OAMO results.
An improved method for predicting the effects of flight on jet mixing noise
NASA Technical Reports Server (NTRS)
Stone, J. R.
1979-01-01
The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at high jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature included were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. It is shown that the new method agrees better with the data base than a recently proposed SAE method.
Registration of organs with sliding interfaces and changing topologies
NASA Astrophysics Data System (ADS)
Berendsen, Floris F.; Kotte, Alexis N. T. J.; Viergever, Max A.; Pluim, Josien P. W.
2014-03-01
Smoothness and continuity assumptions on the deformation field in deformable image registration do not hold for applications where the imaged objects have sliding interfaces. Recent extensions to deformable image registration that accommodate sliding motion of organs are limited to sliding along approximately planar surfaces or cannot model sliding that changes the topological configuration in the case of multiple organs. We propose a new extension to free-form image registration that is not limited in this way. Our method uses a transformation model that consists of uniform B-spline transformations for each organ region separately, based on a segmentation of one image. Since this model can create overlapping regions or gaps between regions, we introduce a penalty term that minimizes this undesired effect. The penalty term acts on the surfaces of the organ regions and is optimized simultaneously with the image similarity. To evaluate our method, registrations were performed on publicly available inhale-exhale CT scans for which the performance of other methods is known. Target registration errors are computed on dense landmark sets that are available with these datasets. On these data our method outperforms the other methods in terms of target registration error and, where applicable, also in terms of overlap and gap volumes. The other methods' approximation of sliding motion along planar surfaces is reasonably well suited to the motion present in the lung data. The ability of our method to handle sliding along curved boundaries and changing region topology configurations was demonstrated on synthetic images.
Parrish, Randall R; Thirlwall, Matthew F; Pickford, Chris; Horstwood, Matthew; Gerdes, Axel; Anderson, James; Coggon, David
2006-02-01
Accidental exposure to depleted or enriched uranium may occur in a variety of circumstances. There is a need to quantify such exposure, with the possibility that the testing may post-date exposure by months or years. Therefore, it is important to develop a very sensitive test to measure precisely the isotopic composition of uranium in urine at low levels of concentration. The results of an interlaboratory comparison using sector field (SF)-inductively coupled plasma-mass spectrometry (ICP-MS) and multiple collector (MC)-ICP-MS for the measurement of uranium concentration and 235U/238U and 236U/238U isotopic ratios of human urine samples are presented. Three urine samples were verified to contain uranium at 1-5 ng L(-1) and shown to have natural uranium isotopic composition. Portions of these urine batches were doped with depleted uranium (DU) containing small quantities of 236U, and the solutions were split into 100 mL and 400 mL aliquots that were subsequently measured blind by three laboratories. All methods investigated were able to measure 235U/238U accurately with precisions of approximately 0.5% to approximately 4%, but only selected MC-ICP-MS methods were capable of consistently analyzing 236U/238U to reasonable precision at the approximately 20 fg L(-1) level of 236U abundance. Isotope dilution using a 233U tracer demonstrates the ability to measure concentrations to better than +/-4% with the MC-ICP-MS method, though sample heterogeneity in urine samples was shown to be problematic in some cases. MC-ICP-MS outperformed SF-ICP-MS methods, as was expected. The MC-ICP-MS methodology described is capable of measuring to approximately 1% precision the 235U/238U of any sample of human urine over the entire range of uranium abundance down to <1 ng L(-1), and detecting very small amounts of DU contained therein.
Learning deep similarity in fundus photography
NASA Astrophysics Data System (ADS)
Chudzik, Piotr; Al-Diri, Bashir; Caliva, Francesco; Ometto, Giovanni; Hunter, Andrew
2017-02-01
Similarity learning is one of the most fundamental tasks in image analysis. The ability to extract similar images in the medical domain as part of content-based image retrieval (CBIR) systems has been researched for many years. The vast majority of methods used in CBIR systems are based on hand-crafted feature descriptors. The approximation of a similarity mapping for medical images is difficult due to the big variety of pixel-level structures of interest. In fundus photography (FP) analysis, a subtle difference in, e.g., lesion and vessel shape and size can result in a different diagnosis. In this work, we demonstrated how to learn a similarity function for image patches derived directly from FP image data, without the need for manually designed feature descriptors. We used a convolutional neural network (CNN) with a novel architecture adapted for similarity learning to accomplish this task. Furthermore, we explored and studied multiple CNN architectures. We show that our method can approximate the similarity between FP patches more efficiently and accurately than state-of-the-art feature descriptors, including SIFT and SURF, using a publicly available dataset. Finally, we observe that our approach, which is purely data-driven, learns that features such as vessel calibre and orientation are important discriminative factors, which resembles the way humans reason about similarity. To the best of the authors' knowledge, this is the first attempt to approximate a visual similarity mapping in FP.
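A minimal siamese-style sketch of patch similarity learning (a hypothetical architecture and patch size, not the authors' novel CNN), using a contrastive loss that pulls embeddings of similar patches together and pushes dissimilar ones apart:

    import torch
    import torch.nn as nn

    class PatchEncoder(nn.Module):
        # small CNN mapping a fundus patch to an embedding vector
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(), nn.Linear(32*8*8, dim))
        def forward(self, x):
            return self.net(x)

    def contrastive_loss(z1, z2, same, margin=1.0):
        # same=1: pull embeddings together; same=0: push apart up to the margin
        d = torch.norm(z1 - z2, dim=1)
        return (same * d**2 + (1 - same) * torch.clamp(margin - d, min=0)**2).mean()

    enc = PatchEncoder()
    x1, x2 = torch.randn(8, 1, 32, 32), torch.randn(8, 1, 32, 32)
    same = torch.randint(0, 2, (8,)).float()   # 1 = similar pair, 0 = dissimilar
    loss = contrastive_loss(enc(x1), enc(x2), same)
    loss.backward()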
2013-01-01
Locked Nucleic Acids (LNAs) are RNA analogues with an O2′-C4′ methylene bridge which locks the sugar into a C3′-endo conformation. This enhances hybridization to DNA and RNA, making LNAs useful in microarrays and potential therapeutics. Here, the LNA L(CAAU) provides a simplified benchmark for testing the ability of molecular dynamics (MD) to approximate nucleic acid properties. LNA χ torsions and partial charges were parametrized to create AMBER parm99_LNA. The revisions were tested by comparing MD predictions with AMBER parm99 and parm99_LNA against a 200 ms NOESY NMR spectrum of L(CAAU). NMR indicates an A-form equilibrium ensemble. In 3000 ns simulations starting with an A-form structure, parm99_LNA and parm99 provide 66% and 35% agreement, respectively, with NMR NOE volumes and 3J-couplings. In simulations of L(CAAU) starting with all χ torsions in a syn conformation, only parm99_LNA is able to repair the structure. This implies that methods for parametrizing force fields for nucleic acid mimics can reasonably approximate key interactions and that parm99_LNA will improve the reliability of MD studies for systems with LNA. A method for approximating the χ population distribution on the basis of base-to-sugar NOEs is also introduced. PMID:24377321
VISAR Analysis in the Frequency Domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, D. H.; Specht, P.
2017-05-18
VISAR measurements are typically analyzed in the time domain, where velocity is approximately proportional to fringe shift. Moving to the frequency domain clarifies the limitations of this approximation and suggests several improvements. For example, optical dispersion preserves high-frequency information, so a zero-dispersion (air delay) interferometer does not provide optimal time resolution. Combined VISAR measurements can also improve time resolution. With adequate bandwidth and reasonable noise levels, it is quite possible to achieve better resolution than the VISAR approximation allows.
Extended abstract: Managing disjunction for practical temporal reasoning
NASA Technical Reports Server (NTRS)
Boddy, Mark; Schrag, Bob; Carciofini, Jim
1992-01-01
One of the problems that must be dealt with in either a formal or implemented temporal reasoning system is the ambiguity arising from uncertain information. Lack of precise information about when events happen leads to uncertainty regarding the effects of those events. Incomplete information and nonmonotonic inference lead to situations where there is more than one set of possible inferences, even when there is no temporal uncertainty at all. In an implemented system, this ambiguity is a computational problem as well as a semantic one. In this paper, we discuss some of the sources of this ambiguity, which we treat as explicit disjunction, in the sense that ambiguous information can be interpreted as defining a set of possible inferences. We describe the application of several techniques for managing disjunction in an implementation of Dean's Time Map Manager. Briefly, the disjunction is either removed by limiting the expressive power of the system, or approximated by a weaker form of representation that subsumes the disjunction. We use a combination of these methods to implement an expressive and efficient temporal reasoning engine that performs sound inference in accordance with a well-defined formal semantics.
Examining Targets for HIV Prevention: Intravaginal Practices in Urban Lusaka, Zambia
Chisembele, Maureen; Mumbi, Miriam; Malupande, Emeria; Jones, Deborah
2014-01-01
Intravaginal practices (IVP) are the introduction of products inside the vagina for hygienic, health, or sexuality reasons. The influence of men and Alengizis, traditional marriage counselors for girls, in promoting IVP has not been explored. We conducted gender-concordant focus groups and key informant interviews with Alengizis. The responses were grouped into three themes: (1) cultural norms, (2) types of and reasons for IVP, and (3) health consequences. We found that IVP were used by all participants in our sample and were taught from generation to generation by friends, relatives, or Alengizis. The reasons for women to engage in IVP were hygienic, though men expect women to engage in IVP to enhance sexual pleasure. Approximately 40% of women are aware that IVP can facilitate genital infections, but felt they would not feel clean if they discontinued IVP. All men were unaware of the vaginal damage caused by IVP, and were concerned about the loss of sexual pleasure if women discontinued IVP. Despite the health risks of IVP, IVP continue to be widespread in Zambia and an integral component of hygiene and sexuality. The frequency of IVP mandates exploration into methods to decrease or ameliorate their use as an essential component of HIV prevention. PMID:24568672
Final Report of the Project "From the finite element method to the virtual element method"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manzini, Gianmarco; Gyrya, Vitaliy
The Finite Element Method (FEM) is a powerful numerical tool that is used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDE) of elliptic type suitable for polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equations.
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Wang, J
2016-06-15
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm is used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
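A drastically simplified stand-in for the selection step (an additive utility over normalized sensitivity/specificity grades; the actual evidential reasoning aggregation and rule weights in SMOLER are more elaborate):

    import numpy as np

    rng = np.random.default_rng(2)
    # Pareto set: each row holds (sensitivity, specificity) of one feasible model
    pareto = rng.uniform(0.6, 0.95, (100, 2))

    # normalize each attribute onto [0, 1] grades, then aggregate with rule weights
    w = np.array([0.5, 0.5])                  # assumed relative importance
    grades = (pareto - pareto.min(0)) / (pareto.max(0) - pareto.min(0))
    utility = grades @ w
    print(pareto[np.argmax(utility)])         # the selected 'optimal' solution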
Temporal Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. D.; Thomas, B. C.
2004-01-01
In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and the highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re_tau = 180 and Re_tau = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
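A hedged one-dimensional sketch of the temporal-filter idea: a causal exponential time filter followed by truncated van Cittert deconvolution, the series construction underlying approximate deconvolution (the filter choice and parameters are illustrative, not the TADM's actual formulation):

    import numpy as np

    def time_filter(u, alpha):
        # causal first-order time filter: ub[n] = (1-alpha)*ub[n-1] + alpha*u[n]
        ub = np.empty_like(u)
        ub[0] = u[0]
        for n in range(1, len(u)):
            ub[n] = (1 - alpha) * ub[n-1] + alpha * u[n]
        return ub

    def approx_deconvolution(ub, alpha, order=3):
        # van Cittert series u* = sum_k (I - G)^k ub, truncated at 'order'
        ustar = np.zeros_like(ub)
        term = ub.copy()
        for _ in range(order + 1):
            ustar += term
            term = term - time_filter(term, alpha)   # apply (I - G)
        return ustar

    t = np.linspace(0, 10, 2000)
    u = np.sin(2*np.pi*t) + 0.3*np.sin(12*np.pi*t)   # resolved + "subfilter" content
    ub = time_filter(u, alpha=0.05)
    ustar = approx_deconvolution(ub, alpha=0.05)
    print(np.abs(u - ub).max(), np.abs(u - ustar).max())   # deconvolution recovers detail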
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2008-01-01
A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane; ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial form of the flutter equations of motion (EOM) is obtained. A technique for recasting the matrix-polynomial form of the flutter EOM into a first-order form is also presented, which can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are as follows: first, the 'workhorse' methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin and along the real axis, or to accurately find roots farther away from the imaginary axis of the complex plane; second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits to some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC Roger's s-plane method, and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.
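A minimal sketch of the fitting step, assuming one GAF element tabulated at reduced frequencies k_j and a least-squares fit of real matrix-polynomial coefficients evaluated on the imaginary axis s = ik (the data values are invented for illustration):

    import numpy as np

    k = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])   # reduced frequencies
    Q = np.exp(-1j * k) * (1.0 + 0.5j * k)              # stand-in tabular GAF data

    s = 1j * k
    M = np.column_stack([np.ones_like(s), s, s**2])     # Q(s) ~ A0 + A1 s + A2 s^2
    # stack real and imaginary parts so the fitted coefficients come out real
    A, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                            np.concatenate([Q.real, Q.imag]), rcond=None)
    print(A)    # A0, A1, A2 would then be combined with the structural matrices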
NASA Astrophysics Data System (ADS)
Sahoo, B. K.; Singh, Yashpal
2017-06-01
The parity- and time-reversal-violating electric dipole moment (EDM) of 171Yb is calculated accounting for electron-correlation effects over the Dirac-Hartree-Fock method in the relativistic Rayleigh-Schrödinger many-body perturbation theory, with the second-order [MBPT(2)] and third-order [MBPT(3)] approximations, and with two variants of all-order relativistic many-body approaches: the random phase approximation (RPA) and the coupled-cluster method with singles and doubles (CCSD). We consider the electron-nucleus tensor-pseudotensor (T-PT) and nuclear Schiff moment (NSM) interactions as the predominant sources that induce an EDM in a diamagnetic atomic system. Our CCSD results for the EDM (d_a) of 171Yb due to the T-PT and NSM interactions are d_a = 4.85(6) x 10^-20 <sigma> C_T |e| cm and d_a = 2.89(4) x 10^-17 S/(|e| fm^3), respectively, where C_T is the T-PT coupling constant and S is the NSM. These values differ significantly from earlier calculations. The difference is attributed to large correlation effects arising through non-RPA types of interactions among the electrons in this atom, observed by analyzing the differences between the RPA and CCSD results. This has been further scrutinized from the MBPT(2) and MBPT(3) results, and their roles have been demonstrated explicitly.
Understanding why women seek abortions in the US
2013-01-01
Background The current political climate with regards to abortion in the US, along with the economic recession may be affecting women’s reasons for seeking abortion, warranting a new investigation into the reasons why women seek abortion. Methods Data for this study were drawn from baseline quantitative and qualitative data from the Turnaway Study, an ongoing, five-year, longitudinal study evaluating the health and socioeconomic consequences of receiving or being denied an abortion in the US. While the study has followed women for over two full years, it relies on the baseline data which were collected from 2008 through the end of 2010. The sample included 954 women from 30 abortion facilities across the US who responded to two open ended questions regarding the reasons why they wanted to terminate their pregnancy approximately one week after seeking an abortion. Results Women’s reasons for seeking an abortion fell into 11 broad themes. The predominant themes identified as reasons for seeking abortion included financial reasons (40%), timing (36%), partner related reasons (31%), and the need to focus on other children (29%). Most women reported multiple reasons for seeking an abortion crossing over several themes (64%). Using mixed effects multivariate logistic regression analyses, we identified the social and demographic predictors of the predominant themes women gave for seeking an abortion. Conclusions Study findings demonstrate that the reasons women seek abortion are complex and interrelated, similar to those found in previous studies. While some women stated only one factor that contributed to their desire to terminate their pregnancies, others pointed to a myriad of factors that, cumulatively, resulted in their seeking abortion. As indicated by the differences we observed among women’s reasons by individual characteristics, women seek abortion for reasons related to their circumstances, including their socioeconomic status, age, health, parity and marital status. It is important that policy makers consider women’s motivations for choosing abortion, as decisions to support or oppose such legislation could have profound effects on the health, socioeconomic outcomes and life trajectories of women facing unwanted pregnancies. PMID:23829590
Software life cycle methodologies and environments
NASA Technical Reports Server (NTRS)
Fridge, Ernest
1991-01-01
Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology: environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an Intelligent User Interface for cost avoidance in setting up operational computer runs, the Framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and methodologies, such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert systems problems when only approximate truths are known.
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
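A hedged sketch of the detrend-then-interpolate idea for one climate surface, with inverse-distance weighting standing in for the kriging of residuals (the station data, lapse rate, and grid point are synthetic):

    import numpy as np

    rng = np.random.default_rng(3)
    # station data: (x, y) location, elevation z, measured air temperature T
    xy = rng.uniform(0, 10, (8, 2))
    z = rng.uniform(1000, 3000, 8)
    T = 25.0 - 0.0065 * z + rng.normal(0, 0.5, 8)   # lapse-rate signal + local noise

    # 1) detrend: fit a linear elevational gradient to the station data
    G = np.column_stack([np.ones_like(z), z])
    coef, *_ = np.linalg.lstsq(G, T, rcond=None)
    resid = T - G @ coef

    # 2) interpolate residuals over the grid (IDW stand-in for detrended kriging)
    def idw(p, pts, vals, eps=1e-6):
        w = 1.0 / (np.linalg.norm(pts - p, axis=1) + eps)**2
        return np.sum(w * vals) / np.sum(w)

    # 3) retrend: add the elevational trend back using the DEM elevation
    grid_p, grid_z = np.array([5.0, 5.0]), 2000.0
    T_grid = coef[0] + coef[1] * grid_z + idw(grid_p, xy, resid)
    print(T_grid)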
Gaussian representation of high-intensity focused ultrasound beams.
Soneson, Joshua E; Myers, Matthew R
2007-11-01
A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
Müsken, Mathias; Di Fiore, Stefano; Römling, Ute; Häussler, Susanne
2010-08-01
A major reason for bacterial persistence during chronic infections is the survival of bacteria within biofilm structures, which protect cells from environmental stresses, host immune responses and antimicrobial therapy. Thus, there is concern that laboratory methods developed to measure the antibiotic susceptibility of planktonic bacteria may not be relevant to chronic biofilm infections, and it has been suggested that alternative methods should test antibiotic susceptibility within a biofilm. In this paper, we describe a fast and reliable protocol for using 96-well microtiter plates for the formation of Pseudomonas aeruginosa biofilms; the method is easily adaptable for antimicrobial susceptibility testing. This method is based on bacterial viability staining in combination with automated confocal laser scanning microscopy. The procedure simplifies qualitative and quantitative evaluation of biofilms and has proven to be effective for standardized determination of antibiotic efficiency on P. aeruginosa biofilms. The protocol can be performed within approximately 60 h.
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-01-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool for investigating biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers, and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517
[Increasing Number of Road Traffic Fatalities in Germany - Turnaround or Snap-Shot].
Brand, S; Schmucker, U; Lob, G; Haasper, C; Juhra, C; Hell, W; Rieth, P; Matthes, G
2017-04-01
Introduction: For the first time in 20 years, the number of road accident fatalities on German roads increased in 2011 compared with earlier periods. Methods: The present paper, submitted by the expert group for accident prevention, investigates and discusses possible reasons for the observed increase in road traffic fatalities. Results: Climate changes, changes in the economic environment, and technological progress in car and passenger safety are identified as possible reasons for the observed increase. Discussion: Under the "Decade of Action for Road Safety" initiated by the UN and coordinated by the WHO, the overall goal is a worldwide reduction of accident-related road fatalities. Prognostic calculations, however, predict an asymptotic approach to a limit of road fatalities; to achieve a reduction by half by 2020, intense collaboration and disproportionate expenditure are necessary. Conclusion: From the authors' point of view, the current increase in traffic fatalities in Germany is a snapshot rather than a turnaround. Georg Thieme Verlag KG Stuttgart · New York.
Synchrony in the onset of mental-state reasoning: evidence from five cultures.
Callaghan, Tara; Rochat, Philippe; Lillard, Angeline; Claux, Mary Louise; Odden, Hal; Itakura, Shoji; Tapanya, Sombat; Singh, Saraswati
2005-05-01
Over the past 20 years, developmental psychologists have shown considerable interest in the onset of a theory of mind, typically marked by children's ability to pass false-belief tasks. In Western cultures, children pass such tasks around the age of 5 years, with variations of the tasks producing small changes in the age at which they are passed. Knowing whether this age of transition is common across diverse cultures is important to understanding what causes this development. Cross-cultural studies have produced mixed findings, possibly because of varying methods used in different cultures. The present study used a single procedure to measure false-belief understanding in five cultures: Canada, India, Peru, Samoa, and Thailand. With a standardized procedure, we found synchrony in the onset of mentalistic reasoning, with children crossing the false-belief milestone at approximately 5 years of age in every culture studied. The meaning of this synchrony for the origins of mental-state understanding is discussed.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
NASA Astrophysics Data System (ADS)
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
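Since the abstract relies on a handful of symmetric Gauss-Seidel sweeps rather than an exact solve of the global error problem, a minimal sketch of that kernel may help. The small SPD system below merely stands in for the assembled hierarchical error problem.

```python
import numpy as np

def symmetric_gauss_seidel(A, b, x0, sweeps=3):
    """A few symmetric Gauss-Seidel sweeps for A x = b.

    Each sweep is a forward pass followed by a backward pass; per the
    abstract, a handful of such sweeps already yields a usable
    approximation to the error.
    """
    x = x0.copy()
    n = len(b)
    for _ in range(sweeps):
        for order in (range(n), reversed(range(n))):  # forward, then backward
            for i in order:
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
    return x

# Toy SPD system standing in for the hierarchical error problem (assumed data)
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
e_approx = symmetric_gauss_seidel(A, b, np.zeros(3))
```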
Learning and tuning fuzzy logic controllers through reinforcements.
Berenji, H R; Khedkar, P
1992-01-01
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous cart-pole balancing schemes in speed of learning and robustness to changes in the dynamic system's parameters.
Dynamic Modeling and Testing of MSRR-1 for Use in Microgravity Environments Analysis
NASA Technical Reports Server (NTRS)
Gattis, Christy; LaVerde, Bruce; Howell, Mike; Phelps, Lisa H. (Technical Monitor)
2001-01-01
Delicate microgravity science is unlikely to succeed on the International Space Station if vibratory and transient disturbers corrupt the environment. An analytical approach to compute the on-orbit acceleration environment at science experiment locations within a standard payload rack resulting from these disturbers is presented. This approach has been grounded by correlation and comparison to test verified transfer functions. The method combines the results of finite element and statistical energy analysis using tested damping and modal characteristics to provide a reasonable approximation of the total root-mean-square (RMS) acceleration spectra at the interface to microgravity science experiment hardware.
Effect of lithological heterogeneity of bitumen sandstones on SAGD reservoir development
NASA Astrophysics Data System (ADS)
Korolev, E. A.; Usmanov, S. A.; Nikolaev, D. S.; Gabdelvaliyeva, R. R.
2018-05-01
The article describes a heavy oil field developed by the SAGD method. During development planning, the lithological heterogeneity of the reservoir must be taken into account. The objective of this work is to identify the distribution of lithological heterogeneities and their influence on oil production. To this end, core samples were studied and the heterogeneity was identified. The properties and approximate geometry of the lithological objects were then characterized, and the effect of the heterogeneity on heat propagation and fluid production was analyzed. Finally, recommendations were made for the study of such heterogeneities in other deposits with similar geology.
Flow-Tagging Velocimetry for Hypersonic Flows Using Fluorescence of Nitric Oxide
NASA Technical Reports Server (NTRS)
Danehy, P. M.; OByrne, S.; Houwing, A. F. P.
2001-01-01
We investigate a new type of flow-tagging velocimetry technique for hypersonic flows. The technique involves exciting a thin line of nitric oxide molecules with a laser beam and then, after some delay, acquiring an image of the displaced line. One component of velocity is determined from the time of flight. This method is applied to measure the velocity profile in a Mach 8.5 laminar, hypersonic boundary layer in the Australian National University's T2 free-piston shock tunnel. The velocity is measured with an uncertainty of approximately 2%. Comparison with a CFD simulation of the flow shows reasonable agreement.
Restoration of longitudinal images.
Hu, Y; Frieden, B R
1988-01-15
In this paper, a method of restoring longitudinal images is developed. By using the transfer function for longitudinal objects, and inverse filtering, a longitudinal image may be restored. The Fourier theory and sampling theorems for transverse images cannot be used directly in the longitudinal case; a modification and a reasonable approximation are introduced. We have numerically established a necessary relationship between the just-resolved longitudinal separation (after inverse filtering), the noise level, and the taking conditions of object distance and lens diameter. An empirical formula is also found to fit the computed results well. This formula may be of use for designing optical systems which are to image longitudinal details, such as in robotics or microscopy.
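A minimal 1-D sketch of restoration by regularized inverse filtering, the operation described above. The Gaussian kernel is a stand-in for the longitudinal transfer function (which in the paper depends on object distance and lens diameter), and eps guards the division near zeros of the transfer function so noise is not amplified without bound.

```python
import numpy as np

def inverse_filter(blurred, H, eps=1e-3):
    """Restore a 1-D signal given its (assumed known) transfer function H."""
    B = np.fft.fft(blurred)
    restored_spectrum = B * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(restored_spectrum))

n = 256
x = np.zeros(n)
x[100] = x[130] = 1.0                            # two longitudinal "details"
h = np.exp(-np.linspace(-3, 3, n) ** 2)
h /= h.sum()                                     # normalized blur kernel
H = np.fft.fft(np.fft.ifftshift(h))              # assumed transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))
restored = inverse_filter(blurred, H)
```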
Sponer, Jiří; Sponer, Judit E; Mládek, Arnošt; Jurečka, Petr; Banáš, Pavel; Otyepka, Michal
2013-12-01
Base stacking is a major interaction shaping and stabilizing nucleic acids. During the last decades, base stacking has been extensively studied by experimental and theoretical methods. Advanced quantum-chemical calculations clarified that base stacking is a common interaction, which in the first approximation can be described as a combination of the three most basic contributions to molecular interactions, namely, electrostatic interaction, London dispersion attraction and short-range repulsion. There is no specific π-π energy term, associated with the delocalized π electrons of the aromatic rings, that cannot be described by these contributions. Base stacking can be reasonably approximated by simple molecular simulation methods based on well-calibrated common force fields, although the force fields do not include the nonadditivity of stacking, the anisotropy of dispersion interactions, and some other effects. However, the description of stacking association in the condensed phase and understanding of the stacking role in biomolecules remain a difficult problem, as the net base stacking forces always act in a complex and context-specific environment. Moreover, the stacking forces are balanced with many other energy contributions. Differences in the definition of stacking in experimental and theoretical studies are explained. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Garza, Alejandro J.; Bulik, Ireneusz W.; Alencar, Ana G. Sousa; Sun, Jianwei; Perdew, John P.; Scuseria, Gustavo E.
2016-04-01
Contrary to standard coupled cluster doubles (CCD) and Brueckner doubles (BD), singlet-paired analogues of CCD and BD (denoted here as CCD0 and BD0) do not break down when static correlation is present, but neglect substantial amounts of dynamic correlation. In fact, CCD0 and BD0 do not account for any contributions from multielectron excitations involving only same-spin electrons at all. We exploit this feature to add - without introducing double counting, self-interaction, or increase in cost - the missing correlation to these methods via meta-GGA (generalised gradient approximation) density functionals (Tao-Perdew-Staroverov-Scuseria and strongly constrained and appropriately normed). Furthermore, we improve upon these CCD0+DFT blends by invoking range separation: the short- and long-range correlations absent in CCD0/BD0 are evaluated with density functional theory and the direct random phase approximation, respectively. This corrects the description of long-range van der Waals forces. Comprehensive benchmarking shows that the combinations presented here are very accurate for weakly correlated systems, while also providing a reasonable description of strongly correlated problems without resorting to symmetry breaking.
Studies of porous anodic alumina using spin echo scattering angle measurement
NASA Astrophysics Data System (ADS)
Stonaha, Paul
The properties of the neutron make it a useful probe in scattering experiments. We have developed a method, dubbed SESAME, in which specially designed magnetic fields encode the scattering signal of a neutron beam into the beam's average Larmor phase. A geometry is presented that delivers the correct Larmor phase (to first order), and it is shown that reasonable variations of the geometry do not significantly affect the net Larmor phase. The solenoids are designed using an analytic approximation. Comparison of this approximate function with finite element calculations and Hall probe measurements confirms its validity, allowing for fast computation of the magnetic fields. The coils were built and tested in-house on the NBL-4 instrument, a polarized neutron reflectometer whose construction is another major portion of this work. Neutron scattering experiments using the solenoids are presented, and the scattering signal from porous anodic alumina is investigated in detail. A model using the Born approximation is developed and compared against the scattering measurements. Using the model, we define the necessary degree of alignment of such samples in a SESAME measurement, and we show how the signal retrieved using SESAME is sensitive to the range of detectable momentum transfer.
ERIC Educational Resources Information Center
Ting, Laura
2011-01-01
Limited research exists on social work students' level of depression and help-seeking beliefs. This study empirically examined the rates of depression among 215 BSW students and explored students' reasons for not using mental health services. Approximately 50% scored at or above the Center for Epidemiologic Studies Depression Scale cutoff;…
NASA Astrophysics Data System (ADS)
Tian, Xin; Li, Hua; Jiang, Xiaoyu; Xie, Jingping; Gore, John C.; Xu, Junzhong
2017-02-01
Two diffusion-based approaches, the CG (constant gradient) and FEXI (filtered exchange imaging) methods, have been previously proposed for measuring the transcytolemmal water exchange rate constant kin, but their accuracy and feasibility have not been comprehensively evaluated and compared. In this work, both computer simulations and cell experiments in vitro were performed to evaluate these two methods. Simulations were done with different cell diameters (5, 10, 20 μm), a broad range of kin values (0.02-30 s-1) and different SNRs, and the simulated kin values were directly compared with the ground-truth values. Human leukemia K562 cells were cultured and treated with saponin to selectively change cell transmembrane permeability. The agreement between the kin values measured by the two methods was also evaluated. The results suggest that, without noise, the CG method provides reasonably accurate estimation of kin, especially when it is smaller than 10 s-1, which is in the typical physiological range of many biological tissues. Although the FEXI method overestimates kin even with corrections for the effects of the extracellular water fraction, it provides reasonable estimates at practical SNRs and, more importantly, the fitted apparent exchange rate AXR shows an approximately linear dependence on the ground-truth kin. In conclusion, either the CG or the FEXI method provides a sensitive means to characterize variations in the transcytolemmal water exchange rate constant kin, although the accuracy and specificity are usually compromised. The non-imaging CG method provides more accurate estimation of kin, but is limited to a large volume of interest. Although the accuracy of FEXI is compromised by the extracellular volume fraction, it is capable of spatially mapping kin in practice.
NASA Astrophysics Data System (ADS)
Yu, W.; Gao, C.-Z.; Zhang, Y.; Zhang, F. S.; Hutton, R.; Zou, Y.; Wei, B.
2018-03-01
We calculate electron capture and ionization cross sections of N2 impacted by the H+ projectile at keV energies. To this end, we employ time-dependent density-functional theory coupled nonadiabatically to molecular dynamics. To avoid the explicit treatment of the complex density matrix in the calculation of cross sections, we propose an approximate method based on the assumption of a constant ionization rate over the period of the projectile passing the absorbing boundary. Our results agree reasonably well with experimental data and semi-empirical results within the measurement uncertainties in the considered energy range. The discrepancies are mainly attributed to the inadequate description of the exchange-correlation functional and the crude constant-ionization-rate approximation. Although the present approach does not reproduce the experiments quantitatively for collision energies below 10 keV, it is still helpful for calculating total cross sections of ion-molecule collisions within a certain energy range.
NASA Astrophysics Data System (ADS)
Ouahrani, T.; Reshak, A. H.; de La Roza, A. Otero; Mebrouki, M.; Luaña, V.; Khenata, R.; Amrani, B.
2009-12-01
We report results from first-principles density functional calculations using the full-potential linear augmented plane wave (FP-LAPW) method. The generalized gradient approximation (GGA) and the Engel-Vosko generalized gradient approximation (EV-GGA) were used for the exchange-correlation energy in calculations of the structural, electronic, and linear and nonlinear optical properties of the chalcopyrite Ga2PSb compound. The valence band maximum (VBM) is located at the Γv point, and the conduction band minimum (CBM) is located at the Γc point, resulting in a direct band gap of about 0.365 eV for GGA and 0.83 eV for EV-GGA. In comparison with the experimental value (1.2 eV), the EV-GGA calculation gives an energy gap in reasonable agreement with experiment. Spin-orbit coupling has a marginal influence on the optical properties. Ground-state quantities such as the lattice parameters (a, c and u), the bulk modulus B and its pressure derivative B′ are also evaluated.
ElMasry, Gamal; Nakauchi, Shigeki
2016-03-01
A simulation method for approximating the spectral signatures of minced meat samples was developed based on the concentrations and optical properties of the major chemical constituents. Minced beef samples of different compositions were scanned with a near-infrared spectrometer and a hyperspectral imaging system. Chemical compositions determined heuristically and optical properties collected from authenticated references were combined to approximate the samples' spectral signatures. In the short-wave infrared range, the resulting spectrum equals the sum of the absorptions of three individual absorbers, that is, water, protein, and fat. By assuming homogeneous distributions of the main chromophores in the minced samples, the obtained absorption spectra are found to be a linear combination of the absorption spectra of the major chromophores present in the sample. Results revealed that the developed models were good enough to derive spectral signatures of minced meat samples with a reasonable level of robustness, achieving an agreement index above 0.90 and a ratio of performance to deviation above 1.4.
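The linear-combination assumption can be illustrated in a few lines: the sample absorbance is modeled as a concentration-weighted sum of pure-chromophore spectra. The Gaussian bands below are synthetic stand-ins for the authenticated literature spectra of water, protein and fat used in the paper.

```python
import numpy as np

wavelengths = np.linspace(1000, 2500, 300)   # nm, short-wave infrared range

def band(center, width):
    """Synthetic Gaussian absorption band (placeholder spectrum)."""
    return np.exp(-((wavelengths - center) / width) ** 2)

# Assumed pure-chromophore spectra, not the literature data of the paper
eps = {"water": band(1450, 60) + band(1940, 70),
       "protein": band(2050, 80),
       "fat": band(1730, 50) + band(2310, 60)}

conc = {"water": 0.65, "protein": 0.20, "fat": 0.15}   # mass fractions

# Linear mixing: sample absorbance as concentration-weighted sum
absorbance = sum(conc[k] * eps[k] for k in eps)
```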
Sancho-García, J C
2011-09-13
Highly accurate coupled-cluster (CC) calculations with large basis sets have been performed to study the binding energy of the (CH)12, (CH)16, (CH)20, and (CH)24 polyhedral hydrocarbons in two forms, cage-like and planar. We also considered the effect of other minor contributions: core correlation, relativistic corrections, and extrapolations to the limit of the full CC expansion. Thus, chemically accurate values could be obtained for these complicated systems. These nearly exact results are then used to evaluate, in a systematic fashion, the performance of the main approximations (i.e., pure, hybrid, and double-hybrid methods) within density functional theory (DFT). Some commonly used functionals, including the B3LYP model, are affected by large errors, and only those having reduced self-interaction error (SIE), which includes the latest family of conjectured expressions (double hybrids), are able to achieve reasonably low deviations of 1-2 kcal/mol, especially when an estimate for dispersion interactions is also added.
A numerical and experimental study on the nonlinear evolution of long-crested irregular waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goullet, Arnaud; Choi, Wooyoung; Division of Ocean Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701
2011-01-15
The spatial evolution of nonlinear long-crested irregular waves characterized by the JONSWAP spectrum is studied numerically using a nonlinear wave model based on a pseudospectral (PS) method and the modified nonlinear Schrödinger (MNLS) equation. In addition, new laboratory experiments with two different spectral bandwidths are carried out and a number of wave probe measurements are made to validate these two wave models. Strongly nonlinear wave groups are observed experimentally and their propagation and interaction are studied in detail. For the comparison with experimental measurements, the two models need to be initialized with care and the initialization procedures are described. The MNLS equation is found to approximate the wave fields reasonably well for a relatively smaller Benjamin-Feir index, but the phase error increases as the propagation distance increases. The PS model with different orders of nonlinear approximation is solved numerically, and it is shown that the fifth-order model agrees well with our measurements prior to wave breaking for both spectral bandwidths.
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximative, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
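For orientation, here is a sketch of a directional empirical variogram computed from grey values along horizontal sampling lines, the raw quantity the method above relates to point intensity. The paper's exact "scaled" normalization and Bessel-function relation are omitted, and the noise image is a placeholder for a shot-noise realization.

```python
import numpy as np

def directional_variogram(img, lag):
    """Half the mean squared grey-value increment at a horizontal pixel lag."""
    diff = img[:, lag:] - img[:, :-lag]
    return 0.5 * np.mean(diff ** 2)

rng = np.random.default_rng(0)
img = rng.poisson(5.0, size=(128, 128)).astype(float)  # placeholder image
gammas = [directional_variogram(img, lag) for lag in (1, 2, 4, 8)]
```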
Spike solutions in Gierer–Meinhardt model with a time dependent anomaly exponent
NASA Astrophysics Data System (ADS)
Nec, Yana
2018-01-01
Experimental evidence of complex dispersion regimes in natural systems, where the growth of the mean square displacement in time cannot be characterised by a single power, has been accruing for the past two decades. In such processes the exponent γ(t) in ⟨r²⟩ ∼ t^γ(t) at times might be approximated by a piecewise constant function, or it can be a continuous function. Variable order differential equations are an emerging mathematical tool with a strong potential to model these systems. However, variable order differential equations are not tractable by the classic differential equations theory. This contribution illustrates how a classic method can be adapted to gain insight into a system of this type. Herein a variable order Gierer-Meinhardt model is posed, a generic reaction-diffusion system of a chemical origin. With a fixed order this system possesses a solution in the form of a constellation of arbitrarily situated localised pulses, when the components' diffusivity ratio is asymptotically small. The pattern was shown to exist subject to multiple step-like transitions between normal diffusion and sub-diffusion, as well as between distinct sub-diffusive regimes. The analytical approximation obtained permits qualitative analysis of the impact thereof. Numerical solution for typical cross-over scenarios revealed such features as earlier equilibration and non-monotonic excursions before attainment of equilibrium. The method is general and allows for an approximate numerical solution with any reasonably behaved γ(t).
A full-wave Helmholtz model for continuous-wave ultrasound transmission.
Huttunen, Tomi; Malinen, Matti; Kaipio, Jari P; White, Phillip Jason; Hynynen, Kullervo
2005-03-01
A full-wave Helmholtz model of continuous-wave (CW) ultrasound fields may offer several attractive features over widely used partial-wave approximations. For example, many full-wave techniques can be easily adjusted for complex geometries, and multiple reflections of sound are automatically taken into account in the model. To date, however, the full-wave modeling of CW fields in general 3D geometries has been avoided due to the large computational cost associated with the numerical approximation of the Helmholtz equation. Recent developments in computing capacity together with improvements in finite element type modeling techniques are making possible wave simulations in 3D geometries which reach over tens of wavelengths. The aim of this study is to investigate the feasibility of a full-wave solution of the 3D Helmholtz equation for modeling of continuous-wave ultrasound fields in an inhomogeneous medium. The numerical approximation of the Helmholtz equation is computed using the ultraweak variational formulation (UWVF) method. In addition, an inverse problem technique is utilized to reconstruct the velocity distribution on the transducer which is used to model the sound source in the UWVF scheme. The modeling method is verified by comparing simulated and measured fields in the case of transmission of 531 kHz CW fields through layered plastic plates. The comparison shows a reasonable agreement between simulations and measurements at low angles of incidence but, due to mode conversion, the Helmholtz model becomes insufficient for simulating ultrasound fields in plates at large angles of incidence.
Williams, Rebecca J.; Tse, Tony; DiPiazza, Katelyn; Zarin, Deborah A.
2015-01-01
Background Clinical trials that end prematurely (or “terminate”) raise financial, ethical, and scientific concerns. The extent to which the results of such trials are disseminated and the reasons for termination have not been well characterized. Methods and Findings A cross-sectional, descriptive study of terminated clinical trials posted on the ClinicalTrials.gov results database as of February 2013 was conducted. The main outcomes were to characterize the availability of primary outcome data on ClinicalTrials.gov and in the published literature and to identify the reasons for trial termination. Approximately 12% of trials with results posted on the ClinicalTrials.gov results database (905/7,646) were terminated. Most trials were terminated for reasons other than accumulated data from the trial (68%; 619/905), with an insufficient rate of accrual being the lead reason for termination among these trials (57%; 350/619). Of the remaining trials, 21% (193/905) were terminated based on data from the trial (findings of efficacy or toxicity) and 10% (93/905) did not specify a reason. Overall, data for a primary outcome measure were available on ClinicalTrials.gov and in the published literature for 72% (648/905) and 22% (198/905) of trials, respectively. Primary outcome data were reported on the ClinicalTrials.gov results database and in the published literature more frequently (91% and 46%, respectively) when the decision to terminate was based on data from the trial. Conclusions Trials terminate for a variety of reasons, not all of which reflect failures in the process or an inability to achieve the intended goals. Primary outcome data were reported most often when termination was based on data from the trial. Further research is needed to identify best practices for disseminating the experience and data resulting from terminated trials in order to help ensure maximal societal benefit from the investments of trial participants and others involved with the study. PMID:26011295
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference in distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefiting from better preservation of neighborhood structure, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because they use decimal arithmetic operations instead of bit operations, Manhattan hashing is more time-consuming, which significantly decreases overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. The final experiments show that, with a reasonable growth in memory space, our FBC is on average more than 80% faster without any loss of search accuracy compared to state-of-the-art Manhattan hashing methods.
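The paper's exact FBC construction is not reproduced here, but the general trick can be sketched with unary (thermometer) codes: the Manhattan distance between quantization levels equals the Hamming distance between codes, so it can be computed with XOR and popcount. The number of levels per dimension is an assumed parameter.

```python
LEVELS = 8  # quantization levels per dimension, 0..7 (assumption)

def thermometer(level: int) -> int:
    """Unary code: 'level' low-order ones, e.g. 3 -> 0b0000111."""
    return (1 << level) - 1

def encode(point):
    """Concatenate per-dimension unary codes (LEVELS - 1 bits each)."""
    code = 0
    for level in point:
        code = (code << (LEVELS - 1)) | thermometer(level)
    return code

def manhattan_via_xor(code_a: int, code_b: int) -> int:
    """Manhattan distance as popcount of XOR (int.bit_count: Python 3.10+)."""
    return (code_a ^ code_b).bit_count()

a, b = encode([3, 5, 0]), encode([1, 5, 4])
assert manhattan_via_xor(a, b) == abs(3 - 1) + abs(5 - 5) + abs(0 - 4)
```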
A simple orbit-attitude coupled modelling method for large solar power satellites
NASA Astrophysics Data System (ADS)
Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi
2018-04-01
A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on the natural coordinate formulation. The generalized coordinates are composed of the Cartesian coordinates of two points and the Cartesian components of two unit vectors instead of Euler angles and angular velocities, which is the reason for the method's simplicity. First, in order to extend the natural coordinate formulation to take the gravitational force and gravity gradient torque on a rigid body into account, a Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to the dynamic modelling of other rigid multibody aerospace systems.
Stability-Derivative Determination from Flight Data
NASA Technical Reports Server (NTRS)
Holowicz, Chester H.; Holleman, Euclid C.
1958-01-01
A comprehensive discussion of the various factors affecting the determination of stability and control derivatives from flight data is presented, based on the experience of the NASA High-Speed Flight Station. Factors relating to test techniques, determination of mass characteristics, instrumentation, and methods of analysis are discussed. For most longitudinal-stability-derivative analyses, simple equations utilizing period and damping have been found to be as satisfactory as more comprehensive methods. The graphical time-vector method has been the basis of lateral-derivative analysis, although simple approximate methods can be useful if applied with caution. Control effectiveness has generally been obtained by relating the peak acceleration to the rapid control input, and consideration must be given to aerodynamic contributions if reasonable accuracy is to be realized. Because of the many factors involved in the determination of stability derivatives, it is believed that the primary stability and control derivatives are probably accurate to within 10 to 25 percent, depending upon the specific derivative. Static-stability derivatives at low angle of attack show the greatest accuracy.
Pos, Edwin; Guevara Andino, Juan Ernesto; Sabatier, Daniel; Molino, Jean-François; Pitman, Nigel; Mogollón, Hugo; Neill, David; Cerón, Carlos; Rivas-Torres, Gonzalo; Di Fiore, Anthony; Thomas, Raquel; Tirado, Milton; Young, Kenneth R; Wang, Ophelia; Sierra, Rodrigo; García-Villacorta, Roosevelt; Zagt, Roderick; Palacios Cuenca, Walter; Aulestia, Milton; Ter Steege, Hans
2017-06-01
With many sophisticated methods available for estimating migration, ecologists face the difficult decision of choosing one for their specific line of work. Here we test and compare several methods, performing sanity and robustness tests, applying them to large-scale data and discussing the results and their interpretation. Five methods were selected and compared for their ability to estimate migration from spatially implicit and semi-explicit simulations based on three large-scale field datasets from South America (Guyana, Suriname, French Guiana and Ecuador). Space was incorporated semi-explicitly by a discrete probability mass function for local recruitment, migration from adjacent plots or migration from a metacommunity. Most methods were able to accurately estimate migration from spatially implicit simulations. For spatially semi-explicit simulations, the estimate was shown to be the additive effect of migration from adjacent plots and from the metacommunity; it was accurate only when migration from the metacommunity outweighed that from adjacent plots, and discriminating between the two proved impossible. We show that migration should be considered more an approximation of the resemblance between communities and the summed regional species pool. Applying the migration estimates to simulate the field datasets did show reasonably good fits and indicated consistent differences between datasets in comparison with earlier studies. We conclude that estimates of migration using these methods are more an approximation of the homogenization among local communities over time than a direct measurement of migration, and hence have a direct relationship with beta diversity. As beta diversity is the result of many (non-)neutral processes, we have to admit that migration as estimated in a spatially explicit world encompasses not only direct migration but is an ecological aggregate of these processes. The parameter m of neutral models then appears more as an emerging property revealed by neutral theory than as an effective mechanistic parameter, and spatially implicit models should be rejected as an approximation of forest dynamics.
Longmire-Avital, Buffie; Oberle, Virginia
2016-01-01
Condoms are considered a highly effective form of sexually transmitted infection prevention for heterosexual sex. Black American women (BAW) have been and are at elevated risk for heterosexual exposure to human immunodeficiency virus (HIV) because they have been and continue to be less likely to negotiate condom use with a partner that supports them financially. However, BAW who have made tremendous educational gains may still encounter challenges regarding the distribution of power that can affect condom use and negotiation. The purpose of this exploratory study was to examine the reasons that highly educated, emerging, adult BAW reported for using condoms. One hundred twenty-seven emerging adult BAW (ages 18-29 years) completed a mixed-methods online survey during the spring of 2013 (January-May). Approximately 80% of the women were in college or college graduates. They had a high rate of previous HIV testing (68.5%). Through the use of an interpretive paradigm and grounded theory, three themes emerged regarding the reasons that the participants in this sample used condoms as their primary form of protection: (1) the reliable "standard," (2) pregnancy prevention, and (3) cost effective and "easily accessible." Findings are discussed in terms of their public health significance for this seemingly lower-risk population.
Nozzle Free Jet Flows Within the Strong Curved Shock Regime
NASA Technical Reports Server (NTRS)
Shih, Tso-Shin
1975-01-01
A study based on inviscid analysis was conducted to examine the flow field produced from a convergent-divergent nozzle when a strong curved shock occurs. It was found that a certain constraint is imposed on the flow solution, a unique feature of this flow regime, which explains why the inverse method of calculation cannot be employed for these problems. An approximate method was developed to calculate the flow field, and results were obtained for two-dimensional flows. Analysis and calculations were also performed for flows with axial symmetry. It is shown that under certain conditions the vorticity generated at the jet boundary may become infinite and the viscous effect becomes important. Under other conditions, the asymptotic free jet height as well as the corresponding shock geometry were determined.
Longitudinal studies of botulinum toxin in cervical dystonia: Why do patients discontinue therapy?
Jinnah, H A; Comella, Cynthia L; Perlmutter, Joel; Lungu, Codrin; Hallett, Mark
2018-06-01
Numerous studies have established botulinum toxin (BoNT) to be safe and effective for the treatment of cervical dystonia (CD). Despite its well-documented efficacy, there has been growing awareness that a significant proportion of CD patients discontinue therapy. The reasons for discontinuation are only partly understood. This summary describes longitudinal studies that provided information regarding the proportions of patients discontinuing BoNT therapy, and the reasons for discontinuing therapy. The data come predominantly from un-blinded long-term follow-up studies, registry studies, and patient-based surveys. All types of longitudinal studies provide strong evidence that BoNT is both safe and effective in the treatment of CD for many years. Overall, approximately one third of CD patients discontinue BoNT. The most common reason for discontinuing therapy is lack of benefit, often described as primary or secondary non-response. The apparent lack of response is only rarely related to true immune-mediated resistance to BoNT. Other reasons for discontinuing include side effects, inconvenience, cost, or other reasons. Although BoNT is safe and effective in the treatment of the majority of patients with CD, approximately one third discontinue. The increasing awareness of a significant proportion of patients who discontinue should encourage further efforts to optimize administration of BoNT, to improve BoNT preparations to extend duration or reduce side effects, to develop add-on therapies that may mitigate swings in symptom severity, or develop entirely novel treatment approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharada, Shaama Mallikarjun; Bell, Alexis T., E-mail: mhg@bastille.cchem.berkeley.edu, E-mail: bell@cchem.berkeley.edu; Head-Gordon, Martin, E-mail: mhg@bastille.cchem.berkeley.edu, E-mail: bell@cchem.berkeley.edu
2014-04-28
The cost of calculating nuclear hessians, either analytically or by finite difference methods, during the course of quantum chemical analyses can be prohibitive for systems containing hundreds of atoms. In many applications, though, only a few eigenvalues and eigenvectors, and not the full hessian, are required. For instance, the lowest one or two eigenvalues of the full hessian are sufficient to characterize a stationary point as a minimum or a transition state (TS), respectively. We describe here a method that can eliminate the need for hessian calculations for both the characterization of stationary points as well as searches for saddle points. A finite differences implementation of the Davidson method that uses only first derivatives of the energy to calculate the lowest eigenvalues and eigenvectors of the hessian is discussed. This method can be implemented in conjunction with geometry optimization methods such as partitioned-rational function optimization (P-RFO) to characterize stationary points on the potential energy surface. With equal ease, it can be combined with interpolation methods that determine TS guess structures, such as the freezing string method, to generate approximate hessian matrices in lieu of full hessians as input to P-RFO for TS optimization. This approach is shown to achieve significant cost savings relative to exact hessian calculation when applied to both stationary point characterization as well as TS optimization. The basic reason is that the present approach scales one power of system size lower since the rate of convergence is approximately independent of the size of the system. Therefore, the finite-difference Davidson method is a viable alternative to full hessian calculation for stationary point characterization and TS search, particularly when analytical hessians are not available or require substantial computational effort.
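The core trick, hessian-vector products from finite differences of gradients, can be sketched briefly. Here a plain Lanczos iteration stands in for the Davidson solver of the paper, and the quadratic toy "energy" is an assumption for demonstration.

```python
import numpy as np

def hess_vec(grad, x, v, h=1e-4):
    """Central-difference approximation to H(x) @ v using only gradients."""
    return (grad(x + h * v) - grad(x - h * v)) / (2.0 * h)

def lowest_eigenvalue(grad, x, n, iters=30, seed=0):
    """Lowest hessian eigenvalue via Lanczos on gradient-only H @ v products."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    v_prev, b = np.zeros(n), 0.0
    alpha, beta = [], []
    for _ in range(min(iters, n)):
        w = hess_vec(grad, x, v) - b * v_prev
        a = v @ w
        w -= a * v
        alpha.append(a)
        b = np.linalg.norm(w)
        if b < 1e-10:          # Krylov space exhausted
            break
        beta.append(b)
        v_prev, v = v, w / b
    k = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return np.linalg.eigvalsh(T)[0]

# Toy quadratic "energy" whose hessian has eigenvalues 1, 2 and 3 (assumed)
H_true = np.diag([1.0, 2.0, 3.0])
grad = lambda x: H_true @ x
print(lowest_eigenvalue(grad, np.zeros(3), 3))   # approximately 1.0
```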
An approach for spherical harmonic analysis of non-smooth data
NASA Astrophysics Data System (ADS)
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.
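As a point of reference, the naive quadrature that such a method improves upon can be written directly; the non-smooth test field and grid sizes are arbitrary, and scipy's sph_harm (whose argument order puts azimuth before colatitude) supplies the basis functions. This is not the paper's recursion-based integration scheme.

```python
import numpy as np
from scipy.special import sph_harm

nlat, nlon = 90, 180
theta = (np.arange(nlat) + 0.5) * np.pi / nlat      # colatitude
phi = np.arange(nlon) * 2.0 * np.pi / nlon          # longitude
T, P = np.meshgrid(theta, phi, indexing="ij")

# Non-smooth test field: kink at the equator plus a longitudinal wave
f = np.abs(np.cos(T)) + 0.2 * np.cos(3 * P)

def coefficient(l, m):
    """Naive area-weighted quadrature for one spherical harmonic coefficient."""
    Ylm = sph_harm(m, l, P, T)                      # sph_harm(m, l, azimuth, colat)
    w = np.sin(T) * (np.pi / nlat) * (2.0 * np.pi / nlon)
    return np.sum(f * np.conj(Ylm) * w)

c20 = coefficient(2, 0)
```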
Application of two direct runoff prediction methods in Puerto Rico
Sepulveda, N.
1997-01-01
Two methods for predicting direct runoff from rainfall data were applied to several basins and the resulting hydrographs compared to measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method shows how the resulting hydraulic routing flow equation from a kinematic wave approximation is solved using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak although the much faster unit hydrograph method also yields reasonable results.
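The first method reduces, per event, to a discrete convolution of the excess-rainfall hyetograph with the unit hydrograph; the sketch below uses illustrative ordinates, not the geomorphology-based unit hydrograph of the paper.

```python
import numpy as np

# Unit-hydrograph ordinates (response to one unit of excess rainfall), assumed
unit_hydrograph = np.array([0.10, 0.30, 0.35, 0.15, 0.07, 0.03])

# Excess-rainfall hyetograph per time step, assumed [mm]
excess_rainfall = np.array([0.0, 5.0, 12.0, 4.0, 0.0])

# Direct runoff: discrete convolution; output length is len(P) + len(U) - 1
direct_runoff = np.convolve(excess_rainfall, unit_hydrograph)
```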
NASA Astrophysics Data System (ADS)
Hall, D. J.; Skottfelt, J.; Soman, M. R.; Bush, N.; Holland, A.
2017-12-01
Charge-Coupled Devices (CCDs) have been the detector of choice for imaging and spectroscopy in space missions for several decades, such as those being used for the Euclid VIS instrument and baselined for the SMILE SXI. Despite the many positive properties of CCDs, such as the high quantum efficiency and low noise, when used in a space environment the detectors suffer damage from the often-harsh radiation environment. High energy particles can create defects in the silicon lattice which act to trap the signal electrons being transferred through the device, reducing the signal measured and effectively increasing the noise. We can reduce the impact of radiation on the devices through four key methods: increased radiation shielding, device design considerations, optimisation of operating conditions, and image correction. Here, we concentrate on device design considerations, investigating the impact of narrowing the charge-transfer channel in the device with the aim of minimising the impact of traps during readout. Previous studies for the Euclid VIS instrument considered two devices, the e2v CCD204 and CCD273, the serial register of the former having a 50 μm channel and the latter having a 20 μm channel. The reduction in channel width was previously modelled to give an approximate 1.6× reduction in charge storage volume, verified experimentally to have a reduction in charge transfer inefficiency of 1.7×. The methods used to simulate the reduction approximated the charge cloud to a sharp-edged volume within which the probability of capture by traps was 100%. For high signals and slow readout speeds, this is a reasonable approximation. However, for low signals and higher readout speeds, the approximation falls short. Here we discuss a new method of simulating and calculating charge storage variations with device design changes, considering the absolute probability of capture across the pixel, bringing validity to all signal sizes and readout speeds. Using this method, we can optimise the device design to suffer minimal impact from radiation damage effects, here using detector development for the SMILE mission to demonstrate the process.
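A back-of-envelope sketch of the direction this points in: replacing a sharp-edged, always-capturing charge cloud with a capture probability that follows the local electron density through a Shockley-Read-Hall capture time constant. All values are illustrative assumptions, not SMILE or Euclid device parameters, and the real model integrates over the full pixel geometry.

```python
import numpy as np

sigma_c = 5e-20    # trap capture cross-section [m^2] (assumed)
v_th = 1.2e5       # electron thermal velocity [m/s] (assumed)
t_dwell = 1e-6     # time charge spends over the trap per transfer [s] (assumed)

def capture_probability(n_e):
    """P(capture) = 1 - exp(-t / tau_c), with tau_c = 1 / (sigma * v_th * n_e)."""
    tau_c = 1.0 / (sigma_c * v_th * n_e)
    return 1.0 - np.exp(-t_dwell / tau_c)

# Electron density falling off towards the edge of the charge cloud (assumed)
n_e = 1e21 * np.exp(-np.linspace(0.0, 3.0, 50) ** 2)   # [m^-3]
p = capture_probability(n_e)   # smooth capture-probability profile
```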
26 CFR 1.412(c)(3)-1 - Reasonable funding methods.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 5 2010-04-01 2010-04-01 false Reasonable funding methods. 1.412(c)(3)-1... Reasonable funding methods. (a) Introduction—(1) In general. This section prescribes rules for determining whether or not, in the case of an ongoing plan, a funding method is reasonable for purposes of section 412...
26 CFR 1.412(c)(3)-1 - Reasonable funding methods.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Reasonable funding methods. 1.412(c)(3)-1 Section...(c)(3)-1 Reasonable funding methods. (a) Introduction—(1) In general. This section prescribes rules for determining whether or not, in the case of an ongoing plan, a funding method is reasonable for...
26 CFR 1.412(c)(3)-1 - Reasonable funding methods.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Reasonable funding methods. 1.412(c)(3)-1...(c)(3)-1 Reasonable funding methods. (a) Introduction—(1) In general. This section prescribes rules for determining whether or not, in the case of an ongoing plan, a funding method is reasonable for...
26 CFR 1.412(c)(3)-1 - Reasonable funding methods.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 5 2013-04-01 2013-04-01 false Reasonable funding methods. 1.412(c)(3)-1...(c)(3)-1 Reasonable funding methods. (a) Introduction—(1) In general. This section prescribes rules for determining whether or not, in the case of an ongoing plan, a funding method is reasonable for...
Parrish, Randall R; Horstwood, Matthew; Arnason, John G; Chenery, Simon; Brewer, Tim; Lloyd, Nicholas S; Carpenter, David O
2008-02-01
Inhaled depleted uranium (DU) aerosols are recognised as a distinct human health hazard and DU has been suggested to be responsible in part for illness in both military and civilian populations that may have been exposed. This study aimed to develop and use a testing procedure capable of detecting an individual's historic milligram-quantity aerosol exposure to DU up to 20 years after the event. This method was applied to individuals associated with or living proximal to a DU munitions plant in Colonie New York that were likely to have had a significant DU aerosol inhalation exposure, in order to improve DU-exposure screening reliability and gain insight into the residence time of DU in humans. We show using sensitive mass spectrometric techniques that when exposure to aerosol has been unambiguous and in sufficient quantity, urinary excretion of DU can be detected more than 20 years after primary DU inhalation contamination ceased, even when DU constitutes only approximately 1% of the total excreted uranium. It seems reasonable to conclude that a chronically DU-exposed population exists within the contamination 'footprint' of the munitions plant in Colonie, New York. The method allows even a modest DU exposure to be identified where other less sensitive methods would have failed entirely. This should allow better assessment of historical exposure incidence than currently exists.
NASA Astrophysics Data System (ADS)
Canhanga, Betuel; Ni, Ying; Rančić, Milica; Malyarenko, Anatoliy; Silvestrov, Sergei
2017-01-01
After Black and Scholes proposed a model for pricing European options in 1973, Cox, Ross and Rubinstein in 1979 and Heston in 1993 showed that the constant-volatility assumption made by Black-Scholes was one of the main reasons the model is unable to capture some market details. Instead of constant volatilities, they introduced stochastic volatilities into the asset dynamics. In 2009, Christoffersen empirically showed "why multifactor stochastic volatility models work so well". Four years later, Chiarella and Ziveyi solved the model proposed by Christoffersen. They considered an underlying asset whose price is governed by two mean-reverting stochastic volatility factors. Applying Fourier transforms, Laplace transforms and the method of characteristics, they presented a semi-analytical formula to compute an approximate price for American options. The heavy computation involved in the Chiarella and Ziveyi approach motivated the authors of this paper in 2014 to investigate another methodology for computing European option prices in a Christoffersen-type model. Using the first- and second-order asymptotic expansion method, we presented a closed-form solution for European options and provided experimental and numerical studies of the accuracy of the approximation formulae given by the first-order asymptotic expansion. In the present paper we perform experimental and numerical studies for the second-order asymptotic expansion and compare the obtained results with the results presented by Chiarella and Ziveyi.
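For context, the leading term that such asymptotic expansions correct is a constant-volatility Black-Scholes price; a sketch of that zeroth-order European call is given below with arbitrary example parameters. The second-order correction terms of the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Zeroth-order (constant-volatility) European call price."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Example parameters (arbitrary): spot 100, strike 95, half a year to expiry
price = black_scholes_call(S=100.0, K=95.0, T=0.5, r=0.02, sigma=0.25)
```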
Softcopy quality ruler method: implementation and validation
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying
2009-01-01
A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by virtue of creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30in Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference (JND) by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side-by-side with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with the differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.
Truth-Valued-Flow Inference (TVFI) and its applications in approximate reasoning
NASA Technical Reports Server (NTRS)
Wang, Pei-Zhuang; Zhang, Hongmin; Xu, Wei
1993-01-01
The framework of the theory of Truth-Valued-Flow Inference (TVFI) is introduced. Although dozens of papers have been presented on fuzzy reasoning, we believe there is still a need for a unified fuzzy reasoning theory with the following two features: (1) it is simple enough to be executed feasibly and easily; and (2) it is sufficiently well-structured and consistent that it can be built into a strict mathematical theory, consistent with the theory proposed by L.A. Zadeh. TVFI is one of the fuzzy reasoning theories that satisfies these two features. It presents inference in the form of networks and naturally views inference as a process of truth values flowing among propositions.
Heat and mass transfer in flames
NASA Technical Reports Server (NTRS)
Faeth, G. M.
1986-01-01
Heat- and mass-transfer processes in turbulent diffusion flames are discussed, considering turbulent mixing and the structure of single-phase flames, drop processes in spray flames, and nonluminous and luminous flame radiation. Interactions between turbulence and other phenomena are emphasized, concentrating on past work of the author and his associates. The conserved-scalar formalism, along with the laminar-flamelet approximation, is shown to provide reasonable estimates of the structure of gas flames, with modest levels of empiricism. Extending this approach to spray flames has highlighted the importance of drop/turbulence interactions; e.g., turbulent dispersion of drops, modification of turbulence by drops, etc. Stochastic methods being developed to treat these phenomena are yielding encouraging results.
Origin of spin reorientation transitions in antiferromagnetic MnPt-based alloys
NASA Astrophysics Data System (ADS)
Chang, P.-H.; Zhuravlev, I. A.; Belashchenko, K. D.
2018-04-01
Antiferromagnetic MnPt exhibits a spin reorientation transition (SRT) as a function of temperature, and off-stoichiometric Mn-Pt alloys also display SRTs as a function of concentration. The magnetocrystalline anisotropy in these alloys is studied using first-principles calculations based on the coherent potential approximation and the disordered local moment method. The anisotropy is fairly small and sensitive to the variations in composition and temperature due to the cancellation of large contributions from different parts of the Brillouin zone. Concentration and temperature-driven SRTs are found in reasonable agreement with experimental data. Contributions from specific band-structure features are identified and used to explain the origin of the SRTs.
Homicide-suicide in Victoria, Australia.
Milroy, C M; Dratsas, M; Ranson, D L
1997-12-01
Thirty-nine incidents of homicide-suicide occurring in Victoria, Australia between 1985 and 1989 were examined. In 33 cases the assailants were men. The victims were spouses or women living in a de facto marriage. The majority of the victims were shot, and this was also the most frequent method of suicide. Breakdown in a relationship was the most frequent reason for killing. Mental illness of the assailant accounted for the killing in approximately 20% of cases. Physical ill health and financial stress were identified as important associative factors, particularly in the elderly. The pattern of homicide-suicide in Victoria is similar to that observed in other jurisdictions and represents an important and distinct subgroup of homicide.
Digital processing of satellite imagery application to jungle areas of Peru
NASA Technical Reports Server (NTRS)
Pomalaza, J. C. (Principal Investigator); Pomalaza, C. A.; Espinoza, J.
1976-01-01
The author has identified the following significant results. The use of clustering methods permits the development of relatively fast classification algorithms that could be implemented in an inexpensive computer system with a limited amount of memory. Analysis of CCTs using these techniques can provide a great deal of detail, permitting use of the maximum resolution of LANDSAT imagery. Cases were detected in which classification techniques based on a Gaussian approximation to the distribution functions could be used to advantage. For jungle areas, channels 5 and 7 can provide enough information to delineate drainage patterns, swamp and wet areas, and to make a reasonably broad classification of forest types.
A machine independent expert system for diagnosing environmentally induced spacecraft anomalies
NASA Technical Reports Server (NTRS)
Rolincik, Mark J.
1991-01-01
A new rule-based, machine-independent analytical tool for diagnosing spacecraft anomalies, the EnviroNET expert system, was developed. Expert systems provide an effective method for storing knowledge, allow computers to sift through large amounts of data to pinpoint significant parts, and, most importantly, use heuristics in addition to algorithms, which permits approximate reasoning and inference and the ability to attack problems that are not rigidly defined. The EnviroNET expert system knowledge base currently contains over two hundred rules and links to databases that include past environmental data, satellite data, and previously known anomalies. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose.
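As a rough illustration of how heuristic diagnostic rules of this kind can be encoded, the sketch below implements two hypothetical rules in Python; the thresholds, telemetry fields, and rule content are invented for illustration and are not taken from EnviroNET.

```python
# Minimal forward-chaining sketch of a rule-based anomaly diagnoser.
# All rules and thresholds are hypothetical, not EnviroNET content.

def diagnose(telemetry):
    """Return candidate environmental causes for an observed anomaly."""
    causes = []
    # Heuristic rule: high electron flux while in eclipse suggests
    # bulk (internal) charging.
    if telemetry["electron_flux"] > 1e5 and telemetry["in_eclipse"]:
        causes.append("bulk charging")
    # Heuristic rule: an isolated bit flip during a radiation-belt pass
    # suggests a single event upset (SEU).
    if telemetry["bit_flips"] == 1 and telemetry["in_radiation_belt"]:
        causes.append("single event upset (SEU)")
    return causes or ["no environmental cause matched"]

print(diagnose({"electron_flux": 3e5, "in_eclipse": True,
                "bit_flips": 0, "in_radiation_belt": False}))
```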
Observations and analysis of self-similar branching topology in glacier networks
Bahr, D.B.; Peckham, S.D.
1996-01-01
Glaciers, like rivers, have a branching structure which can be characterized by topological trees or networks. Probability distributions of various topological quantities in the networks are shown to satisfy the criterion for self-similarity, a symmetry structure which might be used to simplify future models of glacier dynamics. Two analytical methods of describing river networks, Shreve's random topology model and deterministic self-similar trees, are applied to the six glaciers of south central Alaska studied in this analysis. Self-similar trees capture the topological behavior observed for all of the glaciers, and most of the networks are also reasonably approximated by Shreve's theory. Copyright 1996 by the American Geophysical Union.
Experimental validation of a quasi-steady theory for the flow through the glottis
NASA Astrophysics Data System (ADS)
Vilain, C. E.; Pelorson, X.; Fraysse, C.; Deverge, M.; Hirschberg, A.; Willems, J.
2004-09-01
In this paper a theoretical description of the flow through the glottis based on a quasi-steady boundary layer theory is presented. The Thwaites method is used to solve the von Kármán equations within the boundary layers. In practice this makes the theory much easier to use compared to Pohlhausen's polynomial approximations. This theoretical description is evaluated on the basis of systematic comparison with experimental data obtained under steady flow or unsteady (oscillating) flow without and with moving vocal folds. Results tend to show that the theory reasonably explains the measured data except when unsteady or viscous terms become predominant. This happens particularly during the collision of the vocal folds.
Second-order Born calculation of coplanar symmetric (e, 2e) process on Mg
NASA Astrophysics Data System (ADS)
Zhang, Yong-Zhi; Wang, Yang; Zhou, Ya-Jun
2014-06-01
The second-order distorted wave Born approximation (DWBA) method is employed to investigate the triple differential cross sections (TDCS) of coplanar doubly symmetric (e, 2e) collisions for magnesium at excess energies of 6 eV-20 eV. Compared with the standard first-order DWBA calculations, the inclusion of the second-order Born term in the scattering amplitude improves the degree of agreement with experiments, especially in the backward-scattering region of the TDCS. This indicates that the present second-order Born term is capable of giving a reasonable correction to the DWBA model in studying coplanar symmetric (e, 2e) problems of two-valence-electron targets in the low-energy range.
Gas Evolution Dynamics in Godunov-Type Schemes and Analysis of Numerical Shock Instability
NASA Technical Reports Server (NTRS)
Xu, Kun
1999-01-01
In this paper we study the gas evolution dynamics of the exact and approximate Riemann solvers, e.g., the Flux Vector Splitting (FVS) and the Flux Difference Splitting (FDS) schemes. Since the FVS scheme and the Kinetic Flux Vector Splitting (KFVS) scheme have the same physical mechanism and similar flux functions, the weaknesses and advantages of the FVS scheme are closely examined based on the analysis of the discretized KFVS scheme. The subtle dissipative mechanism of the Godunov method in the 2D case is also analyzed, and the physical reason for shock instability, i.e., carbuncle phenomena and odd-even decoupling, is presented.
NASA Astrophysics Data System (ADS)
Orlova, A. G.; Kirillin, M. Yu.; Volovetsky, A. B.; Shilyagina, N. Yu.; Sergeeva, E. A.; Golubiatnikov, G. Yu.; Turchin, I. V.
2017-07-01
Using diffuse optical spectroscopy, the oxygenation level and hemoglobin concentration of an experimental tumor were studied in comparison with normal muscle tissue of mice. Subcutaneously growing SKBR-3 was used as a tumor model. A continuous-wave fiber-probe diffuse optical spectroscopy system was employed. The optical-properties extraction approach was based on the diffusion approximation. A decreased blood oxygen saturation level and an increased total hemoglobin content were demonstrated in the neoplasm. The main reason for these differences between tumor and normal tissue was a significant elevation of deoxyhemoglobin concentration in SKBR-3. The method can be useful for the diagnosis of tumors as well as for the study of blood flow parameters of tumor models with different angiogenic properties.
Vaping Topography and Reasons of Use among Adults in Klang Valley, Malaysia
Abidin, Najihah Zainol; Abidin, Emilia Zainal; Zulkifli, Aziemah; Ismail, Sharifah Norkhadijah Syed; Karuppiah, Karmegam; Nordin, Amer Siddiq Amer; Musbah, Zuraidah; Zulkipli, Nur Fadhilah; Praveena, Sarva Mangala; Rasdi, Irniza; Rahman, Anita Abd
2018-01-01
Background: Consistency and accuracy of results in assessing health risks due to vaping or e-cigarette use are difficult to achieve without established consumption data. The present report covers baseline data on vaping topography and reasons for use among local users in Klang Valley, Malaysia. Methods: An 80-item survey covering socio-demographic characteristics, smoking topography and reasons for e-cigarette use was employed to assess e-cigarette users recruited from several public universities and private organisations. The survey questionnaire was self-administered. Data were analysed using statistical software. Results: Eighty-six current e-cigarette users participated, more than half (51.2%) of them aged ≥ 25 years. Significant proportions of the sample were single (51.2%), had a tertiary education level (63.5%) and a household income of less than USD1000 per month (65.2%). Median duration of e-cigarette use was less than a year; users drew approximately 50 puffs per day and refilled twice a day. The majority (74%) used e-liquids containing nicotine at a concentration of 6 μg/mL. Daily users spent USD18-23 per month. Reasons for using the e-cigarette included enjoyment of the products (85.9%), perception of lower toxicity than tobacco (87%), and the fact that it was a cheaper smoking alternative (61%). Conclusion: The data on e-cigarette smoking topography obtained in this study are novel. The main reasons for use were enjoyment of e-cigarettes, preparation for quitting smoking, the perception of e-cigarettes as a less toxic and healthier smoking substitute, and their lower long-run cost. The results establish basic knowledge of local vaping topography and reference material for future e-cigarette-related research. PMID:29480664
New active substances authorized in the United Kingdom between 1972 and 1994
Jefferys, David B; Leakey, Diane; Lewis, John A; Payne, Sandra; Rawlins, Michael D
1998-01-01
Aims: The study was undertaken to assemble a list of all new active medicinal substances authorised in the United Kingdom between 1972 and 1994; to assess whether the pattern of introductions had changed; and to examine withdrawal rates and the reasons for withdrawal. Methods: The identities of those new active substances whose manufacturers had obtained Product Licences between 1972 and 1994 were sought from the Medicines Control Agency's product database. For each substance relevant information was retrieved, including the year of granting the Product Licence, its therapeutic class, whether currently authorised (and, if not, the reason for withdrawal), and its nature (chemical, biological, etc.). Results: The Medicines Control Agency's database was cross-checked against two other databases for completeness. A total of 583 new active substances (in 579 products) were found to have been authorised over the study period. The annual rates of authorisation varied widely (9 to 40 per year). Whilst there was no evidence for any overall change in the annual rates of authorising new chemical entities, there has been a trend for increasing numbers of new products of biological origin to be authorised in recent years. Fifty-nine of the 583 new active substances have been withdrawn (1 each for quality and efficacy, 22 for safety, and 35 for commercial reasons). Conclusions: For reasons that are unclear there is marked heterogeneity in the annual rates of authorisation of new active substances. Their 10 year survival is approximately 88%, with withdrawals being, predominantly, for commercial or safety reasons. This confirms the provisional nature of assessments about safety at the time when a new active substance is introduced into routine clinical practice, and emphasises the importance of pharmacovigilance. PMID:9491828
29 CFR 778.217 - Reimbursement for expenses.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (2) The actual or reasonably approximate amount expended by an employee in purchasing, laundering or... expenses, such as taxicab fares, incurred while traveling on the employer's business. (4) “Supper money”, a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch
2016-07-21
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea on important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to make a decision if it is worthwhile or not to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore it is demonstrated that the here presented method can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.
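One simple, widely used rank-based surrogate from multicriteria decision analysis is the rank-order centroid sketched below; this is an illustrative stand-in, since the article's own approximation is derived from the maximum-entropy principle and may differ in detail.

```python
# Rank-order centroid (ROC) weights: one common way to turn a pure
# ranking of n events into surrogate probabilities.  Illustrative
# stand-in; the article's maximum-entropy algorithm may differ.
def roc_weights(n):
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

# Example: four threats ranked from most to least likely.
print(roc_weights(4))   # [0.5208..., 0.2708..., 0.1458..., 0.0625]
```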
Approximate reasoning-based learning and control for proximity operations and docking in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Jani, Yashvant; Lea, Robert N.
1991-01-01
A recently proposed hybrid neural-network and fuzzy-logic-control architecture is applied to a fuzzy logic controller developed for attitude control of the Space Shuttle. A model using reinforcement learning and learning from past experience for fine-tuning its knowledge base is proposed. The two main components of this approximate reasoning-based intelligent control (ARIC) model - an action-state evaluation network and an action selection network - are described, as well as the Space Shuttle attitude controller. An ARIC model for the controller is presented, and it is noted that the input layer in each network includes three nodes representing the angle error, the angle error rate, and a bias. Preliminary results indicate that the controller can hold the pitch rate within its desired deadband and starts to use the jets at about 500 sec in the run.
Cadeddu, Maria P.; Marchand, Roger; Orlandi, Emiliano; ...
2017-08-11
Satellite and ground-based microwave radiometers are routinely used for the retrieval of liquid water path (LWP) under all atmospheric conditions. The retrieval of water vapor and LWP from ground-based radiometers during rain has proved to be a difficult challenge for two principal reasons: the inadequacy of the nonscattering approximation in precipitating clouds and the deposition of rain drops on the instrument's radome. In this paper, we combine model computations and real ground-based, zenith-viewing passive microwave radiometer brightness temperature measurements to investigate how total, cloud, and rain LWP retrievals are affected by assumptions on the cloud drop size distribution (DSD) and under which conditions a nonscattering approximation can be considered reasonably accurate. Results show that until the drop effective diameter exceeds approximately 200 μm, a nonscattering approximation yields results that are still accurate at frequencies less than 90 GHz. For larger drop sizes, it is shown that higher microwave frequencies contain useful information that can be used to separate cloud and rain LWP provided that the vertical distribution of hydrometeors, as well as the DSD, is reasonably known. The choice of the DSD parameters becomes important to ensure retrievals that are consistent with the measurements. A physical retrieval is tested on a synthetic data set and is then used to retrieve total, cloud, and rain LWP from radiometric measurements during two drizzling cases at the Atmospheric Radiation Measurement Eastern North Atlantic site.
Code of Federal Regulations, 2010 CFR
2010-04-01
... to reasonable funding methods. 1.412(c)(3)-2 Section 1.412(c)(3)-2 Internal Revenue INTERNAL REVENUE... reasonable funding methods. (a) Introduction. This section prescribes effective dates for rules relating to reasonable funding methods, under section 412(c)(3) and § 1.412(c)(3)-1. Also, this section sets forth rules...
Code of Federal Regulations, 2013 CFR
2013-04-01
... to reasonable funding methods. 1.412(c)(3)-2 Section 1.412(c)(3)-2 Internal Revenue INTERNAL REVENUE... to reasonable funding methods. (a) Introduction. This section prescribes effective dates for rules relating to reasonable funding methods, under section 412(c)(3) and § 1.412(c)(3)-1. Also, this section...
Code of Federal Regulations, 2011 CFR
2011-04-01
... to reasonable funding methods. 1.412(c)(3)-2 Section 1.412(c)(3)-2 Internal Revenue INTERNAL REVENUE... to reasonable funding methods. (a) Introduction. This section prescribes effective dates for rules relating to reasonable funding methods, under section 412(c)(3) and § 1.412(c)(3)-1. Also, this section...
Code of Federal Regulations, 2014 CFR
2014-04-01
... to reasonable funding methods. 1.412(c)(3)-2 Section 1.412(c)(3)-2 Internal Revenue INTERNAL REVENUE... to reasonable funding methods. (a) Introduction. This section prescribes effective dates for rules relating to reasonable funding methods, under section 412(c)(3) and § 1.412(c)(3)-1. Also, this section...
Code of Federal Regulations, 2012 CFR
2012-04-01
... to reasonable funding methods. 1.412(c)(3)-2 Section 1.412(c)(3)-2 Internal Revenue INTERNAL REVENUE... to reasonable funding methods. (a) Introduction. This section prescribes effective dates for rules relating to reasonable funding methods, under section 412(c)(3) and § 1.412(c)(3)-1. Also, this section...
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-07-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Stochastic reconstructions of spectral functions: Application to lattice QCD
NASA Astrophysics Data System (ADS)
Ding, H.-T.; Kaczmarek, O.; Mukherjee, Swagato; Ohno, H.; Shu, H.-T.
2018-05-01
We present a detailed study of the applications of two stochastic approaches, the stochastic optimization method (SOM) and stochastic analytical inference (SAI), to extract spectral functions from Euclidean correlation functions. SOM has the advantage that it does not require prior information. On the other hand, SAI is a more generalized method based on Bayesian inference. Under a mean-field approximation SAI reduces to the often-used maximum entropy method (MEM), and for a specific choice of the prior SAI becomes equivalent to SOM. To test the applicability of these two stochastic methods to lattice QCD, we first apply them to various reasonably chosen model correlation functions and present detailed comparisons of the reconstructed spectral functions obtained from SOM, SAI and MEM. Next, we present similar studies for charmonia correlation functions obtained from lattice QCD computations using clover-improved Wilson fermions on large, fine, isotropic lattices at 0.75 and 1.5 Tc, Tc being the deconfinement transition temperature of a pure gluon plasma. We find that SAI and SOM give results consistent with MEM at these two temperatures.
Mocz, G.
1995-01-01
Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues, with an approximately 60% overall success rate for the correctly recognized ones in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882
1994-09-30
An equation due to Kadomtsev & Petviashvili (1970), $\partial_x(\partial_t u + 6u\,\partial_x u + \partial_x^3 u) + 3\,\partial_y^2 u = 0$ (KP), is known to describe approximately the evolution of ... to be stable to perturbations, and their amplitudes need not be small. The Kadomtsev-Petviashvili (KP) equation is known to describe approximately the ... predicted with reasonable accuracy by a family of exact solutions of an equation due to Kadomtsev and Petviashvili (1970): $(f_t + 6 f f_x + f_{xxx})_x + 3 f_{yy} = 0$.
Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites
2010-01-01
... and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the ... approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points ... provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry.
A Bayesian approach to modeling 2D gravity data using polygon states
NASA Astrophysics Data System (ADS)
Titus, W. J.; Titus, S.; Davis, J. R.
2015-12-01
We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but that it has no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important. In particular, large containers should be avoided, because the more closely a container confines the object, the better the predictions match properties of object.
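For readers unfamiliar with the machinery, the fragment below sketches the Metropolis accept/reject step at the heart of such an MCMC inversion; the one-parameter linear forward model g(rho) = k*rho is a hypothetical stand-in for the paper's polygon-based gravity forward model.

```python
import math, random

# Minimal Metropolis sketch of the accept/reject step used in an MCMC
# inversion.  The toy forward model g(rho) = k * rho, with one unknown
# density contrast rho, is a hypothetical stand-in for a polygon model.
def metropolis(data, k=1.0, sigma=0.1, n_steps=5000, step=0.05):
    rho, samples = 0.0, []
    def log_post(r):
        # Gaussian likelihood of the observed field values given rho.
        return -sum((d - k * r) ** 2 for d in data) / (2 * sigma ** 2)
    for _ in range(n_steps):
        prop = rho + random.gauss(0.0, step)
        if random.random() < math.exp(min(0.0, log_post(prop) - log_post(rho))):
            rho = prop
        samples.append(rho)
    return samples

chain = metropolis([0.48, 0.52, 0.50])
print(sum(chain[1000:]) / len(chain[1000:]))   # posterior mean, ~0.5
```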
Fuzziness In Approximate And Common-Sense Reasoning In Knowledge-Based Robotics Systems
NASA Astrophysics Data System (ADS)
Dodds, David R.
1987-10-01
Fuzzy functions, a major key to inexact reasoning, are described as they are applied to the fuzzification of robot co-ordinate systems. Linguistic-variables, a means of labelling ranges in fuzzy sets, are used as computationally pragmatic means of representing spatialization metaphors, themselves an extraordinarily rich basis for understanding concepts in orientational terms. Complex plans may be abstracted and simplified in a system which promotes conceptual planning by means of the orientational representation.
Guzman Castillo, Maria; Gillespie, Duncan O. S.; Allen, Kirk; Bandosz, Piotr; Schmid, Volker; Capewell, Simon; O’Flaherty, Martin
2014-01-01
Background: Coronary Heart Disease (CHD) remains a major cause of mortality in the United Kingdom. Yet predictions of future CHD mortality are potentially problematic due to population ageing and increases in obesity and diabetes. Here we explore future projections of CHD mortality in England & Wales under two contrasting future trend assumptions. Methods: In scenario A, we used the conventional counterfactual scenario that the last-observed CHD mortality rates from 2011 would persist unchanged to 2030. The future number of deaths was calculated by applying those rates to the 2012–2030 population estimates. In scenario B, we assumed that the recent falling trend in CHD mortality rates would continue. Using Lee-Carter and Bayesian Age Period Cohort (BAPC) models, we projected the linear trends up to 2030. We validated our methods by using past data to predict mortality from 2002–2011 and then computed the error between observed and projected values. Results: In scenario A, assuming that 2011 mortality rates stayed constant to 2030, the number of CHD deaths would increase 62%, or approximately 39,600 additional deaths. In scenario B, assuming recent declines continued, the BAPC model (the model with the lowest error) suggests the number of deaths will decrease by 56%, representing approximately 36,200 fewer deaths by 2030. Conclusions: The decline in CHD mortality has been reasonably continuous since 1979, and there is little reason to believe it will soon halt. The commonly used assumption that mortality will remain constant from 2011 therefore appears dubious. By contrast, using the BAPC model and assuming continuing mortality falls offers a more plausible prediction of future trends. Thus, despite population ageing, the number of CHD deaths might halve again between 2011 and 2030. This has implications for how the potential benefits of future cardiovascular strategies might best be calculated and presented. PMID:24918442
Buckling Of Shells Of Revolution /BOSOR/ with various wall constructions
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Bushnell, D.; Sobel, L. H.
1969-01-01
Computer program, using numerical integration and finite difference techniques, solves almost any buckling problem for shells exhibiting orthotropic behavior. Stability analyses can be performed with reasonable accuracy and without unduly restrictive approximations.
NASA Astrophysics Data System (ADS)
Gallup, G. A.; Gerratt, J.
1985-09-01
The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure includes also the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcements, such as a binary failure signal, are available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
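To make the fuzzy-inference step concrete, here is a minimal generic sketch of rule firing and defuzzification for a cart-pole-like task; the membership ranges, rule set, and weighted-average defuzzifier are illustrative simplifications, not GARIC's tuned knowledge base or its conjunction and LMOM operators.

```python
# Generic fuzzy-controller sketch: triangular memberships, rule firing
# strengths, weighted-average defuzzification.  All values hypothetical.
def tri(x, a, b, c):
    """Triangular membership rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def control(angle, angle_rate):
    # Each rule: (firing strength, recommended force).
    rules = [
        (min(tri(angle, -1, 0, 1), tri(angle_rate, -1, 0, 1)), 0.0),  # near upright: no action
        (tri(angle, 0, 1, 2), 5.0),                                   # leaning right: push right
        (tri(angle, -2, -1, 0), -5.0),                                # leaning left: push left
    ]
    num = sum(w * u for w, u in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(control(0.4, 0.1))   # small corrective push to the right
```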
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu
2014-08-21
A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made, especially for the highly branched molecules, to reduce drastically the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables can still be well accounted for, as evident from a nearly perfect acceptance rate achieved. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required to be generated for each bending angle (instead of thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time-dominating component of the simulation.
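A minimal sketch of the tabulated inverse-CDF idea follows; the density used here (a harmonic bending potential around 109.5 degrees weighted by the sin(theta) Jacobian) is an illustrative assumption, not the paper's force field.

```python
import bisect, math, random

# Draw bending-angle trials from a tabulated density via inverse-CDF
# lookup instead of uniform sampling.  The density below (harmonic
# bending term times the sin(theta) Jacobian) is illustrative only.
theta = [math.pi * i / 1000 for i in range(1, 1000)]                # angle grid
pdf = [math.sin(t) * math.exp(-50.0 * (t - 1.91) ** 2) for t in theta]

cdf, total = [], 0.0
for p in pdf:                    # cumulative table, built once a priori
    total += p
    cdf.append(total)
cdf = [c / total for c in cdf]

def draw_angle():
    """Return one bending-angle trial distributed per the table."""
    return theta[bisect.bisect_left(cdf, random.random())]

print(draw_angle())
```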
Decay of Far-Flowfield in Trailing Vortices
NASA Technical Reports Server (NTRS)
Baldwin, B. S.; Chigier, N. A.; Sheaffer, Y. S.
1973-01-01
Methods for reduction of velocities in trailing vortices of large aircraft are of current interest for the purpose of shortening the waiting time between landings at central airports. We have made finite-difference calculations of the flow in turbulent wake vortices as an aid to interpretation of wind-tunnel and flight experiments directed toward that end. Finite-difference solutions are capable of adding flexibility to such investigations if they are based on an adequate model of turbulence. Interesting developments have been taking place in the knowledge of turbulence that may lead to a complete theory in the future. In the meantime, approximate methods that yield reasonable agreement with experiment are appropriate. The simplified turbulence model we have selected contains features that account for the major effects disclosed by more sophisticated models in which the parameters are not yet established. Several puzzles are thereby resolved that arose in previous theoretical investigations of wake vortices.
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to take full advantage of the dynamic information in the BOLD signals. Third, during learning of the unknown static parameters, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneration of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
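The core loop of a bootstrap particle filter - propagate, weight, resample - is sketched below on a toy AR(1) state-space model; the authors' nonlinear hemodynamic state equations and their sufficient-statistics parameter updates are replaced here by this simpler hypothetical system.

```python
import math, random

# Bootstrap particle filter on a toy AR(1) model (hypothetical stand-in
# for the nonlinear hemodynamic model): x_t = a * x_{t-1} + noise.
def particle_filter(obs, n=500, a=0.9, q=0.1, r=0.2):
    particles = [random.gauss(0, 1) for _ in range(n)]
    means = []
    for y in obs:
        # 1. Propagate each particle through the state dynamics.
        particles = [a * x + random.gauss(0, q) for x in particles]
        # 2. Weight by the observation likelihood.
        w = [math.exp(-(y - x) ** 2 / (2 * r ** 2)) for x in particles]
        s = sum(w) or 1e-300
        w = [wi / s for wi in w]
        # 3. Resample to avoid weight degeneracy.
        particles = random.choices(particles, weights=w, k=n)
        means.append(sum(particles) / n)
    return means

print(particle_filter([0.1, 0.3, 0.2, 0.5])[-1])   # filtered state mean
```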
Computation of turbulent flow in a thin liquid layer of fluid involving a hydraulic jump
NASA Technical Reports Server (NTRS)
Rahman, M. M.; Faghri, A.; Hankey, W. L.
1991-01-01
Numerically computed flow fields and free surface height distributions are presented for the flow of a thin layer of liquid adjacent to a solid horizontal surface that encounters a hydraulic jump. Two kinds of flow configurations are considered: two-dimensional plane flow and axisymmetric radial flow. The computations used a boundary-fitted moving grid method with a k-epsilon model for the closure of turbulence. The free surface height was determined by an optimization procedure which minimized the error in the pressure distribution on the free surface. It was also checked against an approximate procedure involving integration of the governing equations and use of the MacCormack predictor-corrector method. The computed film height also compared reasonably well with previous experiments. A region of recirculating flow was found to be present adjacent to the solid boundary near the location of the jump, which was caused by a rapid deceleration of the flow.
Designing the optimal shutter sequences for the flutter shutter imaging method
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2010-04-01
Acquiring iris or face images of moving subjects at larger distances using a flash to prevent motion blur quickly runs into eye-safety concerns as the acquisition distance is increased. For that reason the flutter shutter method recently proposed by Raskar et al. has generated considerable interest in the biometrics community. The paper concerns the design of shutter sequences that produce the best images. The number of possible sequences grows exponentially in both the subject's motion velocity and the desired exposure value, with the majority of them being useless. Because the exact solution leads to an intractable mixed integer programming problem, we propose an approximate solution based on pre-screening the sequences according to the distribution of roots in their Fourier transform. A very fast algorithm utilizing the Jury criterion allows the testing to be done without explicitly computing the roots, making the approach practical for moderately long sequences.
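The brute-force version of such a screen is easy to state: keep codes whose discrete Fourier transform stays well away from zero, so the motion blur they induce remains invertible. The sketch below does this exhaustively for short codes; the code length and duty cycle are arbitrary choices, and the paper's Jury-criterion algorithm avoids computing the spectrum explicitly.

```python
import cmath
from itertools import combinations

# Exhaustive pre-screen of binary shutter codes by the minimum DFT
# magnitude (a larger minimum means better-conditioned deblurring).
def min_dft_magnitude(code):
    n = len(code)
    return min(abs(sum(c * cmath.exp(-2j * cmath.pi * k * t / n)
                       for t, c in enumerate(code)))
               for k in range(1, n))       # skip the constant DC term

def best_code(length=10, ones=5):
    best, best_m = None, -1.0
    for idx in combinations(range(length), ones):
        code = [1 if i in idx else 0 for i in range(length)]
        m = min_dft_magnitude(code)
        if m > best_m:
            best, best_m = code, m
    return best, best_m

print(best_code())
```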
NASA Astrophysics Data System (ADS)
Kitagawa, Yuya; Akinaga, Yoshinobu; Kawashima, Yukio; Jung, Jaewoon; Ten-no, Seiichiro
2012-06-01
A QM/MM (quantum-mechanical/molecular-mechanical) molecular-dynamics approach based on the generalized hybrid-orbital (GHO) method, in conjunction with second-order perturbation (MP2) theory and the second-order approximate coupled-cluster (CC2) model, is employed to calculate electronic properties accounting for a protein environment. Circular dichroism (CD) spectra originating from chiral disulfide bridges of oxytocin and insulin at room temperature are computed. It is shown that the sampling of the thermal fluctuation of molecular geometries facilitated by the GHO-MD method plays an important role in the obtained spectra. It is demonstrated that, while the protein environment in an oxytocin molecule has a significant electrostatic influence on its chiral center, this is compensated by solvent-induced charges, which provides a reasonable explanation of experimental observations. GHO-MD simulations starting from different experimental structures of insulin indicate that the existence of disulfide bridges with negative dihedral angles is crucial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Liang, E-mail: lfang@suda.edu.cn, E-mail: dawei.cao@tu-ilmenau.de; Nan, Feng; Yang, Ying
2016-02-29
BiVO{sub 4} photonic crystal inverse opals (io-BiVO{sub 4}) with highly dispersed Ag nanoparticles (NPs) were prepared by the nanosphere lithography method combined with the pulsed current deposition method. The incorporation of the Ag NPs can significantly improve the photoelectrochemical and photocatalytic activity of BiVO{sub 4} inverse opals in the visible light region. The photocurrent density of the Ag/io-BiVO{sub 4} sample is 4.7 times higher than that of the disordered sample without the Ag NPs, while the enhancement factor of the corresponding kinetic constant in the photocatalytic experiment is approximately 3. The improved photoelectrochemical and photocatalytic activity stems from two factors: one is the enhanced light harvesting owing to the coupling between the slow-light and localized surface plasmon resonance effects; the other is the efficient separation of charge carriers due to the Schottky barriers.
Approximate Model of Zone Sedimentation
NASA Astrophysics Data System (ADS)
Dzianik, František
2011-12-01
The process of zone sedimentation is affected by many factors that cannot be expressed analytically. For this reason, zone settling is evaluated in practice experimentally or by applying an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e., a general function that should properly approximate the behaviour of the settling process within its entire range and under various conditions. Furthermore, the specification of the model parameters by regression analysis of settling test results is shown. The suitability of the model is reviewed by graphical dependencies and by statistical coefficients of correlation. The approximate model could also be useful in simplifying the process design of continuous settling tanks and thickeners.
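Fitting such an approximate settling function to batch-test data is a routine least-squares exercise; the sketch below fits a hypothetical sigmoid-type interface-height curve, an assumed form for illustration rather than the functional shape developed in the paper, and the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sigmoid-type model for the interface height h(t).
def h_model(t, h0, hf, k, t0):
    return hf + (h0 - hf) / (1.0 + np.exp(k * (t - t0)))

t = np.array([0, 5, 10, 15, 20, 30, 45, 60], dtype=float)     # time, min
h = np.array([1.0, 0.95, 0.8, 0.55, 0.35, 0.2, 0.15, 0.14])   # height, m

params, _ = curve_fit(h_model, t, h, p0=[1.0, 0.15, 0.2, 15.0])
print(dict(zip(["h0", "hf", "k", "t0"], params)))
```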
Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.
Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method
Alguliyev, Rasim M.; Aliguliyev, Ramiz M.; Mahmudova, Rasmiyya S.
2015-01-01
Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. Firstly, from a computational complexity point of view, the presented model is effective. Secondly, it offers higher acceptability than the fuzzy VIKOR method. PMID:26516634
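For orientation, the crisp VIKOR ranking steps (the S, R and Q measures on which the fuzzy and modified fuzzy variants build) can be sketched in a few lines; the decision matrix, weights, and the v = 0.5 compromise setting below are arbitrary illustrations.

```python
# Crisp VIKOR sketch.  Rows = candidates, columns = benefit criteria.
def vikor(matrix, weights, v=0.5):
    cols = list(zip(*matrix))
    best, worst = [max(c) for c in cols], [min(c) for c in cols]
    S, R = [], []
    for row in matrix:
        d = [w * (b - x) / (b - wst) if b != wst else 0.0
             for x, w, b, wst in zip(row, weights, best, worst)]
        S.append(sum(d))        # group utility
        R.append(max(d))        # individual regret
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    Q = [v * (s - s_min) / ((s_max - s_min) or 1)
         + (1 - v) * (r - r_min) / ((r_max - r_min) or 1)
         for s, r in zip(S, R)]
    return Q                    # lower Q = better-ranked candidate

scores = [[7, 8, 6], [9, 6, 7], [6, 9, 8]]    # 3 candidates, 3 criteria
print(vikor(scores, [0.5, 0.3, 0.2]))
```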
NASA Astrophysics Data System (ADS)
Ye, Weiming; Li, Pengfei; Huang, Xuhui; Xia, Qinzhi; Mi, Yuanyuan; Chen, Runsheng; Hu, Gang
2010-10-01
Exploring the principles and relationships of gene transcriptional regulation (TR) has become a widely researched issue. So far, two major mathematical methods, the ordinary differential equation (ODE) method and the Boolean map (BM) method, have been widely used for these purposes. It is commonly believed that simplified BMs are reasonable approximations of more realistic ODEs, and that both methods may reveal qualitatively the same essential features even though the dynamical details of the two systems may show some differences. In this Letter we exhaustively enumerated all the 3-gene networks, and many autonomous randomly constructed TR networks with more genes, using both the ODE and BM methods. In comparison we found that both methods provide practically identical results in most cases of steady solutions. However, to our great surprise, most network structures showing periodic cycles with the BM method possess only stationary states in ODE descriptions. These observations strongly suggest that many periodic oscillations and other complicated oscillatory states revealed by the BM rule may be related to the computational errors of variable and time discretization and rarely have correspondence in realistic biological transcriptional regulatory circuits.
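The BM side of such a comparison is compact enough to show directly: under a synchronous Boolean update, every initial state falls onto a fixed point or a cycle. The sketch below traces all eight states of one illustrative 3-gene circuit (a cyclic-repression rule chosen for demonstration, not a specific network from the Letter).

```python
from itertools import product

# Synchronous Boolean-map dynamics for a 3-gene cyclic-repression
# circuit; one illustrative choice among the enumerable networks.
def update(state):
    a, b, c = state
    return (int(not c), int(not a), int(not b))

for start in product([0, 1], repeat=3):
    seen, s = [], start
    while s not in seen:          # iterate until a state repeats
        seen.append(s)
        s = update(s)
    period = len(seen) - seen.index(s)
    kind = "fixed point" if period == 1 else f"cycle of period {period}"
    print(start, "->", kind)
```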
Inoue, Kumiyo; Barratt, Alexandra; Richters, Juliet
2015-10-01
To examine the clinical and epidemiological literature addressing contraceptive method change or discontinuation and to assess whether the documented reasons reflected women's experiences. Major databases including Medline and PsycINFO were searched using keywords related to contraception and discontinuation, adherence and satisfaction, for articles published between January 2003 and February 2013. Studies in developed countries that focused on women of reproductive age and reasons for method change or discontinuation were included. Reasons reported were categorised and examined. A total of 123 papers were reviewed in detail. Medical terminology was generally used to describe reasons for method discontinuation. The top two reported reasons were bleeding and pregnancy, but there was a lack of consensus about the categorisation of reasons. Broad categories that were not self-explanatory were included in more than half of the papers, often without further explanation. Only 12 studies expanded on categories containing 'other', 'non-medical' or 'personal' reasons. Eight papers included categories that attributed discontinuation to the participant, such as 'dissatisfied with method'. Studies of reasons for discontinuation of contraceptives do not describe women's specific reasons well. Studies rely heavily on medical terms and often fail to document women's subjective experiences. Future studies should create an opportunity for women to articulate their non-medical reasons in their own words, including those related to their sexual lives. Furthermore, researchers should distinguish, if possible, between reasons for discontinuation of a method and reasons for ceasing participation in a research study. Published by the BMJ Publishing Group Limited.
Information Uncertainty to Compare Qualitative Reasoning Security Risk Assessment Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavez, Gregory M; Key, Brian P; Zerkle, David K
2009-01-01
The security risk associated with malevolent acts such as those of terrorism is often void of the historical data required for a traditional PRA. Most information available for conducting security risk assessments of these malevolent acts is obtained from subject matter experts as subjective judgements. Qualitative reasoning approaches such as approximate reasoning and evidential reasoning are useful for modeling the predicted risk from information provided by subject matter experts. Absent from these approaches is a consistent means to compare security risk assessment results. Associated with each predicted risk reasoning result is a quantifiable amount of information uncertainty which can be measured and used to compare the results. This paper explores using entropy measures to quantify the information uncertainty associated with conflict and non-specificity in the predicted reasoning results. The measured quantities of conflict and non-specificity can ultimately be used to compare qualitative reasoning results, which is important in triage studies and ultimately resource allocation. Straightforward extensions of previous entropy measures are presented here to quantify the non-specificity and conflict associated with security risk assessment results obtained from qualitative reasoning models.
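As a concrete instance of such a measure, the generalized Hartley nonspecificity of a body of evidence is computed below; the focal sets and masses are hypothetical risk ratings, and the conflict-type measures mentioned in the abstract follow the same summation pattern over focal sets.

```python
import math

# Generalized Hartley nonspecificity N(m) = sum over focal sets A of
# m(A) * log2|A|.  Focal sets and masses are hypothetical examples.
def nonspecificity(masses):
    """masses: dict mapping focal sets (frozensets) to belief mass."""
    return sum(m * math.log2(len(A)) for A, m in masses.items())

evidence = {
    frozenset({"high"}): 0.5,                   # fully specific judgement
    frozenset({"medium", "high"}): 0.3,         # partial ambiguity
    frozenset({"low", "medium", "high"}): 0.2,  # near-total ignorance
}
print(nonspecificity(evidence))   # 0.3*1 + 0.2*log2(3) ≈ 0.617 bits
```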
76 FR 22724 - Notice of Public Meeting of the Carrizo Plain National Monument Advisory Council
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-22
... School, located approximately 2 miles northwest of Soda Lake Road on Highway 58. The meeting will begin... special assistance such as sign language interpretation or other reasonable accommodations should contact...
First-order shock acceleration in solar flares
NASA Technical Reports Server (NTRS)
Ellison, D. C.; Ramaty, R.
1985-01-01
The first order Fermi shock acceleration model is compared with specific observations where electron, proton, and alpha particle spectra are available. In all events, it is found that a single shock with a compression ratio as inferred from the low energy proton spectra can reasonably produce the full proton, electron, and alpha particle spectra. The model predicts that the acceleration time to a given energy will be approximately equal for electrons and protons and, for reasonable solar parameters, can be less than 1 sec to 100 MeV.
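For reference, the standard steady-state test-particle result ties the accelerated spectrum directly to the shock compression ratio $r$ (a textbook first-order Fermi relation consistent with, though not quoted from, this abstract): the phase-space distribution is $f(p) \propto p^{-q}$ with $q = 3r/(r-1)$, so a strong shock with $r = 4$ yields $q = 4$, i.e. $dN/dE \propto E^{-2}$ for relativistic particles.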
ERIC Educational Resources Information Center
Idaho State Dept. of Education, Boise. Div. of Vocational Education.
An Idaho task force of Hispanic Americans, industry representatives, and education leaders studied the reasons Hispanic students were not enrolling in and completing vocational education programs. The task force sponsored a series of community meetings to identify reasons and solutions. Approximately 40-60 parents, students, and other interested…
Sadybekov, Arman; Krylov, Anna I.
2017-07-07
A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster (EOM-CC) theory and the effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address poor convergence issues that are encountered for the core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of solvent, such as EFP, is essential for achieving quantitative accuracy.
On the origin independence of the Verdet tensor
NASA Astrophysics Data System (ADS)
Caputo, M. C.; Coriani, S.; Pelloni, S.; Lazzeretti, P.
2013-07-01
The condition for invariance under a translation of the coordinate system of the Verdet tensor and the Verdet constant, calculated via quantum chemical methods using gaugeless basis sets, is expressed by a vanishing sum rule involving a third-rank polar tensor. The sum rule is, in principle, satisfied only in the ideal case of optimal variational electronic wavefunctions. In general, it is not fulfilled in non-variational calculations and variational calculations allowing for the algebraic approximation, but it can be satisfied for reasons of molecular symmetry. Group-theoretical procedures have been used to determine (i) the total number of non-vanishing components and (ii) the unique components of both the polar tensor appearing in the sum rule and the axial Verdet tensor, for a series of symmetry groups. Test calculations at the random-phase approximation level of accuracy for water, hydrogen peroxide and ammonia molecules, using basis sets of increasing quality, show a smooth convergence to zero of the sum rule. Verdet tensor components calculated for the same molecules converge to limit values, estimated via large basis sets of gaugeless Gaussian functions and London orbitals.
Climate variability, animal reservoir and transmission of scrub typhus in Southern China
Li, Xiaoning; Ma, Yu; Tao, Xia; Wu, Xinwei
2017-01-01
Objectives: We aimed to evaluate the relationships between climate variability, animal reservoirs and scrub typhus incidence in Southern China. Methods: We obtained data on scrub typhus cases in Guangzhou for every month from 2006 to 2014 from the Chinese communicable disease network. Time-series Poisson regression models and distributed lag nonlinear models (DLNM) were used to evaluate the relationship between risk factors and scrub typhus. Results: Wavelet analysis found that the incidence of scrub typhus cycled with a period of approximately 8–12 months, with long-term trends having a period of approximately 24–36 months. The DLNM model shows that relative humidity, rainfall, DTR, MEI and rodent density were associated with the incidence of scrub typhus. Conclusions: Our findings suggest that the incidence of scrub typhus has two main temporal cycles. Determining the reason for this trend and how it can be used for disease control and prevention requires additional research. The transmission of scrub typhus is highly dependent on climate factors and rodent density, both of which should be considered in prevention and control strategies for scrub typhus. PMID:28273079
NASA Astrophysics Data System (ADS)
Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.
2017-01-01
The present paper is motivated by the detection and isolation of a single leak using the Fault Model Approach (FMA), focused on pipelines with changes in their geometry. These changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline only considers straight geometries without fittings. In order to address this situation, several papers work with a virtual model of the pipeline that generates an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated at a virtual position, which for practical purposes is not a complete solution. This research proposes, as a solution to the problem of leak isolation at a virtual position, the use of a polynomial interpolation function to approximate the conversion of the virtual position to a real-coordinate value. Experimental results on a real prototype are shown, concluding that the proposed methodology performs well.
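As an illustration of the final step described above, the sketch below fits a low-order polynomial mapping virtual leak positions to real pipeline coordinates. The calibration pairs and polynomial order are invented for illustration; the paper does not publish its data.

```python
import numpy as np

# Hypothetical calibration pairs: leak positions in the virtual
# (equivalent-straight-length) model versus true positions on the
# physical pipeline. Values are illustrative only.
z_virtual = np.array([10.0, 35.0, 62.0, 90.0, 118.0])  # metres
z_real = np.array([8.5, 30.1, 52.7, 75.4, 98.0])       # metres

# Fit a low-order interpolation polynomial mapping virtual -> real.
to_real = np.poly1d(np.polyfit(z_virtual, z_real, deg=2))

# Convert a leak isolated at a virtual position into real coordinates.
print(f"Estimated real leak position: {to_real(70.0):.1f} m")
```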
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
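For readers unfamiliar with the Savage-Dickey device: for a nested model, the Bayes factor for the null is the ratio of posterior to prior density at the null value of the effect. A minimal sketch under Gaussian assumptions (the setting in which the Taylor-series covariance approximations above apply); the numbers are invented:

```python
from scipy.stats import norm

def savage_dickey_bf01(post_mean, post_sd, prior_mean=0.0, prior_sd=1.0):
    # Bayes factor in favour of the null (effect = 0) for a nested model:
    # posterior density at zero divided by prior density at zero.
    return norm.pdf(0.0, post_mean, post_sd) / norm.pdf(0.0, prior_mean, prior_sd)

# Illustrative voxel: posterior for the effect is N(0.8, 0.3^2), prior N(0, 1).
# BF01 < 1 favours the alternative, i.e. the voxel is declared active.
print(savage_dickey_bf01(0.8, 0.3))
```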
NASA Astrophysics Data System (ADS)
Peters, L.; Şaşıoǧlu, E.; Mertig, I.; Katsnelson, M. I.
2018-01-01
By means of ab initio calculations in conjunction with the random-phase approximation (RPA) within the full-potential linearized augmented plane wave method, we study the screening of the Coulomb interaction in NbxCo (1 ≤ x ≤ 9) clusters. In addition, these results are compared with pure bcc Nb bulk. We find that for all clusters the on-site Coulomb interaction in RPA is strongly screened, whereas the intersite nonlocal Coulomb interaction is weakly screened and for some clusters it is unscreened or even antiscreened. This is in strong contrast with pure Nb bulk, where the intersite Coulomb interaction is almost completely screened. Furthermore, constrained RPA calculations reveal that the contribution of the Co 3d → 3d channel to the total screening of the Co 3d electrons is small. Moreover, we find that both the on-site and intersite Coulomb interaction parameters decrease, in a reasonable approximation linearly, with the cluster size, and for clusters having more than 20 Nb atoms a transition from 0D to 3D screening is expected to take place.
Approximate thermochemical tables for some C-H and C-H-O species
NASA Technical Reports Server (NTRS)
Bahn, G. S.
1973-01-01
Approximate thermochemical tables are presented for some C-H and C-H-O species and for some ionized species, supplementing the JANAF Thermochemical Tables for application to finite-chemical-kinetics calculations. The approximate tables were prepared by interpolation and extrapolation of limited available data, especially by interpolations over chemical families of species. Original estimations have been smoothed by use of a modification for the CDC-6600 computer of the Lewis Research Center PACl program, which was originally prepared for the IBM-7094 computer. Summary graphs for various families show reasonably consistent curvefit values, anchored by properties of existing species in the JANAF tables.
Novelo-Casanova, D. A.; Lee, W.H.K.
1991-01-01
Using simulated coda waves, the resolution of the single-scattering model to extract coda Q (Qc) and its power law frequency dependence was tested. The back-scattering model of Aki and Chouet (1975) and the single isotropic-scattering model of Sato (1977) were examined. The results indicate that: (1) the input Qc models are reasonably well approximated by the two methods; (2) almost equal Qc values are recovered when the techniques sample the same coda windows; (3) low Qc models are well estimated in the frequency domain from the early and late part of the coda; and (4) models with high Qc values are more accurately extracted from late coda measurements. © 1991 Birkhäuser Verlag.
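The back-scattering model referred to above predicts a coda envelope A(t) ∝ t⁻¹ exp(−πft/Qc), so Qc follows from a straight-line fit to ln(A·t) versus lapse time t. A minimal sketch on synthetic data (the study's simulated codas are more elaborate):

```python
import numpy as np

def coda_q(t, amp, freq):
    # Single back-scattering model: A(t) ~ t**-1 * exp(-pi*f*t/Qc),
    # so ln(A*t) is linear in t with slope -pi*f/Qc.
    slope, _ = np.polyfit(t, np.log(amp * t), 1)
    return -np.pi * freq / slope

# Synthetic 6 Hz coda with Qc = 200 and t**-1 geometrical spreading.
t = np.linspace(20.0, 60.0, 200)  # lapse time, s
amp = (1.0 / t) * np.exp(-np.pi * 6.0 * t / 200.0)
print(coda_q(t, amp, 6.0))  # recovers ~200
```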
NASA Astrophysics Data System (ADS)
Bertens, R. A.; Alice Collaboration
2017-11-01
Elliptic (v2) and higher harmonic (v3, v4) flow coefficients of π±, K±, p (p̄), and the ϕ-meson, measured in Pb-Pb collisions at the highest-ever center-of-mass energy of √(s_NN) = 5.02 TeV, are presented. The results were obtained with the scalar product method, correlating hadrons with reference particles from a different η region. The vn exhibit a clear mass ordering for pT ≲ 2 GeV/c and only approximate particle type scaling for pT ≳ 2 GeV/c. Reasonable agreement with hydrodynamic calculations (IP-Glasma+MUSIC+UrQMD) is seen at pT ≲ 1 GeV/c.
A screening tool for delineating subregions of steady recharge within groundwater models
Dickinson, Jesse; Ferré, T.P.A.; Bakker, Mark; Crompton, Becky
2014-01-01
We have developed a screening method for simplifying groundwater models by delineating areas within the domain that can be represented using steady-state groundwater recharge. The screening method is based on an analytical solution for the damping of sinusoidal infiltration variations in homogeneous soils in the vadose zone. The damping depth is defined as the depth at which the flux variation damps to 5% of the variation at the land surface. Groundwater recharge may be considered steady where the damping depth is above the depth of the water table. The analytical solution approximates the vadose zone diffusivity as constant, and we evaluated when this approximation is reasonable. We evaluated the analytical solution through comparison of the damping depth computed by the analytic solution with the damping depth simulated by a numerical model that allows variable diffusivity. This comparison showed that the screening method conservatively identifies areas of steady recharge and is more accurate when water content and diffusivity are nearly constant. Nomograms of the damping factor (the ratio of the flux amplitude at any depth to the amplitude at the land surface) and the damping depth were constructed for clay and sand for periodic variations between 1 and 365 d and flux means and amplitudes from nearly 0 to 1 × 10⁻³ m d⁻¹. We applied the screening tool to Central Valley, California, to identify areas of steady recharge. A MATLAB script was developed to compute the damping factor for any soil and any sinusoidal flux variation.
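As a rough illustration of the damping-depth idea, assume a linearized vadose-zone diffusion model with constant diffusivity D, under which a sinusoidal flux of angular frequency ω decays with depth as exp(−z·sqrt(ω/2D)). The 5% damping depth then has a closed form; the paper's analytical solution and parameterization may differ in detail:

```python
import numpy as np

def damping_depth(D, period_days, threshold=0.05):
    # Depth at which a sinusoidal flux variation damps to `threshold`
    # of its land-surface amplitude, assuming amplitude decay
    # exp(-z*sqrt(omega/(2*D))) with constant diffusivity D (m^2/d).
    omega = 2.0 * np.pi / period_days  # rad/d
    return -np.log(threshold) / np.sqrt(omega / (2.0 * D))

# Illustrative: annual forcing with D = 1e-3 m^2/d gives roughly 1 m.
print(damping_depth(1e-3, 365.0))
```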
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present several case studies showing the strength of the method to generate highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The accurate, handy approximations obtained by direct application of the Padé method show the potential of the proposed scheme for approximating a wide variety of problems. What is more, direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, as a tool to obtain a power series solution for post-treatment with the Padé approximant.
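For orientation, a Padé approximant can be built directly from Taylor coefficients; SciPy exposes this as scipy.interpolate.pade. A minimal sketch on the exponential series (not one of the paper's test problems):

```python
import numpy as np
from scipy.interpolate import pade

# Taylor coefficients of exp(x) about 0: 1, 1, 1/2!, 1/3!, 1/4!.
an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]

# [2/2] Pade approximant: p and q are numpy.poly1d numerator/denominator.
p, q = pade(an, 2)

x = 1.0
print(p(x) / q(x), np.exp(x))  # close to e = 2.71828...
```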
[Venous thromboembolic risk during repatriation for medical reasons].
Stansal, A; Perrier, E; Coste, S; Bisconte, S; Manen, O; Lazareth, I; Conard, J; Priollet, P
2015-12-01
In France, approximately 3000 people are repatriated every year in a civilian setting by insurers; repatriation also concerns French army soldiers. The literature is scarce on the topic of venous thromboembolic risk and its prevention during repatriation for medical reasons, a common situation. Most studies have focused on the association between venous thrombosis and travel, a relationship recognized more than 60 years ago but still subject to debate. Examining the degree of venous thromboembolic risk during repatriation for medical reasons must take into account several parameters related to the patient, to comorbid conditions and to repatriation modalities. Appropriate prevention must be determined on an individual basis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
Survey of HEPA filter applications and experience at Department of Energy sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, E.H.
1981-11-01
Results indicated that approximately 58% of the filters surveyed were changed out in the 1977 to 1979 study period and some 18% of all filters were changed out more than once. Most changeouts (60%) were due to the existence of a high pressure drop across the filter, indicative of filter plugging. The next most recurrent reasons for changeout and their percentage changeouts were leak test failure (15%) and preventive maintenance service life limit (12%). An average filter service life was calculated to be 3.0 years with a 2.0-year standard deviation. The labor required for filter changeout was calculated as 1.5 man-hours per filter changed. Filter failures occurred with approximately 12% of all installed filters. Most failures (60%) occurred for unknown reasons and handling or installation damage accounted for an additional 20% of all failures. Media ruptures, filter frame failures and seal failures occurred with approximately equal frequency at 5 to 6% each. Subjective responses to the questionnaire indicate problems are: need for improved acid and moisture resistant filters; filters more readily disposable as radioactive waste; improved personnel training in filter handling and installation; and need for pretreatment of air prior to HEPA filtration.
NASA Astrophysics Data System (ADS)
Bi, Lei; Yang, Ping
2016-07-01
The accuracy of the physical-geometric optics (PG-O) approximation is examined for the simulation of electromagnetic scattering by nonspherical dielectric particles. This study seeks a better understanding of the tunneling effect on the phase matrix by employing the invariant imbedding method to rigorously compute the zeroth-order Debye series, from which the tunneling efficiency and the phase matrix corresponding to the diffraction and external reflection are obtained. The tunneling efficiency is shown to be a factor quantifying the relative importance of the tunneling effect over the Fraunhofer diffraction near the forward scattering direction. Due to the tunneling effect, different geometries with the same projected cross section might have different diffraction patterns, which are traditionally assumed to be identical according to the Babinet principle. For particles with a fixed orientation, the PG-O approximation yields the external reflection pattern with reasonable accuracy, but ordinarily fails to predict the locations of peaks and minima in the diffraction pattern. The larger the tunneling efficiency, the worse the PG-O accuracy is at scattering angles less than 90°. If the particles are assumed to be randomly oriented, the PG-O approximation yields the phase matrix close to the rigorous counterpart, primarily due to error cancellations in the orientation-average process. Furthermore, the PG-O approximation based on an electric field volume-integral equation is shown to usually be much more accurate than the Kirchhoff surface integral equation at side-scattering angles, particularly when the modulus of the complex refractive index is close to unity. Finally, tunneling efficiencies are tabulated for representative faceted particles.
Ran, Congcong; Chen, Dan; Xu, Meng; Du, Chaohui; Li, Qinglian; Jiang, Ye
2016-08-15
To examine how methods affect the evaluation of entrapment efficiency (EE) of liposomes, four different sample pretreatment methods were adopted in the experiment. The four sample pretreatment methods were size-exclusion chromatography (SEC), solid-phase extraction (SPE), centrifugation ultrafiltration (CF-UF) and hollow fiber centrifugal ultrafiltration (HF-CF-UF). Amphotericin B (AmB), which can self-associate to form aggregates in water, was adopted as the model drug in this paper. In the present work, it was found that the characterization results of the four methods were quite different. The EE of the liposome by SEC was about 93%, only 5-13% using C18 or HLB columns, and approximately 100% by CF-UF. The EE by HF-CF-UF reached up to nearly 99.0%. Further, this paper revealed the reasons for the differences in EE among the four methods. Conventional SEC may distort the authentic EE of liposomes, mainly by employing some small liposomes or excessive water as eluent. For SPE, cholesterol on the liposome surface can interact with the stationary phase, making it hard to elute with water, and increases the risk of liposome leakage. For CF-UF, concentration polarization is a main limitation hindering unentrapped drug from passing through the membrane, making unentrapped drug undetectable in the liposome sample. HF-CF-UF can truly reflect the EE of liposomes when the concentration of unentrapped AmB is lower than 25.0 μg/mL. However, when the concentration is higher than 25.0 μg/mL, AmB aggregates can be entrapped by the hollow fiber. From the above analysis, this paper comes to the conclusion that each method has its own characteristics. This study provides a reasonable guideline for choosing methods to characterize the EE of liposomes. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Athy, Jeremy; Friedrich, Jeff; Delany, Eileen
2008-05-01
Egon Brunswik (1903–1955) first made an interesting distinction between perception and explicit reasoning, arguing that perception included quick estimates of an object's size, nearly always resulting in good approximations in uncertain environments, whereas explicit reasoning, while better at achieving exact estimates, could often fail by wide margins. An experiment conducted by Brunswik to investigate these ideas was never published and the only available information is a figure of the results presented in a posthumous book in 1956. We replicated and extended his study to gain insight into the procedures Brunswik used in obtaining his results. Explicit reasoning resulted in fewer errors, yet more extreme ones, than perception. Brunswik's graphical analysis of the results led to different conclusions, however, than did a modern statistically-based analysis.
Ultrasonic Method for Deployment Mechanism Bolt Element Preload Verification
NASA Technical Reports Server (NTRS)
Johnson, Eric C.; Kim, Yong M.; Morris, Fred A.; Mitchell, Joel; Pan, Robert B.
2014-01-01
Deployment mechanisms play a pivotal role in mission success. These mechanisms often incorporate bolt elements for which a preload within a specified range is essential for proper operation. A common practice is to torque these bolt elements to a specified value during installation. The resulting preload, however, can vary significantly with applied torque for a number of reasons. The goal of this effort was to investigate ultrasonic methods as an alternative for bolt preload verification in such deployment mechanisms. A family of non-explosive release mechanisms widely used by satellite manufacturers was chosen for the work. A willing contractor permitted measurements on a sampling of bolt elements for these release mechanisms that were installed by a technician following a standard practice. A variation of approximately 50% (+/- 25%) in the resultant preloads was observed. An alternative ultrasonic method to set the preloads was then developed and calibration data was accumulated. The method was demonstrated on bolt elements installed in a fixture instrumented with a calibrated load cell and designed to mimic production practice. The ultrasonic method yielded results within +/- 3% of the load cell reading. The contractor has since adopted the alternative method for its future production.
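In practice, ultrasonic preload verification reduces to converting the shift in round-trip time-of-flight between the unloaded and loaded bolt into force through a calibration constant obtained on a load-cell fixture like the one described above. A minimal sketch with invented numbers:

```python
def preload_from_tof_shift(delta_t_ns, k_newtons_per_ns):
    # A single calibration constant lumps together the elastic stretch
    # of the bolt and the acousto-elastic change in sound speed; it is
    # obtained by regressing load-cell force on time-of-flight shift.
    return k_newtons_per_ns * delta_t_ns

# Illustrative: a 25 ns shift with k = 180 N/ns gives a 4.5 kN preload.
print(preload_from_tof_shift(25.0, 180.0))
```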
Equal Plate Charges on Series Capacitors?
ERIC Educational Resources Information Center
Illman, B. L.; Carlson, G. T.
1994-01-01
Provides a line of reasoning in support of the contention that the equal charge proposition is at best an approximation. Shows how the assumption of equal plate charge on capacitors in series contradicts the conservative nature of the electric field. (ZWH)
DOT National Transportation Integrated Search
1979-01-01
Presented is a relatively simple empirical equation that reasonably approximates the relationship between mesoscale carbon monoxide (CO) concentrations, areal vehicular CO emission rates, and the meteorological factors of wind speed and mixing height...
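The record above does not reproduce the equation itself. A generic box-model form consistent with the variables named (areal emission rate, wind speed, mixing height) would look like the following sketch; the coefficients and fetch length are placeholders, not the report's fitted values:

```python
def co_box_model(q_area, wind_speed, mixing_height, background=0.0, fetch=1e4):
    # Crude box model: concentration = background + q*L / (u*H), with
    # q in g m^-2 s^-1, fetch L in m, wind speed u in m s^-1 and
    # mixing height H in m, giving g m^-3.
    return background + q_area * fetch / (wind_speed * mixing_height)

print(co_box_model(5e-7, 3.0, 500.0))
```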
Approximate Reasoning: Past, Present, Future
1990-06-27
This note presents a personal view of the state of the art in the representation and manipulation of imprecise and uncertain information by automated ... processing systems. To contrast their objectives and characteristics with the sound deductive procedures of classical logic, methodologies developed
Analysis of collapse in flattening a micro-grooved heat pipe by lateral compression
NASA Astrophysics Data System (ADS)
Li, Yong; He, Ting; Zeng, Zhixin
2012-11-01
The collapse of thin-walled micro-grooved heat pipes is a common phenomenon in the tube flattening process, which seriously influences the heat transfer performance and appearance of the heat pipe. At present, there is no better method to solve this problem. A new method of heating the heat pipe is proposed to eliminate the collapse during the flattening process. The effectiveness of the proposed method is investigated through a theoretical model, a finite element (FE) analysis, and experiments. Firstly, a theoretical model based on a deformation model of six plastic hinges and the Antoine equation for the working fluid is established to analyze the collapse of thin walls at different temperatures. Then, the FE simulation and experiments of the flattening process at different temperatures are carried out and compared with the theoretical model. Finally, the FE model is used to study the loads on the plates at different temperatures and heights of flattened heat pipes. The results of the theoretical model conform to those of the FE simulation and experiments in the flattened zone. The collapse occurs at room temperature. As the temperature increases, the collapse decreases and finally disappears at approximately 130 °C for various heights of flattened heat pipes. The loads on the moving plate increase as the temperature increases. Thus, the reasonable temperature for eliminating the collapse and reducing the load is approximately 130 °C. The advantage of the proposed method is that the collapse is reduced or eliminated by means of the thermal deformation characteristic of the heat pipe itself instead of by external support. As a result, the heat transfer efficiency of the heat pipe is raised.
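The Antoine equation mentioned above gives the saturation vapor pressure that supports the tube wall during hot flattening. A minimal sketch, assuming water as the working fluid with the commonly quoted 1-100 °C constants (extrapolated slightly to 130 °C for illustration):

```python
def antoine_pressure_mmhg(T_celsius, A=8.07131, B=1730.63, C=233.426):
    # Antoine equation log10(P) = A - B / (C + T); the constants here
    # are the usual water set (mmHg, deg C). Other working fluids need
    # their own constants.
    return 10.0 ** (A - B / (C + T_celsius))

for T in (20, 100, 130):
    print(T, round(antoine_pressure_mmhg(T)))  # ~2000 mmHg (~2.7 atm) at 130 C
```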
26 CFR 1.162-28 - Allocation of costs to lobbying activities.
Code of Federal Regulations, 2010 CFR
2010-04-01
... lobbying activities and prescribes rules permitting a taxpayer to use a reasonable method to allocate those... method of allocating costs—(1) In general. A taxpayer must use a reasonable method to allocate the costs described in paragraph (c) of this section to lobbying activities. A method is not reasonable unless it is...
Testing approximate theories of first-order phase transitions on the two-dimensional Potts model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, C.; Pandit, R.
The two-dimensional, q-state (q > 4) Potts model is used as a testing ground for approximate theories of first-order phase transitions. In particular, the predictions of a theory analogous to the Ramakrishnan-Yussouff theory of freezing are compared with those of ordinary mean-field (Curie-Weiss) theory. It is found that the Curie-Weiss theory is a better approximation than the Ramakrishnan-Yussouff theory, even though the former neglects all fluctuations. It is shown that the Ramakrishnan-Yussouff theory overestimates the effects of fluctuations in this system. The reasons behind the failure of the Ramakrishnan-Yussouff approximation and the suitability of using the two-dimensional Potts model as a testing ground for these theories are discussed.
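For reference, the Curie-Weiss (mean-field) treatment of the q-state Potts model reduces to a one-variable self-consistency equation whose ordered solution appears discontinuously, the mean-field signature of a first-order transition. A minimal sketch of the standard textbook form (not the Ramakrishnan-Yussouff functional tested in the paper):

```python
import numpy as np

def potts_mf_order(q, beta_jz, m0=0.99, iters=2000):
    # Fixed-point iteration of the mean-field self-consistency equation
    #   m = (exp(beta*J*z*m) - 1) / (exp(beta*J*z*m) + q - 1),
    # with beta_jz = beta*J*z. Starting near m = 1 finds the ordered
    # branch when it exists; m = 0 is always a solution.
    m = m0
    for _ in range(iters):
        e = np.exp(beta_jz * m)
        m = (e - 1.0) / (e + q - 1.0)
    return m

# q = 10: the ordered solution switches on abruptly as the coupling grows.
for bjz in (4.0, 4.5, 5.0, 5.5):
    print(bjz, potts_mf_order(10, bjz))
```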
Shelef, M
1994-05-23
In 1970, before the implementation of strict controls on emissions in motor vehicle exhaust gas (MVEG), the annual USA incidence of fatal accidents by carbon monoxide in the MVEG was approximately 800 and that of suicides approximately 2000 (somewhat less than 10% of total suicides). In 1987, there were approximately 400 fatal accidents and approximately 2700 suicides by MVEG. Accounting for the growth in population and vehicle registration, the yearly lives saved in accidents by MVEG were approximately 1200 in 1987 and avoided suicides approximately 1400. The decrease in accidents continues unabated while the decrease in expected suicides by MVEG reached a plateau in 1981-1983. The reasons for this disparity are discussed. Juxtaposition of these results with the projected cancer risk avoidance of less than 500 annually in 2005 (as compared with 1986) plainly shows that, in terms of mortality, the unanticipated benefits of emission control far overshadow the intended benefits. With the spread of MVEG control these benefits will accrue worldwide.
77 FR 64367 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-19
... burden associated with money market funds' adoption of certain policies and procedures aimed at ensuring that these funds meet reasonably foreseeable shareholder redemptions (the ``general liquidity... complying with the general liquidity requirement. Approximately 10 money market funds were newly registered...
ERIC Educational Resources Information Center
Wolfensberger, Wolf
1984-01-01
The author estimates that approximately 200,000 lives of devalued disabled people (including infants and older adults) are taken or abbreviated annually through euthanasia and termination of life-supporting measures. He cites possible reasons for limited public outcry against what he compares with the holocaust. (CL)
SPARC GENERATED CHEMICAL PROPERTIES DATABASE FOR USE IN NATIONAL RISK ASSESSMENTS
The SPARC (Sparc Performs Automated Reasoning in Chemistry) Model was used to provide temperature dependent algorithms used to estimate chemical properties for approximately 200 chemicals of interest to the promulgation of the Hazardous Waste Identification Rule (HWIR). Proper...
The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations
NASA Astrophysics Data System (ADS)
Degen, D.; Veroy, K.; Wellmann, F.
2017-12-01
Due to the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and the limited accessibility, we face high-dimensional problems associated with high uncertainties in geosciences. Performing the obviously necessary uncertainty quantifications with a reasonable number of parameters is often not possible due to the high-dimensional character of the problem. Therefore, we present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be made beforehand so that real-time results are available during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time context. We illustrate the advantages of the RB method for the field of geosciences through two examples of numerical forward simulations. The first example is a geothermal conduction problem demonstrating the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given. Additionally, the runtimes for both the FE and the RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution in contrast to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance for such infrastructures, especially pointing out its scalability, yielding optimal usage on HPC infrastructures and normal workstations.
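A toy version of the offline/online split can be written in a few lines with a POD (proper orthogonal decomposition) basis; the problem below is an invented affine-parametric linear system, not the geothermal or Darcy model of the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 400  # full-order dimension (stand-in for FE degrees of freedom)

# Invented affine-parametric full-order system A(mu) u = b.
A0 = np.diag(np.linspace(1.0, 2.0, N))
A1 = 0.01 * rng.standard_normal((N, N))
A1 = A1 + A1.T
b = np.ones(N)

def solve_full(mu):
    return np.linalg.solve(A0 + mu * A1, b)

# Offline stage: snapshot solutions at training parameters, POD basis.
snapshots = np.column_stack([solve_full(mu) for mu in np.linspace(0.0, 1.0, 20)])
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :5]  # 5 modes

# Online stage: cheap 5x5 Galerkin-projected solve. (With the affine
# structure, V.T @ A_i @ V would be precomputed offline, making the
# online cost independent of N.)
def solve_rb(mu):
    Ar = V.T @ (A0 + mu * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ b)

mu = 0.37
err = np.linalg.norm(solve_full(mu) - solve_rb(mu)) / np.linalg.norm(solve_full(mu))
print(f"relative RB error: {err:.2e}")
```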
Howard, George; Moy, Claudia S.; Howard, Virginia J.; McClure, Leslie A.; Kleindorfer, Dawn O.; Kissela, Brett M.; Judd, Suzanne E.; Unverzagt, Fredrick W.; Soliman, Elsayed Z.; Safford, Monika M.; Cushman, Mary; Flaherty, Matthew L.; Wadley, Virginia G.
2016-01-01
Background and Purpose At age 45, Blacks have a stroke mortality approximately three times greater than their White counterparts, with a declining disparity at older ages. We assess whether this Black-White disparity in stroke mortality is attributable to a Black-White disparity in stroke incidence versus a disparity in case-fatality. Methods We first assess if Black-White differences in stroke mortality within 29,681 participants in the REasons for Geographic And Racial Differences in Stroke (REGARDS) cohort reflect national Black-White differences in stroke mortality, and then assess the degree to which Black-White differences in stroke incidence or 30-day case-fatality after stroke contribute to the disparities in stroke mortality. Results The pattern of stroke mortality within the study mirrors the national pattern, with the Black-to-White hazard ratio of approximately 4.0 at age 45 decreasing to approximately 1.0 at age 85. The pattern of Black-to-White disparities in stroke incidence shows a similar pattern, but no evidence of a corresponding disparity in stroke case-fatality. Discussion These findings show that the Black-White differences in stroke mortality are largely driven by differences in stroke incidence, with case-fatality playing at most a minor role. Therefore, to reduce the Black-White disparity in stroke mortality, interventions need to focus on prevention of stroke in Blacks. PMID:27256672
Research of Litchi Diseases Diagnosis Expertsystem Based on Rbr and Cbr
NASA Astrophysics Data System (ADS)
Xu, Bing; Liu, Liqun
To overcome the bottleneck problems of traditional rule-based diseases diagnosis systems, such as low reasoning efficiency and lack of flexibility, this work researched the integration of case-based reasoning (CBR) and rule-based reasoning (RBR), and puts forward a litchi diseases diagnosis expert system (LDDES) with an integrated reasoning method. The method uses data mining and knowledge acquisition technology to establish the knowledge base and case library. It adopts rules to guide the retrieval and matching for CBR, and uses association rule and decision tree algorithms to calculate case similarity. The experiment shows that the method can increase the system's flexibility and reasoning ability, and improve the accuracy of litchi diseases diagnosis.
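The CBR retrieval step amounts to scoring stored cases against the observed symptoms and returning the best match. A minimal sketch with invented attributes, weights and cases (the paper derives its weights from association rules and decision trees):

```python
def case_similarity(query, case, weights):
    # Weighted attribute matching: 1 if the symptom values agree, else 0.
    score = sum(w for attr, w in weights.items()
                if query.get(attr) == case["symptoms"].get(attr))
    return score / sum(weights.values())

case_library = [
    {"diagnosis": "litchi downy blight",
     "symptoms": {"leaf_spots": "brown", "fruit_rot": True, "white_mildew": True}},
    {"diagnosis": "litchi anthracnose",
     "symptoms": {"leaf_spots": "dark", "fruit_rot": True, "white_mildew": False}},
]
weights = {"leaf_spots": 0.3, "fruit_rot": 0.4, "white_mildew": 0.3}

query = {"leaf_spots": "brown", "fruit_rot": True, "white_mildew": True}
best = max(case_library, key=lambda c: case_similarity(query, c, weights))
print(best["diagnosis"])
```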
NASA Astrophysics Data System (ADS)
Mulopo, Moses M.; Seymour Fowler, H.
This study examined the differential effectiveness of traditional and discovery methods of instruction for the teaching of science concepts, understandings about science, and scientific attitudes, to learners at the concrete and formal level of cognitive development. The dependent variables were achievement, understanding science, and scientific attitude, assessed through the use of the ACS Achievement Test (high school chemistry, Form 1979), the Test on Understanding Science (Form W), and the Test on Scientific Attitude, respectively. Mode of instruction and cognitive development were the independent variables. Subjects were 120 Form IV (11th grade) males enrolled in chemistry classes in Lusaka, Zambia. Sixty of these were concrete reasoners (mean age = 18.23) randomly selected from one of the two schools. The remaining 60 subjects were formal reasoners (mean age = 18.06) randomly selected from a second boys' school. Each of these two groups was randomly split into two subgroups of 30 subjects. Traditional and discovery approaches were randomly assigned to the two subgroups of concrete reasoners and to the two subgroups of formal reasoners. Prior to instruction, the subjects were pretested using the ACS Achievement Test, the Test on Understanding Science, and the Test on Scientific Attitude. Subjects received instruction covering eight chemistry topics during approximately 10 weeks. Posttests followed using the same standard tests. Two-way analysis of covariance, with pretest scores serving as covariates, was used, and the 0.05 level of significance was adopted. The Tukey WSD technique was used as a follow-up test where applicable. It was found that (1) for the formal reasoners, the discovery group earned significantly higher understanding science scores than the traditional group, while for the concrete reasoners mode of instruction did not make a difference; (2) overall, formal reasoners earned significantly higher achievement scores than concrete reasoners; (3) in general, subjects taught by the discovery approach earned significantly higher scientific attitude scores than those taught by the traditional approach. The traditional group outperformed the discovery group in achievement scores. It was concluded that the traditional approach might be an efficient instructional mode for the teaching of scientific facts and principles to high school students, while the discovery approach seemed to be more suitable for teaching scientific attitudes and for promoting understanding about science and scientists among formal operational learners.
Transformation of two and three-dimensional regions by elliptic systems
NASA Technical Reports Server (NTRS)
Mastin, C. Wayne
1991-01-01
A reliable linear system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed but is not as reliable as other nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.
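The simplest member of the linear elliptic family is Laplace-system grid generation: x and y satisfy discrete Laplace equations in the computational square with the boundary distribution held fixed. A minimal sketch on a quarter annulus (without the control functions or aspect-ratio guidance the report discusses):

```python
import numpy as np

ni, nj = 21, 11
theta = np.linspace(0.0, np.pi / 2, ni)  # quarter-annulus boundary grid
r = np.linspace(1.0, 2.0, nj)
x = np.outer(np.cos(theta), r)  # algebraic initial grid; the boundary
y = np.outer(np.sin(theta), r)  # rows/columns stay fixed below

for _ in range(500):  # Jacobi sweeps on interior points only
    x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
    y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])

print(x[10, 5], y[10, 5])  # an interior node of the smoothed grid
```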
Nolte, Guido
2003-11-21
The equation for the magnetic lead field for a given magnetoencephalography (MEG) channel is well known for arbitrary frequencies omega but is not directly applicable to MEG in the quasi-static approximation. In this paper we derive an equation for omega = 0 starting from the very definition of the lead field instead of using Helmholtz's reciprocity theorems. The results are (a) the transpose of the conductivity times the lead field is divergence-free, and (b) the lead field differs from the one in any other volume conductor by a gradient of a scalar function. Consequently, for a piecewise homogeneous and isotropic volume conductor, the lead field is always tangential at the outermost surface. Based on this theoretical result, we formulated a simple and fast method for the MEG forward calculation for one shell of arbitrary shape: we correct the corresponding lead field for a spherical volume conductor by a superposition of basis functions, gradients of harmonic functions constructed here from spherical harmonics, with coefficients fitted to the boundary conditions. The algorithm was tested for a prolate spheroid of realistic shape for which the analytical solution is known. For high order in the expansion, we found the solutions to be essentially exact and for reasonable accuracies much fewer multiplications are needed than in typical implementations of the boundary element methods. The generalization to more shells is straightforward.
First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet
NASA Astrophysics Data System (ADS)
Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan
2017-04-01
The paper offers the fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to the oblateness and the atmospheric drag. The solution is presented in a time-explicit form, and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first one concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second step is the determination of an infinitesimal contact transformation that allows to navigate between the new and the original variables. This contact transformation is obtained in exact form, and afterwards a Taylor series approximation is proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation generates an error reduction by a factor of 7 in the position vector. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to the computationally intensive numerical methods.
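The exponential density assumption referred to above, and the drag acceleration it feeds, are simple to state. A minimal sketch with placeholder reference values for the 350 km test orbit (the paper's contact-transformation machinery is not reproduced here):

```python
import numpy as np

def density(h, rho_ref=1.0e-11, h_ref=350e3, scale_height=55e3):
    # Exponential atmosphere rho = rho_ref * exp(-(h - h_ref)/H); the
    # reference density and scale height are rough placeholders (SI units).
    return rho_ref * np.exp(-(h - h_ref) / scale_height)

def drag_acceleration(v_rel, h, cd=2.2, area=1.0, mass=100.0):
    # Cannonball drag: a = -(1/2) * rho * Cd * (A/m) * |v| * v, in m/s^2.
    return -0.5 * density(h) * cd * (area / mass) * np.linalg.norm(v_rel) * v_rel

v = np.array([7700.0, 0.0, 0.0])  # roughly circular LEO speed, m/s
print(drag_acceleration(v, 350e3))
```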
NASA Astrophysics Data System (ADS)
Rezaeian, P.; Ataenia, V.; Shafiei, S.
2017-12-01
In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The flux of the photons inside the irradiation cell is expressed as a function of monopole, dipole and quadrupole terms in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are specified by direct integration. To confirm the validity of the presented method, the flux distribution inside the irradiation cell was determined using MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. In order to show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method leads to reasonable results for any source distribution, even without any symmetry, which makes it a powerful tool for source load planning.
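Direct integration of the moments reduces, for a discretized source, to weighted sums over the source elements. A minimal sketch of monopole, dipole and traceless quadrupole moments with made-up source positions and activities (the mapping from moments to photon flux in the cell is the paper's analytical step and is not reproduced):

```python
import numpy as np

def multipole_moments(positions, strengths):
    # Discrete analogue of direct integration: monopole q, dipole p,
    # and traceless quadrupole Q of a source distribution.
    q = strengths.sum()
    p = (strengths[:, None] * positions).sum(axis=0)
    r2 = (positions ** 2).sum(axis=1)
    Q = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            Q[a, b] = (strengths * (3.0 * positions[:, a] * positions[:, b]
                                    - (a == b) * r2)).sum()
    return q, p, Q

# Two illustrative source pencils (metres, arbitrary activity units).
pos = np.array([[0.1, 0.0, -0.05], [-0.1, 0.0, 0.05]])
act = np.array([1.0, 1.0])
print(multipole_moments(pos, act))
```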
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real-time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen-space where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal error continuous terrain mesh. Our algorithm has been implemented and it operates at real-time rates.
Comparison of toxicity test methods using embryos of the grass shrimp, Palaemonetes pugio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rayburn, J.R.; Fisher, W.S.; Foss, S.S.
The embryos of the grass shrimp (Palaemonetes pugio) have shown sensitivity to the water soluble fraction of number 2 fuel oil (WSF_oil). To determine the repeatability and versatility of the grass shrimp in bioassays, detailed concentration-response curves were obtained using altered test methods. The alterations were intended to make the test system easier to use, require less volume of toxic material, and shorten the time of the assay. LC50 values for each method were obtained. The methods evaluated the differences between altering the time of exposure from 12 to 4 days. The 4-day assay in 24-well plastic plates included the time of hatch, a critical life stage of these embryos. The average 12-day LC50 in the glass Leighton tubes was 11.8% v/v WSF_oil. The coefficients of variation of the individual test methods were approximately 25%, showing that the repeatability was reasonable for bioassays. These results show that a 4-day assay is practical for screening for the detection of number 2 fuel oil contamination. However, the 12-day assay may be necessary for detection of developmental abnormalities.
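LC50 values of the kind quoted above are typically obtained by fitting a sigmoidal dose-response curve to mortality fractions. A minimal sketch with invented data (not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, lc50, slope):
    # Two-parameter log-logistic mortality curve; lc50 is the
    # concentration producing 50% mortality.
    return 1.0 / (1.0 + np.exp(-slope * (np.log(conc) - np.log(lc50))))

# Illustrative mortality fractions at WSF dilutions (% v/v).
conc = np.array([2.0, 5.0, 10.0, 15.0, 25.0])
mort = np.array([0.05, 0.20, 0.45, 0.70, 0.95])

(lc50, slope), _ = curve_fit(dose_response, conc, mort, p0=[10.0, 2.0])
print(f"LC50 ~ {lc50:.1f}% v/v")
```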
Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis
2017-01-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Modern aircraft design at transonic speeds is a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, this considerably slows down the whole design process. These analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires the unsteady transonic aerodynamics to be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem, and transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, and root-locus. Such a methodology can be incorporated into an MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2 actually designed, built, and tested at NASA Dryden Flight Research Center. The results from the full order model and the approximate reduced order model are analyzed and compared.
NASA Astrophysics Data System (ADS)
Grüning, M.; Gritsenko, O. V.; Baerends, E. J.
2002-04-01
An approximate Kohn-Sham (KS) exchange potential vxσCEDA is developed, based on the common energy denominator approximation (CEDA) for the static orbital Green's function, which preserves the essential structure of the density response function. vxσCEDA is an explicit functional of the occupied KS orbitals, which has the Slater vSσ and response vrespσCEDA potentials as its components. The latter exhibits the characteristic step structure with "diagonal" contributions from the orbital densities |ψiσ|2, as well as "off-diagonal" ones from the occupied-occupied orbital products ψiσψj(≠i)σ*. Comparison of the results of atomic and molecular ground-state CEDA calculations with those of the Krieger-Li-Iafrate (KLI), exact exchange (EXX), and Hartree-Fock (HF) methods shows that both KLI and CEDA potentials can be considered as very good analytical "closure approximations" to the exact KS exchange potential. The total CEDA and KLI energies nearly coincide with the EXX ones and the corresponding orbital energies ɛiσ are rather close to each other for the light atoms and small molecules considered. The CEDA, KLI, EXX-ɛiσ values provide the qualitatively correct order of ionizations and they give an estimate of VIPs comparable to that of the HF Koopmans' theorem. However, the additional off-diagonal orbital structure of vxσCEDA appears to be essential for the calculated response properties of molecular chains. KLI already considerably improves the calculated (hyper)polarizabilities of the prototype hydrogen chains Hn over the local density approximation (LDA) and standard generalized gradient approximations (GGAs), while the CEDA results are definitely an improvement over the KLI ones. The reasons for this success are the specific orbital structures of the CEDA and KLI response potentials, which produce in an external field an ultranonlocal field-counteracting exchange potential.
Analyzing the errors of DFT approximations for compressed water systems
NASA Astrophysics Data System (ADS)
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-01
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
Shriver, K A
1986-01-01
Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that a reasonable stability of economic depreciation rates of decline may exist over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
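A constant rate of economic depreciation is the geometric schedule sketched below; the 13% rate is purely illustrative:

```python
def book_value(initial, rate, years):
    # Constant (geometric) depreciation: value_t = initial * (1 - rate)**t.
    return [initial * (1.0 - rate) ** t for t in range(years + 1)]

# An asset worth 100,000 depreciating at a constant 13% per year.
print(book_value(100_000, 0.13, 5))
```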
Proposal for a Joint NASA/KSAT Ka-band RF Propagation Terminal at Svalbard, Norway
NASA Technical Reports Server (NTRS)
Volosin, Jeffrey; Acosta, Roberto; Nessel, James; McCarthy, Kevin; Caroglanian, Armen
2010-01-01
This slide presentation discusses the placement of a Ka-band RF propagation terminal at Svalbard, Norway. The Near Earth Network (NEN) station would be managed by Kongsberg Satellite Services (KSAT) and would benefit both NASA and KSAT. Details of the proposed NASA/KSAT campaign, and the responsibilities each party would agree to, are given. There are several reasons for the placement; a primary one is comparison with the Alaska site. Based on climatological similarities and differences with Alaska, the Svalbard site is expected to have good radiometer/beacon agreement approximately 99% of the time.
Using new aggregation operators in rule-based intelligent control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.
1990-01-01
A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
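A minimal sketch of the OWA operator itself (the weights below are illustrative, not the paper's controller settings): the weight vector interpolates between Or-like (max) and And-like (min) aggregation by acting on the sorted inputs.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weights apply to the inputs
    after they are sorted in descending order (Yager's OWA)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(w, v))

a = [0.3, 0.9, 0.6]
print(owa(a, [1, 0, 0]))          # all weight on the largest input -> Or-like (max): 0.9
print(owa(a, [0, 0, 1]))          # all weight on the smallest input -> And-like (min): 0.3
print(owa(a, [1/3, 1/3, 1/3]))    # uniform weights -> plain average: 0.6
```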
Rowe, Christopher; Gunier, Robert; Bradman, Asa; Harley, Kim G.; Kogut, Katherine; Parra, Kimberly; Eskenazi, Brenda
2016-01-01
Background Low-income communities and communities of color have been shown to experience disproportionate exposure to agricultural pesticides, which have been linked to poorer neurobehavioral outcomes in infants and children. Few studies have assessed health impacts of pesticide mixtures in the context of socioeconomic adversity. Objectives To examine associations between residential proximity to toxicity-weighted organophosphate (OP) and carbamate pesticide use during pregnancy, household- and neighborhood-level poverty during childhood, and IQ scores in 10-year-old children. Methods We evaluated associations between both nearby agricultural pesticide use and poverty measures and cognitive abilities in 10-year-old children (n = 501) using data from a longitudinal birth cohort study linked with data from the California Pesticide Use Reporting system and the American Community Survey. Associations were assessed using multivariable linear regression. Results Children of mothers in the highest quartile compared to the lowest quartile of proximal pesticide use had lower performance on Full Scale IQ [β = −3.0; 95% Confidence Interval (CI) = (−5.6, −0.3)], Perceptual Reasoning [β = −4.0; (−7.6, −0.4)], and Working Memory [β = −2.8; (−5.6, −0.1)]. Belonging to a household earning an income at or below the poverty threshold was associated with approximately two point lower scores on Full Scale IQ, Verbal Comprehension, and Working Memory. Living in the highest quartile of neighborhood poverty at age 10 was associated with approximately four point lower performance on Full Scale IQ, Verbal Comprehension, Perceptual Reasoning, and Working memory. Conclusions Residential proximity to OP and carbamate pesticide use during pregnancy and both household- and neighborhood-level poverty during childhood were independently associated with poorer cognitive functioning in children at 10 years of age. PMID:27281690
Sousa, Sérgio Filipe; Fernandes, Pedro Alexandrino; Ramos, Maria João
2009-12-31
Gas-phase optimization of single biological molecules and of small active-site biological models has become a standard approach in first principles computational enzymology. The important role played by the surrounding environment (solvent, enzyme, both) is normally only accounted for through higher-level single point energy calculations performed using a polarizable continuum model (PCM) and an appropriate dielectric constant with the gas-phase-optimized geometries. In this study we analyze this widely used approximation, by comparing gas-phase-optimized geometries with geometries optimized with different PCM approaches (and considering different dielectric constants) for a representative data set of 20 very important biological molecules--the 20 natural amino acids. A total of 323 chemical bonds and 469 angles present in standard amino acid residues were evaluated. The results show that the use of gas-phase-optimized geometries can in fact be quite a reasonable alternative to the use of the more computationally intensive continuum optimizations, providing a good description of bond lengths and angles for typical biological molecules, even for charged amino acids, such as Asp, Glu, Lys, and Arg. This approximation is particularly successful if the protonation state of the biological molecule could be reasonably described in vacuum, a requirement that was already necessary in first principles computational enzymology.
Invariant patterns in crystal lattices: Implications for protein folding algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
HART, WILLIAM E.; ISTRAIL, SORIN
2000-06-01
Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.
SU-G-IeP3-04: Effective Dose Measurements in Fast kVp Switch Dual Energy Computed Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raudabaugh, J; Moore, B; Nguyen, G
2016-06-15
Purpose: The objective of this study was two-fold: (a) to test a new approach to approximating organ dose by using the effective energy of the combined 80kV/140kV beam in dual-energy (DE) computed tomography (CT), and (b) to derive the effective dose (ED) in the abdomen-pelvis protocol in DECT. Methods: A commercial dual energy CT scanner was employed using a fast-kV switch abdomen/pelvis protocol alternating between 80 kV and 140 kV. MOSFET detectors were used for organ dose measurements. First, an experimental validation of the dose equivalency between MOSFET and ion chamber (as a gold standard) was performed using a CTDI phantom. Second, the ED of DECT scans was measured using MOSFET detectors and an anthropomorphic phantom. For ED calculations, an abdomen/pelvis scan was used using ICRP 103 tissue weighting factors; ED was also computed using the AAPM Dose Length Product (DLP) method and compared to the MOSFET value. Results: The effective energy was determined as 42.9 kV under the combined beam from half-value layer (HVL) measurement. ED for the dual-energy scan was calculated as 16.49 ± 0.04 mSv by the MOSFET method and 14.62 mSv by the DLP method. Conclusion: Tissue dose in the center of the CTDI body phantom was 1.71 ± 0.01 cGy (ion chamber) and 1.71 ± 0.06 cGy (MOSFET), respectively; this validated the use of the effective energy method for organ dose estimation. ED from the abdomen-pelvis scan was calculated as 16.49 ± 0.04 mSv by MOSFET and 14.62 mSv by the DLP method; this suggests that the DLP method provides a reasonable approximation to the ED.
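The DLP method referenced above is a single multiplication, ED ≈ k × DLP; a hedged sketch follows (the k value is the commonly tabulated adult abdomen-pelvis conversion coefficient, and the DLP is a hypothetical example, not the study's measurement).

```python
# ED from the dose-length product: ED ~ k * DLP.
k_abd_pelvis = 0.015    # mSv/(mGy*cm); commonly tabulated adult abdomen-pelvis value (assumed)
dlp = 975.0             # mGy*cm; hypothetical scan DLP for illustration
print(f"ED ~ {k_abd_pelvis * dlp:.1f} mSv")   # ~14.6 mSv
```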
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Simulated and real data are evaluated qualitatively and quantitatively to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
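One named ingredient, the generalized p-shrinkage mapping, can be sketched as follows (a Chartrand-style form is assumed here; the authors' exact variant may differ). At p = 1 it reduces to ordinary soft thresholding.

```python
import numpy as np

def p_shrink(x, tau, p):
    """Generalized p-shrinkage; p = 1 recovers soft thresholding."""
    ax = np.abs(x)
    with np.errstate(divide='ignore'):
        # threshold shrinks more gently for large |x| when p < 1
        thresh = tau ** (2.0 - p) * np.where(ax > 0, ax ** (p - 1.0), np.inf)
    return np.sign(x) * np.maximum(ax - thresh, 0.0)

x = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
print(p_shrink(x, tau=0.4, p=1.0))   # ordinary soft thresholding
print(p_shrink(x, tau=0.4, p=0.5))   # sparser, nonconvex shrinkage
```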
DFT studies on the Al, B, and P doping of silicene
NASA Astrophysics Data System (ADS)
Hernández Cocoletzi, H.; Castellanos Águila, J. E.
2018-02-01
The search for efficient adsorbents of atoms and molecules has motivated the study of systems in the presence of defects. For this reason, we have investigated theoretically the creation of mono- and di-vacancies on single layer silicene, as well as the Al, B, and P doping of silicene. Using the first-principles method with the generalized gradient approximation in the parameterization of Perdew-Burke-Ernzerhof, we have found that Al, B, and P interact strongly with Si atoms. Besides, when the vacancies are generated, the dangling bonds are saturated in pairs to form new bonds. Optimal geometries, binding energies, density of states (DOS) and charge density are reported. The results suggest that new chemical modifications can be used to modify the electronic properties of single-layer silicene.
Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory
NASA Astrophysics Data System (ADS)
Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.
2017-03-01
We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
NASA Astrophysics Data System (ADS)
Cesarone, R. J.
An account is given of the method by which the 'energy gain' accruing to a spacecraft as a result of its 'gravity-assist', parabolic-trajectory flyby of a massive body, such as a planet, is calculated. The procedure begins with the solution of the two-body portion of the problem, and the results thus obtained are used to calculate changes with respect to the other massive body in the overall scenario, namely the sun. Attention is given to the 'vector diagram' often used to display the gravity-assist effect. The present procedure is noted to be reasonably accurate for flybys in which the plane of the spacecraft's trajectory is approximately the same as that of the planet's orbit around the sun, or the ecliptic plane; this reduces the problem to one in two dimensions.
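A minimal patched-conic sketch of that vector diagram (all numbers illustrative; a hyperbolic planet-frame flyby is used for concreteness, whereas the paper works with the parabolic case): the planet-frame excess velocity is rotated by the encounter and re-added to the planet's heliocentric velocity, so the heliocentric kinetic energy changes even though the planet-frame speed does not.

```python
import numpy as np

mu_p   = 1.266e17                     # gravitational parameter of Jupiter, m^3/s^2
v_pl   = np.array([13.1e3, 0.0])      # planet's heliocentric velocity, m/s
v_in   = np.array([8.0e3, -6.0e3])    # spacecraft heliocentric velocity at arrival, m/s
r_peri = 2.0e8                        # flyby periapsis radius, m (illustrative)

v_inf_in = v_in - v_pl                # excess velocity in the planet frame
s = np.linalg.norm(v_inf_in)
delta = 2 * np.arcsin(1.0 / (1.0 + r_peri * s**2 / mu_p))   # turning angle
c, sn = np.cos(delta), np.sin(delta)
v_inf_out = np.array([c * v_inf_in[0] - sn * v_inf_in[1],
                      sn * v_inf_in[0] + c * v_inf_in[1]])  # rotated excess velocity
v_out = v_inf_out + v_pl              # back to the heliocentric frame

gain = 0.5 * (np.dot(v_out, v_out) - np.dot(v_in, v_in))
print(f"specific energy gain: {gain:.3e} J/kg")
```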
NASA Astrophysics Data System (ADS)
Zolotarev, Pavel; Eremin, Roman
2018-04-01
Modification of existing solid electrolyte and cathode materials is a topic of interest for theoreticians and experimentalists. In particular, it requires elucidation of the influence of dopants on the characteristics of the studied materials. Because of the high complexity of the configurational space of doped/deintercalated systems, application of computer modeling approaches is hindered, despite significant advances in computational facilities in recent decades. In this study, we propose a scheme which allows one to reduce the set of structures of a modeled configurational space for subsequent study by means of time-consuming quantum chemistry methods. Application of the proposed approach is exemplified through the study of the configurational space of the commercial LiNi0.8Co0.15Al0.05O2 (NCA) cathode material approximant.
Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates
NASA Astrophysics Data System (ADS)
Rodionov, Andrey
An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.
Can the Equivalent Sphere Model Approximate Organ Doses in Space?
NASA Technical Reports Server (NTRS)
Lin, Zi-Wei
2007-01-01
For space radiation protection it is often useful to calculate dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes, and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin. The ranges of the radius parameters have also been investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of the earlier study [1]. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eye, and 3.5 to 5.6 cm for the skin; the radius parameters are between 10 and 13 cm for the BFO dose.
Unification of Gauge Couplings in the E6SSM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athron, P.; King, S. F.; Luo, R.
2010-02-10
We argue that in the two-loop approximation gauge coupling unification in the exceptional supersymmetric standard model (E6SSM) can be achieved for any phenomenologically reasonable value of alpha_3(M_Z) consistent with the experimentally measured central value.
Using Order of Magnitude Calculations to Extend Student Comprehension of Laboratory Data
ERIC Educational Resources Information Center
Dean, Rob L.
2015-01-01
Author Rob Dean previously published an Illuminations article concerning "challenge" questions that encourage students to think imaginatively with approximate quantities, reasonable assumptions, and uncertain information. This article has promoted some interesting discussion, which has prompted him to present further examples. Examples…
Mathematically Talented Males and Females and Achievement in the High School Sciences.
ERIC Educational Resources Information Center
Benbow, Camilla Persson; Minor, Lola L.
1986-01-01
Using data on approximately 2,000 students drawn from three talent searches conducted by the Study of Mathematically Precocious Youth, this study investigated the relationship of possible sex differences in science achievement to sex differences in mathematical reasoning ability. (BS)
10 CFR 431.17 - Determination of efficiency.
Code of Federal Regulations, 2011 CFR
2011-01-01
... different horsepowers without duplication; (C) The basic models should be of different frame number series... be produced over a reasonable period of time (approximately 180 days), then each unit shall be tested... design may be substituted without requiring additional testing if the represented measures of energy...
Students' Moral Reasoning as Related to Cultural Background and Educational Experience.
ERIC Educational Resources Information Center
Bar-Yam, Miriam; And Others
The relationship between moral development and cultural and educational background is examined. Approximately 120 Israeli youth representing different social classes, sex, religious affiliation, and educational experience were interviewed. The youth interviewed included urban middle and lower class students, Kibbutz-born, Youth Aliyah…
Employer Sponsored Child Care: Issues and Options.
ERIC Educational Resources Information Center
Conroyd, S. Danielle
This presentation describes the child care center at Detroit's Mount Carmel Hospital, a division of the Sisters of Mercy Health Corporation employing approximately 1,550 women. Discussion focuses on reasons for establishing the center, facility acquisition, program details, program management, developmental philosophy, parent involvement, policy…
Computer program analyzes Buckling Of Shells Of Revolution with various wall construction, BOSOR
NASA Technical Reports Server (NTRS)
Almroth, B. O.; Bushnell, D.; Sobel, L. H.
1968-01-01
Computer program performs stability analyses for a wide class of shells without unduly restrictive approximations. The program uses numerical integration, finite difference, or finite element techniques to solve with reasonable accuracy almost any buckling problem for shells exhibiting orthotropic behavior.
DOT National Transportation Integrated Search
2008-01-01
A mailed survey was sent to approximately twenty thousand District Four (Kansas City Area) residents in order to gather statistical evidence for supporting or eliminating reasons for the satisfaction discrepancy between Kansas City Ar...
Program for Institutionalized Children, 1974-75.
ERIC Educational Resources Information Center
Ramsay, James G.
This program for institutionalized children, funded under the Elementary Secondary Education Act of 1965, involved approximately 2181 children in 35 institutions in the New York City metropolitan area. Children were institutionalized for a variety of reasons: they were orphaned, neglected, dependent, in need of supervision, or emotionally…
Microencapsulation by Membrane Emulsification of Biophenols Recovered from Olive Mill Wastewaters
Piacentini, Emma; Poerio, Teresa; Bazzarelli, Fabio; Giorno, Lidietta
2016-01-01
Biophenols are highly prized for their free radical scavenging and antioxidant activities. Olive mill wastewaters (OMWWs) are rich in biophenols. For this reason, there is a growing interest in the recovery and valorization of these compounds. Applications for the encapsulation have increased in the food industry as well as the pharmaceutical and cosmetic fields, among others. Advancements in micro-fabrication methods are needed to design new functional particles with target properties in terms of size, size distribution, and functional activity. This paper describes the use of the membrane emulsification method for the fine-tuning of microparticle production with biofunctional activity. In particular, in this pioneering work, membrane emulsification has been used as an advanced method for biophenols encapsulation. Catechol has been used as a biophenol model, while a biophenols mixture recovered from OMWWs were used as a real matrix. Water-in-oil emulsions with droplet sizes approximately 2.3 times the membrane pore diameter, a distribution span of 0.33, and high encapsulation efficiency (98% ± 1% and 92% ± 3%, for catechol and biophenols, respectively) were produced. The release of biophenols was also investigated. PMID:27171115
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
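The need for on the order of 1000 sampling sites follows from binomial counting statistics; a back-of-envelope sketch (the fractions below are illustrative, keyed to the ~0.2% figure in the abstract):

```python
import math

# If labeled synapses make up a fraction p of all synapses, counting N
# synapses in total yields an expected k = N*p labeled ones, with
# binomial standard error sqrt(N*p*(1-p)); the relative error shrinks
# only as 1/sqrt(N), which is why rare pathways need huge samples.
p = 0.002
for N in (1_000, 10_000, 100_000):
    k = N * p
    se = math.sqrt(N * p * (1 - p))
    print(f"N={N:>7,}: expect {k:6.1f} labeled, relative error ~{se / k:.0%}")
```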
Channel Temperature Determination for AlGaN/GaN HEMTs on SiC and Sapphire
NASA Technical Reports Server (NTRS)
Freeman, Jon C.; Mueller, Wolfgang
2008-01-01
Numerical simulation results (with emphasis on channel temperature) for a single gate AlGaN/GaN High Electron Mobility Transistor (HEMT) with either a sapphire or SiC substrate are presented. The static I-V characteristics, with concomitant channel temperatures (T_ch), are calculated using the software package ATLAS, from Silvaco, Inc. An in-depth study of analytical (and previous numerical) methods for the determination of T_ch in both single and multiple gate devices is also included. We develop a method for calculating T_ch for the single gate device with the temperature dependence of the thermal conductivity of all material layers included. We also present a new method for determining the temperature on each gate in a multi-gate array. These models are compared with experimental results, and show good agreement. We demonstrate that one may obtain the channel temperature within an accuracy of ±10 C in some cases. Comparisons between different approaches are given to show the limits, sensitivities, and needed approximations for reasonable agreement with measurements.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so as to make the measurement signals sensitive to wavelength and to effectively decrease the degree of ill-conditioning of the coefficient matrix of the linear system, which enhances the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
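A minimal sketch of the retrieval step (SciPy's LSQR applied to a made-up smooth kernel standing in for the ADA/Lambert-Beer forward model; all values illustrative):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Discretized ill-conditioned linear system K f = tau relating the size
# distribution f to spectral extinction tau. K is a toy smooth kernel.
rng = np.random.default_rng(0)
wavelengths = np.linspace(0.4, 2.0, 20)        # um, illustrative
radii = np.linspace(0.05, 2.0, 40)             # um, illustrative
K = np.exp(-((wavelengths[:, None] - radii[None, :]) ** 2) / 0.5)
f_true = np.exp(-((radii - 0.8) ** 2) / 0.05)  # monomodal "true" ASD
tau = K @ f_true + 0.01 * rng.standard_normal(len(wavelengths))

f_est, *info = lsqr(K, tau, damp=0.05)         # damped (regularized) LSQR
print("relative error:", np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))
```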
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process are dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
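A simplified version of this kind of calculation, assuming a normal population with known sigma (the authors' procedure works from the sample itself), is P(|x̄ − μ| ≤ f·σ) = 2Φ(f√n) − 1:

```python
from math import sqrt
from statistics import NormalDist

# Probability that the mean of n draws lands within f standard
# deviations of the true mean, under normality with known sigma.
Phi = NormalDist().cdf
f = 0.5                          # "within half a standard deviation"
for n in (3, 5, 10):
    p = 2 * Phi(f * sqrt(n)) - 1
    print(f"n={n:2d}: P(|xbar - mu| <= {f}*sigma) ~ {p:.2f}")
```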
First Experiences Of ISFOC In The Maintenance Of CPV Plants
NASA Astrophysics Data System (ADS)
Sánchez, D.; Martínez, M.; Gil, E.; Rubio, F.; Pachón, J. L.; Banda, P.
2010-10-01
ISFOC CPV plants are now working in normal operation, so ISFOC is beginning with the maintenance. As a first approximation, we have analyzed the incidences in the energy generation. This analysis shows that the tracker is the most vulnerable element of the installation, which is reasonable, because it is the only mechanical element. In this business, corrective maintenance actions are very expensive; therefore it is mandatory to define a good policy for preventive maintenance. With this idea, ISFOC is implementing industrial tools (SCP) to control the energy generation in order to keep the plants operating at their full potential. In this paper, we present the approach we are taking to adapt the method to measure the energy generated by the concentrators, and show how control charts can be used to organize the preventive maintenance actions. Finally, the first results are presented, where we show the potential of this method to organize the maintenance, together with initial calculations for the availability of the plants obtained with this method.
NASA Astrophysics Data System (ADS)
Hooshyar, M.; Wang, D.
2016-12-01
The empirical proportionality relationship, which indicates that the ratio of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in shallow water table environment under the following boundary conditions: 1) the soil is saturated at the land surface; and 2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and hydrostatic soil moisture profile between the no-flux boundary and water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse textured soils.
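For reference, the proportionality hypothesis F/S = Q/(P − Ia), combined with the water balance P − Ia = Q + F, yields the familiar SCS-CN runoff equation; a minimal sketch (the CN and rainfall values are illustrative):

```python
# SCS-CN event runoff: Q = (P - Ia)^2 / (P - Ia + S) for P > Ia,
# which follows from F/S = Q/(P - Ia) and P - Ia = Q + F.
def scs_runoff(P, CN, ia_ratio=0.2):
    """Event runoff depth (inches) for rainfall P (inches)."""
    S = 1000.0 / CN - 10.0        # potential maximum retention
    Ia = ia_ratio * S             # initial abstraction
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)

print(scs_runoff(P=4.0, CN=80))   # ~2.0 inches of runoff
```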
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward neural network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto et al. (1983) to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Using fuzzy logic to integrate neural networks and knowledge-based systems
NASA Technical Reports Server (NTRS)
Yen, John
1991-01-01
Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.
NASA Technical Reports Server (NTRS)
Kirschman, Randall K.; Sokolowski, Witold M.; Kolawa, Elizabeth A.
1999-01-01
Active thermal control for electronics on Mars Rovers imposes a serious penalty in weight, volume, power consumption, and reliability. Thus, we propose that thermal control be eliminated for future Rovers. From a functional standpoint there is no reason that the electronics could not operate over the entire temperature range of the Martian environment, which can vary from a low of approximately -90 C to a high of approximately +20 C during the Martian night and day. The upper end of this range is well within that for conventional electronics. Although the lower end is considerably below that for which conventional, even high-reliability, electronics is designed or tested, it is well established that electronic devices can operate at such low temperatures. The primary concern is reliability of the overall electronic system, especially in regard to the numerous daily temperature cycles that it would experience over the duration of a mission on Mars. Accordingly, key reliability issues have been identified for elimination of thermal control on future Mars Rovers. One of these is attachment of semiconductor die onto substrates and into packages. Die attachment is critical since it forms a mechanical, thermal, and electrical interface between the electronic device and the substrate or package. This paper summarizes our initial investigation of existing information related to this issue, in order to form an opinion whether die attachment techniques exist, or could be developed with reasonable effort, to withstand the Mars thermal environment for a mission duration of approximately 1 year. Our conclusion, from a review of the literature and personal contacts, is that die attachment can be made sufficiently reliable to satisfy the requirements of future Mars Rovers. Moreover, it appears that there are several possible techniques from which to choose and that the requirements could be met by judicious selection from existing methods using hard solders, soft solders, or organic adhesives. Thus, from the standpoint of die attachment, it appears feasible to eliminate thermal control for Rover electronics. We recommend that this be further investigated and verified for the specific hardware and thermal conditions appropriate to Mars Rovers.
Lee, Jung Ah; Lee, Sungkyu; Cho, Hong-Jun
2017-01-01
Introduction: The prevalence of adolescent electronic cigarette (e-cigarette) use has increased in most countries. This study aims to determine the relation between the frequency of e-cigarette use and the frequency and intensity of cigarette smoking. Additionally, the study evaluates the association between the reasons for e-cigarette use and the frequency of its use. Materials and Methods: Using the 2015 Korean Youth Risk Behavior Web-Based Survey, we included 6655 adolescents with an experience of e-cigarette use who were middle and high school students aged 13–18 years. We compared smoking experience, the frequency and intensity of cigarette smoking, and the relation between the reasons for e-cigarette uses and the frequency of e-cigarette use. Results: The prevalence of e-cigarette ever and current (past 30 days) users were 10.1% and 3.9%, respectively. Of the ever users, approximately 60% used e-cigarettes not within 1 month. On the other hand, 8.1% used e-cigarettes daily. The frequent and intensive cigarette smoking was associated with frequent e-cigarette uses. The percentage of frequent e-cigarette users (≥10 days/month) was 3.5% in adolescents who did not smoke within a month, but 28.7% among daily smokers. Additionally, it was 9.1% in smokers who smoked less than 1 cigarette/month, but 55.1% in smokers who smoked ≥20 cigarettes/day. The most common reason for e-cigarette use was curiosity (22.9%), followed by the belief that they are less harmful than conventional cigarettes (18.9%), the desire to quit smoking (13.1%), and the capacity for indoor use (10.7%). Curiosity was the most common reason among less frequent e-cigarette users; however, the desire to quit smoking and the capacity for indoor use were the most common reasons among more frequent users. Conclusions: Results showed a positive relation between frequency or intensity of conventional cigarette smoking and the frequency of e-cigarette use among Korean adolescents, and frequency of e-cigarette use differed according to the reason for the use of e-cigarettes. PMID:28335449
An efficient linear-scaling CCSD(T) method based on local natural orbitals.
Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály
2013-09-07
An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.
NASA Astrophysics Data System (ADS)
Zeng, Jing; Huang, Handong; Li, Huijie; Miao, Yuxin; Wen, Junxiang; Zhou, Fei
2017-12-01
The main emphasis of exploration and development is shifting from simple structural reservoirs to complex reservoirs, which are characterized by complex structure, thin reservoir thickness, and large burial depth. Faced with these complex geological features, hydrocarbon detection technology is a direct indicator of changes in hydrocarbon reservoirs and a good approach for delimiting the distribution of underground reservoirs. It is common to utilize the time-frequency (TF) features of seismic data in detecting hydrocarbon reservoirs. Therefore, we investigate the complex domain-matching pursuit (CDMP) method and propose some improvements. The first is the introduction of a scale parameter, which corrects the defect that atomic waveforms change only with the frequency parameter. Its introduction not only decomposes the seismic signal with high accuracy and efficiency but also reduces the number of iterations. We also integrate jumping search with ergodic search to improve computational efficiency while maintaining reasonable accuracy. We then combine the improved CDMP with the Wigner-Ville distribution to obtain a high-resolution TF spectrum. A one-dimensional modeling experiment proves the validity of our method. Based on the low-frequency domain reflection coefficient in fluid-saturated porous media, we finally obtain an approximation formula for the mobility attributes of reservoir fluid. This approximation formula is used as a hydrocarbon identification factor to predict the deep-water gas-bearing sands of the M oil field in the South China Sea. The results are consistent with the actual well test results, and our method can help inform the future exploration of deep-water gas reservoirs.
REASONS FOR ELECTRONIC CIGARETTE USE BEYOND CIGARETTE SMOKING CESSATION: A CONCEPT MAPPING APPROACH
Soule, Eric K.; Rosas, Scott R.; Nasim, Aashir
2016-01-01
Introduction Electronic cigarettes (ECIGs) continue to grow in popularity, however, limited research has examined reasons for ECIG use. Methods This study used an integrated, mixed-method participatory research approach called concept mapping (CM) to characterize and describe adults’ reasons for using ECIGs. A total of 108 adults completed a multi-module online CM study that consisted of brainstorming statements about their reasons for ECIG use, sorting each statement into conceptually similar categories, and then rating each statement based on whether it represented a reason why they have used an ECIG in the past month. Results Participants brainstormed a total of 125 unique statements related to their reasons for ECIG use. Multivariate analyses generated a map revealing 11, interrelated components or domains that characterized their reasons for use. Importantly, reasons related to Cessation Methods, Perceived Health Benefits, Private Regard, Convenience and Conscientiousness were rated significantly higher than other categories/types of reasons related to ECIG use (p<.05). There also were significant model differences in participants’ endorsement of reasons based on their demography and ECIG behaviors. Conclusions This study shows that ECIG users are motivated to use ECIGs for many reasons. ECIG regulations should address these reasons for ECIG use in addition to smoking cessation. PMID:26803400
Comment on “On the quantum theory of molecules” [J. Chem. Phys. 137, 22A544 (2012)]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutcliffe, Brian T., E-mail: bsutclif@ulb.ac.be; Woolley, R. Guy
2014-01-21
In our previous paper [B. T. Sutcliffe and R. G. Woolley, J. Chem. Phys. 137, 22A544 (2012)] we argued that the Born-Oppenheimer approximation could not be based on an exact transformation of the molecular Schrödinger equation. In this Comment we suggest that the fundamental reason for the approximate nature of the Born-Oppenheimer model is the lack of a complete set of functions for the electronic space, and the need to describe the continuous spectrum using spectral projection.
Architectures and economics for pervasive broadband satellite networks
NASA Technical Reports Server (NTRS)
Staelin, D. H.; Harvey, R. L.
1979-01-01
The size of a satellite network necessary to provide pervasive high-data-rate business communications is estimated, and one possible configuration is described which could interconnect most organizations in the United States. Within an order of magnitude, such a network might reasonably have a capacity equivalent to 10,000 simultaneous 3-Mbps channels, and rely primarily upon a cluster of approximately 3-5 satellites in a single orbital slot. Nominal prices for 3-6 Mbps video conference services might then be approximately $2000 monthly lease charge plus perhaps 70 cents per minute one way.
Simple heuristic for the viscosity of polydisperse hard spheres
NASA Astrophysics Data System (ADS)
Farr, Robert S.
2014-12-01
We build on the work of Mooney [Colloids Sci. 6, 162 (1951)] to obtain a heuristic analytic approximation to the viscosity of a suspension of any size distribution of hard spheres in a Newtonian solvent. The result agrees reasonably well with rheological data on monodisperse and bidisperse hard spheres, and also provides an approximation to the random close packing fraction of polydisperse spheres. The implied packing fraction is less accurate than that obtained by Farr and Groot [J. Chem. Phys. 131(24), 244104 (2009)], but has the advantage of being quick and simple to evaluate.
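The monodisperse Mooney form on which the paper builds is a one-liner; a sketch follows (φ_max = 0.64 is an assumed random-close-packing value, and the paper's polydisperse generalization is not reproduced here):

```python
import numpy as np

# Mooney's (1951) crowding form for the relative viscosity:
#   eta_r = exp(2.5 * phi / (1 - phi / phi_max)).
def mooney(phi, phi_max=0.64):
    return np.exp(2.5 * phi / (1.0 - phi / phi_max))

for phi in (0.1, 0.3, 0.5):
    print(f"phi={phi}: eta_r ~ {mooney(phi):.1f}")   # diverges as phi -> phi_max
```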
Dhatt, Sharmistha; Bhattacharyya, Kamal
2012-08-01
Appropriate constructions of Padé approximants are believed to provide reasonable estimates of the asymptotic (large-coupling) amplitude and exponent of an observable, given its weak-coupling expansion to some desired order. In many instances, however, sequences of such approximants are seen to converge very poorly. We outline here a strategy that exploits the idea of fractional calculus to considerably improve the convergence behavior. Pilot calculations on the ground-state perturbative energy series of quartic, sextic, and octic anharmonic oscillators reveal clearly the worth of our endeavor.
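The basic Padé construction the strategy starts from can be illustrated with SciPy (the fractional-calculus preprocessing that is the paper's contribution is not shown):

```python
import numpy as np
from scipy.interpolate import pade

# [2/2] Pade approximant built from the first five Taylor coefficients
# of exp(x); p and q are numerator/denominator polynomials.
an = [1, 1, 1/2, 1/6, 1/24]
p, q = pade(an, 2)
x = 1.0
print(p(x) / q(x), np.exp(x))   # ~2.7143 vs 2.71828...
```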
Interaction function of oscillating coupled neurons
Dodla, Ramana; Wilson, Charles J.
2013-01-01
Large scale simulations of electrically coupled neuronal oscillators often employ the phase coupled oscillator paradigm to understand and predict network behavior. We study the nature of the interaction between such coupled oscillators using weakly coupled oscillator theory. By employing piecewise linear approximations for phase response curves and voltage time courses, and parameterizing their shapes, we compute the interaction function for all such possible shapes and express it in terms of discrete Fourier modes. We find that a reasonably good approximation is achieved with four Fourier modes comprising both sine and cosine terms. PMID:24229210
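A numerical sketch of the interaction function for electrical coupling, H(φ) = (1/T)∫ Z(t)[V(t+φ) − V(t)] dt, reduced to Fourier modes (toy sinusoidal Z and V are used here; the paper works with piecewise-linear shapes):

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 1024, endpoint=False)
Z = np.sin(2 * np.pi * t)        # toy phase response curve
V = np.cos(2 * np.pi * t)        # toy voltage time course

# H(phi) as a cyclic average; np.roll shifts V by the phase offset.
phis = np.linspace(0.0, T, 256, endpoint=False)
H = np.array([np.mean(Z * (np.roll(V, -int(p / T * len(t))) - V)) for p in phis])

c = np.fft.rfft(H) / len(H)      # Fourier modes of the interaction function
print("first modes:", np.round(c[:4], 4))
```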
Fusion Propulsion System Requirements for an Interstellar Probe
NASA Technical Reports Server (NTRS)
Spencer, D. F.
1963-01-01
An examination of the engine constraints for a fusion-propelled vehicle indicates that minimum flight times for a probe to a 5 light-year star will be approximately 50 years. The principal constraint on the vehicle is the radiator weight and size necessary to dissipate the heat which enters the chamber walls from the fusion plasma. However, it is interesting, at least theoretically, that the confining magnetic field strength is of reasonable magnitude, 2 to 3 × 10^5 gauss, and the confinement time is approximately 0.1 sec.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
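A sketch of the fitting step under stated assumptions (f(u) = 1 − u/√(1+u²) is used as a representative algebraic part; exponents in a geometric sequence, coefficients by linear least squares; the paper additionally optimizes the exponent multiplier):

```python
import numpy as np

u = np.linspace(0.0, 20.0, 400)
f = 1.0 - u / np.sqrt(1.0 + u ** 2)        # representative algebraic part

b = 0.1 * 2.0 ** np.arange(8)              # geometric exponent spacing
A = np.exp(-np.outer(u, b))                # design matrix A[i, n] = exp(-b_n * u_i)
a, *_ = np.linalg.lstsq(A, f, rcond=None)  # coefficients by least squares

approx = A @ a
print("max abs error:", np.abs(approx - f).max())
```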
Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.
Estevis, Eduardo; Basso, Michael R; Combs, Dennis
2012-01-01
A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
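A sketch of the simple-difference reliable change index, one of the two methods compared (all numbers below are illustrative, not the study's):

```python
from math import sqrt

def rci(x1, x2, sd_baseline, r_xx, practice=0.0):
    """Simple-difference RCI, optionally subtracting a mean practice gain."""
    sem = sd_baseline * sqrt(1.0 - r_xx)   # standard error of measurement
    se_diff = sqrt(2.0) * sem              # SE of the difference score
    return (x2 - x1 - practice) / se_diff

# e.g. Full Scale IQ 112 -> 119, SD 15, reliability .97, with a mean
# practice gain of 7 points subtracted:
print(round(rci(112, 119, 15, 0.97, practice=7.0), 2))   # 0.0 -> no reliable change
```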
Experiences with insecticide-treated curtains: a qualitative study in Iquitos, Peru.
Paz-Soldan, Valerie A; Bauer, Karin M; Lenhart, Audrey; Cordova Lopez, Jhonny J; Elder, John P; Scott, Thomas W; McCall, Philip J; Kochel, Tadeusz J; Morrison, Amy C
2016-07-16
Dengue is an arthropod-borne viral disease responsible for approximately 400 million infections annually; the only available method of prevention is vector control. It has been previously demonstrated that insecticide treated curtains (ITCs) can lower dengue vector infestations in and around houses. As part of a larger trial examining whether ITCs could reduce dengue transmission in Iquitos, Peru, the objective of this study was to characterize the participants' experience with the ITCs using qualitative methods. Knowledge, attitudes, and practices (KAP) surveys (at baseline, and 9 and 27 months post-ITC distribution, with n = 593, 595 and 511, respectively), focus group discussions (at 6 and 12 months post-ITC distribution, with n = 18 and 33, respectively), and 11 one-on-one interviews (at 12 months post-distribution) were conducted with 605 participants who received ITCs as part of a cluster-randomized trial. Focus groups at 6 months post-ITC distribution revealed that individuals had observed their ITCs to function for approximately 3 months, after which they reported the ITCs were no longer working. Follow up revealed that the ITCs required re-treatment with insecticide at approximately 1 year post-distribution. Over half (55.3 %, n = 329) of participants at 9 months post-ITC distribution and over a third (34.8 %, n = 177) at 27 months post-ITC distribution reported perceiving a decrease in the number of mosquitoes in their home. The percentage of participants who would recommend ITCs to their family or friends in the future remained high throughout the study (94.3 %, n = 561 at 9 months and 94.6 %, n = 488 at 27 months post-distribution). When asked why, participants reported that ITCs were effective at reducing mosquitoes (81.6 and 37.8 %, at 9 and 27 months respectively), that they prevent dengue (5.7 and 51.2 %, at 9 and 27 months), that they are "beautiful" (5.9 and 3.1 %), as well as other reasons (6.9 and 2.5 %). ITCs have substantial potential for long term dengue vector control because they are liked by users, both for their perceived effectiveness and for aesthetic reasons, and because they require little proactive behavioral effort on the part of the users. Our results highlight the importance of gathering process (as opposed to outcome) data during vector control studies, without which researchers would not have become aware that the ITCs had lost effectiveness early in the trial.
Approximate Dispersion Relations for Waves on Arbitrary Shear Flows
NASA Astrophysics Data System (ADS)
Ellingsen, S. Å.; Li, Y.
2017-12-01
An approximate dispersion relation is derived and presented for linear surface waves atop a shear current whose magnitude and direction can vary arbitrarily with depth. The approximation, derived to first order of deviation from potential flow, is shown to produce good approximations at all wavelengths for a wide range of naturally occurring shear flows as well as widely used model flows. The relation reduces in many cases to a 3-D generalization of the much used approximation by Skop (1987), developed further by Kirby and Chen (1989), but is shown to be more robust, succeeding in situations where the Kirby and Chen model fails. The two approximations incur the same numerical cost and difficulty. While the Kirby and Chen approximation is excellent for a wide range of currents, the exact criteria for its applicability have not been known. We explain the apparently serendipitous success of the latter and derive proper conditions of applicability for both approximate dispersion relations. Our new model has a greater range of applicability. A second order approximation is also derived. It greatly improves accuracy, which is shown to be important in difficult cases. It has an advantage over the corresponding second-order expression proposed by Kirby and Chen in that its criterion of accuracy is explicitly known, which is not currently the case for the latter to our knowledge. Our second-order term is also arguably significantly simpler to implement, and more physically transparent, than its sibling due to Kirby and Chen.
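For orientation, the deep-water Kirby and Chen-type approximation that this paper generalizes Doppler-shifts the wave by a depth-weighted average current, ω ≈ √(gk) + k·ũ with ũ = 2k∫ U(z)e^{2kz}dz; a sketch with a toy exponential profile (values illustrative):

```python
import numpy as np

g, k = 9.81, 0.5                       # gravity (m/s^2), wavenumber (1/m)
z = np.linspace(-200.0, 0.0, 4000)
U = 1.0 * np.exp(z / 5.0)              # toy exponential shear profile (m/s)

# Depth-weighted average current (Riemann sum of 2k * U * exp(2kz)).
dz = z[1] - z[0]
u_tilde = float(np.sum(2 * k * U * np.exp(2 * k * z)) * dz)
omega = np.sqrt(g * k) + k * u_tilde
print(f"u_tilde = {u_tilde:.3f} m/s, omega = {omega:.3f} rad/s")
```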
Icon arrays help younger children's proportional reasoning.
Ruggeri, Azzurra; Vagharchakian, Laurianne; Xu, Fei
2018-06-01
We investigated the effects of two context variables, presentation format (icon arrays or numerical frequencies) and time limitation (limited or unlimited time), on the proportional reasoning abilities of children aged 7 and 10 years, as well as adults. Participants had to select, between two sets of tokens, the one that offered the highest likelihood of drawing a gold token, that is, the set of elements with the greater proportion of gold tokens. Results show that participants performed better in the unlimited time condition. Moreover, besides a general developmental improvement in accuracy, our results show that younger children performed better when proportions were presented as icon arrays, whereas older children and adults were similarly accurate in the two presentation format conditions. Statement of contribution What is already known on this subject? There is a developmental improvement in proportional reasoning accuracy. Icon arrays facilitate reasoning in adults with low numeracy. What does this study add? Participants were more accurate when they were given more time to make the proportional judgement. Younger children's proportional reasoning was more accurate when they were presented with icon arrays. Proportional reasoning abilities correlate with working memory, approximate number system, and subitizing skills.
NASA Technical Reports Server (NTRS)
Herbst, E.; Leung, C. M.
1986-01-01
In order to incorporate large ion-polar neutral rate coefficients into existing gas phase reaction networks, it is necessary to utilize simplified theoretical treatments because of the significant number of rate coefficients needed. The authors have used two simple theoretical treatments: the locked dipole approach of Moran and Hamill for linear polar neutrals and the trajectory scaling approach of Su and Chesnavich for nonlinear polar neutrals. The former approach is suitable for linear species because in the interstellar medium these are rotationally relaxed to a large extent and the incoming charged reactants can lock their dipoles into the lowest energy configuration. The latter approach is a better approximation for nonlinear neutral species, in which rotational relaxation is normally less severe and the incoming charged reactants are not as effective at locking the dipoles. The treatments are in reasonable agreement with more detailed long range theories and predict an inverse square root dependence on kinetic temperature for the rate coefficient. Compared with the locked dipole method, the trajectory scaling approach results in rate coefficients smaller by a factor of approximately 2.5.
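A hedged sketch of the locked-dipole rate coefficient in CGS units, k_LD = (2πq/√μ)[√α + μ_D√(2/(πk_BT))], showing the inverse-square-root temperature dependence noted above (the molecular constants below are rough, HCN-like assumptions for illustration, not values from the paper):

```python
from math import sqrt, pi

q     = 4.803e-10     # ion charge, esu
kB    = 1.381e-16     # Boltzmann constant, erg/K
alpha = 2.6e-24       # polarizability, cm^3 (assumed)
mu_D  = 3.0e-18       # dipole moment, esu*cm (3 debye, assumed)
mu    = 1.6e-23       # reduced mass, g (~10 amu, assumed)

def k_locked_dipole(T):
    """Locked-dipole capture rate coefficient, cm^3/s (CGS units)."""
    return (2 * pi * q / sqrt(mu)) * (sqrt(alpha) + mu_D * sqrt(2.0 / (pi * kB * T)))

for T in (10.0, 100.0, 300.0):
    print(f"T={T:5.0f} K: k ~ {k_locked_dipole(T):.2e} cm^3/s")
```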
Relativistic Confinement Resonances
NASA Astrophysics Data System (ADS)
Keating, David; Manson, Steven; Deshmukh, Pranawa
2017-04-01
Photoionization of atoms confined in a C60 fullerene has been under intense investigation in recent years, in particular the confinement-induced resonances, termed confinement resonances. The effects of the C60 potential are modeled by a static spherical well, with (in atomic units) inner radius r0 = 5.8, width Δ = 1.9, and depth U0 = -0.302, which is reasonable in the energy region well above the C60 plasmons. At very high Z, relativistic interactions become important contributors to even the qualitative nature of atomic properties; this is true for confined atomic properties as well. To explore the extent of these interactions, a theoretical study of several heavy atoms has been performed using the relativistic random phase approximation (RRPA) methodology. In order to determine which features in the photoionization cross section are due to relativity, calculations using the (nonrelativistic) random phase approximation with exchange method (RPAE) are performed for comparison. The existence of the second subshell of the spin-orbit-split doublets can induce new confinement resonances in the total cross section, which is the sum of the spin-orbit-split doublets, due to the shift in the doublet's threshold. Several examples for confined high-Z atoms are presented. Work supported by DOE and NSF.
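The static spherical well quoted above is simple enough to state directly in code. A minimal sketch using the stated atomic-unit parameters:

```python
def c60_well(r, r0=5.8, delta=1.9, u0=-0.302):
    """Static spherical-well model of the C60 cage potential (atomic
    units), with the parameters quoted above: inner radius r0, width
    delta, and depth u0. The potential is nonzero only within the
    fullerene cage wall; the confined atom sits at the origin."""
    return u0 if r0 <= r <= r0 + delta else 0.0

for r in (2.0, 6.0, 7.5, 9.0):
    print(r, c60_well(r))   # -0.302 inside the shell, 0.0 elsewhere
```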
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan
2008-02-01
One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. Despite the efforts made in recent years, this problem has not yet been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and implemented via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus known fluorophore distribution, embedded in simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning application.
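The database-fitting step can be illustrated with a minimal nearest-match lookup; the shapes and names below are hypothetical stand-ins for the actual simulated data, not the paper's implementation.

```python
import numpy as np

def best_matching_entry(measured, database):
    """Return the index of the simulated fluorescence pattern that best
    matches the measured data (least-squares sense). Each row of
    `database` is the forward-model output for one virtual tumour with
    known fluorophore distribution."""
    residuals = np.linalg.norm(database - measured, axis=1)
    return int(np.argmin(residuals))

# Hypothetical shapes: 200 virtual tumours, 64 detector readings each.
rng = np.random.default_rng(0)
database = rng.random((200, 64))
measured = database[17] + 0.01 * rng.standard_normal(64)  # noisy "real" data
print(best_matching_entry(measured, database))  # -> 17, the initial estimate
```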
NASA Astrophysics Data System (ADS)
Grib, S. A.; Leora, S. N.
2017-12-01
Macroscopic discontinuous structures observed in the solar wind are considered in the framework of magnetic hydrodynamics. The interaction of strong discontinuities is studied based on the solution of the generalized Riemann-Kochin problem. The appearance of discontinuities inside the magnetosheath after the collision of the solar wind shock wave with the bow shock front is taken into account. The propagation of secondary waves appearing in the magnetosheath is considered in the approximation of one-dimensional ideal magnetohydrodynamics. The appearance of a compression wave reflected from the magnetopause is indicated. The wave can nonlinearly break with the formation of a backward shock wave and cause the motion of the bow shock towards the Sun. The interaction between shock waves is considered with the well-known trial calculation method. It is assumed that, to first approximation, the velocity of discontinuities in the magnetosheath is constant on average. All reasoning and calculations correspond to consideration of a flow region with a velocity less than the magnetosonic speed near the Earth-Sun line. It is indicated that the results agree with the data from observations carried out on the WIND and Cluster spacecraft.
A combinatorial approach to protein docking with flexible side chains.
Althaus, Ernst; Kohlbacher, Oliver; Lenhof, Hans-Peter; Müller, Peter
2002-01-01
Rigid-body docking approaches are not sufficient to predict the structure of a protein complex from the unbound (native) structures of the two proteins. Accounting for side chain flexibility is an important step towards fully flexible protein docking. This work describes an approach that allows conformational flexibility for the side chains while keeping the protein backbone rigid. Starting from candidates created by a rigid-docking algorithm, we demangle the side chains of the docking site, thus creating reasonable approximations of the true complex structure. These structures are ranked with respect to the binding free energy. We present two new techniques for side chain demangling. Both approaches are based on a discrete representation of the side chain conformational space by the use of a rotamer library. This leads to a combinatorial optimization problem. For the solution of this problem, we propose a fast heuristic approach and an exact, albeit slower, method that uses branch-and-cut techniques. As a test set, we use the unbound structures of three proteases and the corresponding protein inhibitors. For each of the examples, the highest-ranking conformation produced was a good approximation of the true complex structure.
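The abstract does not spell out the fast heuristic, so the sketch below shows one plausible greedy scheme for the stated combinatorial problem — choosing one rotamer per side chain to minimize self plus pairwise interaction energies — purely as an illustration; the data structures and sweep strategy are assumptions, not the authors' algorithm.

```python
def greedy_rotamer_assignment(self_energy, pair_energy, n_sweeps=10):
    """Greedy heuristic for the combinatorial rotamer problem:
    repeatedly set each side chain to the rotamer that minimizes its
    self energy plus interactions with the current choices at all
    other positions. self_energy[i][r]: energy of rotamer r at
    position i; pair_energy[i][j][r][s]: interaction energy between
    rotamer r at i and rotamer s at j (defined for all i != j)."""
    n = len(self_energy)
    choice = [0] * n
    for _ in range(n_sweeps):
        for i in range(n):
            best_r, best_e = choice[i], float("inf")
            for r in range(len(self_energy[i])):
                e = self_energy[i][r] + sum(
                    pair_energy[i][j][r][choice[j]] for j in range(n) if j != i)
                if e < best_e:
                    best_r, best_e = r, e
            choice[i] = best_r
    return choice
```

In the paper's framing, the exact branch-and-cut method plays the complementary role of certifying or improving what such a heuristic produces.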
Electromagnetic launch of lunar material
NASA Technical Reports Server (NTRS)
Snow, William R.; Kolm, Henry H.
1992-01-01
Lunar soil can become a source of relatively inexpensive oxygen propellant for vehicles going from low Earth orbit (LEO) to geosynchronous Earth orbit (GEO) and beyond. This lunar oxygen could replace the oxygen propellant that, in current plans for these missions, is launched from the Earth's surface and amounts to approximately 75 percent of the total mass. The reason for considering the use of oxygen produced on the Moon is that the cost for the energy needed to transport things from the lunar surface to LEO is approximately 5 percent of the cost from the surface of the Earth to LEO. Electromagnetic launchers, in particular the superconducting quenchgun, provide a method of getting this lunar oxygen off the lunar surface at minimal cost. This cost savings comes from the fact that the superconducting quenchgun gets its launch energy from locally supplied, solar- or nuclear-generated electrical power. We present a preliminary design to show the main features and components of a lunar-based superconducting quenchgun for use in launching 1-ton containers of liquid oxygen, one every 2 hours. At this rate, nearly 4400 tons of liquid oxygen would be launched into low lunar orbit in a year.
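As a quick consistency check (an illustration, not part of the report), the quoted launch rate and annual tonnage agree:

```python
# One 1-ton container every 2 hours, running continuously.
launches_per_day = 24 // 2
tons_per_year = launches_per_day * 365
print(tons_per_year)   # 4380, i.e. "nearly 4400 tons" per year
```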
Fourth-order structural steganalysis and analysis of cover assumptions
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2006-02-01
We extend our previous work on structural steganalysis of LSB replacement in digital images, building detectors which analyse the effect of LSB operations on pixel groups as large as four. Some of the methodology previously applied to triplets of pixels carries over straightforwardly. However, we discover new complexities in the specification of a cover image model, a key component of the detector. There are many reasonable symmetry assumptions which we can make about parity and structure in natural images, only some of which provide detection of steganography, and the challenge is to identify the symmetries a) completely, and b) concisely. We give a list of possible symmetries and then reduce them to a complete, non-redundant, and approximately independent set. Some experimental results suggest that all useful symmetries are thus described. A weighting is proposed and its approximate variance stabilisation verified empirically. Finally, we apply the symmetries to create a novel quadruples detector for LSB replacement steganography. Experimental results show some improvement, in most cases, over other detectors. However, the gain in performance is moderate compared with the increased complexity of the detection algorithm, and we suggest that, without new insight, further extension of structural steganalysis may provide diminishing returns.
On the Causes of Mid-Pliocene Warmth and Polar Amplification
NASA Technical Reports Server (NTRS)
Lunt, Daniel J.; Haywood, Alan M.; Schmidt, Gavin A.; Salzmann, Ulrich; Valdes, Paul J.; Dowsett, Harry J.; Loptson, Claire A.
2012-01-01
The mid-Pliocene (approximately 3 to 3.3 Ma ago) is a period of sustained global warmth in comparison to the late Quaternary (0 to approximately 1 Ma ago), and has the potential to inform predictions of long-term future climate change. However, given that several processes potentially contributed, relatively little is understood about the reasons for the observed warmth, or the associated polar amplification. Here, using a modelling approach and a novel factorisation method, we assess the relative contributions to mid-Pliocene warmth from: elevated CO2, lowered orography, and vegetation and ice sheet changes. The results show that on a global scale, the largest contributor to mid-Pliocene warmth is elevated CO2. However, in terms of polar amplification, changes to ice sheets contribute significantly in the Southern Hemisphere, and orographic changes contribute significantly in the Northern Hemisphere. We also carry out an energy balance analysis which indicates that, on a global scale, surface albedo and atmospheric emissivity changes dominate over cloud changes. We investigate the sensitivity of our results to uncertainties in the prescribed CO2 and orographic changes, to derive uncertainty ranges for the various contributing processes.
Surface geometry of protoplanetary disks inferred from near-infrared imaging polarimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takami, Michihiro; Hasegawa, Yasuhiro; Gu, Pin-Gao
2014-11-01
We present a new method of analysis for determining the surface geometry of five protoplanetary disks observed with near-infrared imaging polarimetry using Subaru-HiCIAO. Using as inputs the observed distribution of polarized intensity (PI), disk inclination, assumed properties for dust scattering, and other reasonable approximations, we calculate a differential equation to derive the surface geometry. This equation is numerically integrated along the distance from the star at a given position angle. We show that, using these approximations, the local maxima in the PI distribution of spiral arms (SAO 206462, MWC 758) and rings (2MASS J16042165-2130284, PDS 70) are associated with local concave-up structures on the disk surface. We also show that the observed presence of an inner gap in scattered light still allows the possibility of a disk surface that is parallel to the light path from the star, or a disk that is shadowed by structures in the inner radii. Our analysis for rings does not show the presence of a vertical inner wall as often assumed in studies of disks with an inner gap. Finally, we summarize the implications of spiral and ring structures as potential signatures of ongoing planet formation.
Genome-wide association with delayed puberty in swine
USDA-ARS?s Scientific Manuscript database
An improvement in the proportion of gilts entering the herd that farrow a litter would increase overall herd performance and profitability. A significant proportion (10-30%) of gilts that enter the herd never farrow a litter; reproductive reasons account for approximately a third of gilt removals, w...
School-University Partnerships in Action: Concepts, Cases,
ERIC Educational Resources Information Center
Sirotnik, Kenneth A., Ed.; Goodlad, John I., Ed.
A general paradigm for ideal collaboration between schools and universities is proposed. It is based on a mutually collaborative arrangement between equal partners working together to meet self-interests while solving common problems. It is suggested that reasonable approximations to this ideal have great potential to effect significant…
Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin
2013-01-01
Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
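The anytime-algorithm contract described above — an answer is available whenever the controller must act, and improves with additional time — can be captured in a few lines. A minimal sketch (names hypothetical, not from the paper):

```python
import time

def anytime_improve(initial, improve, deadline_s):
    """Minimal anytime-algorithm pattern: keep a best-so-far answer
    and refine it until the allotted computation time runs out.
    `improve` maps the current answer to a (hopefully) better one."""
    best = initial
    stop = time.monotonic() + deadline_s
    while time.monotonic() < stop:
        best = improve(best)
    return best   # usable whenever the controller must act

# Toy usage: successive approximation of sqrt(2) by Newton steps.
print(anytime_improve(1.0, lambda x: 0.5 * (x + 2.0 / x), 0.01))
```

A scheduler of the kind the paper describes would allocate such deadlines across several anytime algorithms according to the expected value of their improving answers.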
Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin
2013-12-01
Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2 years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction. Copyright © 2013 Elsevier Inc. All rights reserved.
Induced mutations in mice and genetic risk assessment in humans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selby, P.B.
1980-01-01
In studies on mice, in contrast to studies on humans, it is possible to perform carefully controlled experiments with the exposures one desires. The necessity for having separate mammalian tests for looking at the induction of gene mutations and small deficiencies, and at the induction of chromosomal aberrations, is obvious. Mutagens can differ as to which of these types of damage they are more likely to cause. The reason for focusing attention on the mouse in a discussion of hazard from induced gene mutations and small deficiencies is the existence of techniques in this mammal for readily studying the induction of such genetic effects. Many mutations at the molecular level cause no apparent changes at the gene-product level and many mutations that cause changes at the gene-product level cause no detectable phenotypic changes in heterozygotes. Many dominant mutations that change the phenotype cause no serious handicap. For these reasons, risk estimation for important chemicals must rely heavily on studies on the induction of those germinal mutations in mammals that are easily related to human dominant disorders, such as skeletal and cataract mutations. Molecular or enzyme studies cannot provide definitive answers about risk. The specific-locus method should help greatly in assessing the genetic risks to humans from chemicals. The new sensitive-indicator method should complement it in providing a tool for attacking the question of what treatments induce gene mutations and small deficiencies and for approximating first-generation damage to the skeleton. (ERB)
The added mass forces in insect flapping wings.
Liu, Longgui; Sun, Mao
2018-01-21
The added mass forces of three-dimensional (3D) flapping wings of some representative insects, and the accuracy of the often-used simple two-dimensional (2D) method, are studied. The added mass force of a flapping wing is calculated by both 3D and 2D methods, and the total aerodynamic force of the wing is calculated by the CFD method. Our findings are as follows. The added mass force makes a significant contribution to the total aerodynamic force of the flapping wings during and near the stroke reversals, and the smaller the stroke amplitude is, the larger the added mass force becomes. Thus the added mass force cannot be neglected when using simple models to estimate the aerodynamic force, especially for insects with relatively small stroke amplitudes. The accuracy of the often-used simple 2D method is reasonably good: when the aspect ratio of the wing is greater than about 3.3, the error in the added mass force calculation due to the 2D assumption is less than 9%; even when the aspect ratio is 2.8 (approximately the smallest for an insect), the error is no more than 13%. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hybrid Method for Power Control Simulation of a Single Fluid Plasma Thruster
NASA Astrophysics Data System (ADS)
Jaisankar, S.; Sheshadri, T. S.
2018-05-01
Propulsive plasma flow through a cylindrical-conical diverging thruster is simulated by a power-controlled hybrid method to obtain the basic flow, thermodynamic, and electromagnetic variables. The simulation is based on a single-fluid model, with the electromagnetics described by the potential Poisson equation, Maxwell's equations, and Ohm's law, and the compressible fluid dynamics by the Navier-Stokes equations in cylindrical form. The proposed method solves the electromagnetics and fluid dynamics separately, both to segregate the two prominent scales for efficient computation and to deliver voltage-controlled rated power. The magnetic transport is solved for steady state, while the fluid dynamics is allowed to evolve in time along with an electromagnetic source, using schemes based on generalized finite-difference discretization. The multistep methodology with power control is employed to simulate fully ionized propulsive flow of argon plasma through the thruster. The numerical solution shows convergence of every part of the solver, including grid stability, causing the multistep hybrid method to converge for rated power delivery. Simulation results are in reasonable agreement with the reported physics of plasma flow in the thruster, indicating the potential utility of this hybrid computational framework, especially when a single-fluid approximation of the plasma is relevant.
Effective Clipart Image Vectorization through Direct Optimization of Bezigons.
Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian
2016-02-01
Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable, as if they had been traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.
A Gaussian beam method for ultrasonic non-destructive evaluation modeling
NASA Astrophysics Data System (ADS)
Jacquet, O.; Leymarie, N.; Cassereau, D.
2018-05-01
The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using the paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it may encounter several limitations in reproducing identified highly interferential features. Nevertheless, some can be managed by allowing paraxial quantities to be complex-valued. This gives rise to localized solutions, known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well-defined, the fact remains that the adopted complexification introduces additional initial conditions. While their choice is usually performed according to strategies specifically tailored to limited applications, a Gabor frame method has been implemented to indiscriminately initialize a reasonable number of paraxial Gaussian beams. Since this method can be applied to a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared to the commonly used Multi-Gaussian Beam model [1], a better agreement is obtained throughout the radiated field between the results of numerical integration (or the analytical on-axis solution) and the resulting Gaussian beam superposition. Sparsity of the proposed solution is also discussed.
Automatic Determination of the Conic Coronal Mass Ejection Model Parameters
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Oates, T.; Taktakishvili, A.
2009-01-01
Characterization of the three-dimensional structure of solar transients using incomplete plane of sky data is a difficult problem whose solutions have potential for societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
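The bootstrap step is a generic resampling technique: refit the model to resampled data and collect the parameter vectors. A minimal sketch, in which the fit function is a stand-in rather than the paper's inversion routine:

```python
import numpy as np

def bootstrap_params(fit, data, n_boot=1000, rng=None):
    """Bootstrap of model-parameter distributions: refit the model to
    resampled data and collect the resulting parameter vectors.
    `fit` maps a data array to a parameter vector."""
    rng = rng or np.random.default_rng()
    n = len(data)
    samples = [fit(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return np.array(samples)   # one row per bootstrap replicate

# Toy usage: distribution of the mean of noisy measurements.
rng = np.random.default_rng(1)
data = rng.normal(10.0, 2.0, 50)
dist = bootstrap_params(lambda d: np.array([d.mean()]), data, rng=rng)
print(dist.mean(), dist.std())   # centre and spread of the parameter
```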
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szczurek, Antoni; University of Rzeszów; Cisek, Anna
We discuss production of four jets pp → jjjjX with at least two jets with large rapidity separation in proton-proton collisions at the LHC through the mechanism of double-parton scattering (DPS). The cross section is calculated in a factorized approximation. Each hard subprocess is calculated in LO collinear approximation. The LO pQCD calculations are shown to give a reasonably good description of CMS and ATLAS data on inclusive jet production. It is shown that the relative contribution of DPS grows with increasing rapidity distance between the most remote jets, center-of-mass energy, and with decreasing (mini)jet transverse momenta. We also show results for angular azimuthal dijet correlations calculated in the framework of the k_t-factorization approximation.
Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng Jing; Zhou Jianying
2003-04-01
The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even when the transition frequency between the ground state and the third level is far away from the spectrum of the pulse, this additional transition can make the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When decreasing the pulse width or increasing the pulse area, the TLA will give rise to non-negligible errors compared with the precise results.
Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms
NASA Astrophysics Data System (ADS)
Cheng, Jing; Zhou, Jianying
2003-04-01
The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model.
New approximate orientation averaging of the water molecule interacting with the thermal neutron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markovic, M.I.; Minic, D.M.; Rakic, A.D.
1992-02-01
This paper reports on orientation averaging for thermal neutron collisions with water molecules, performed by an exact method (EOA_k) and four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA_k. The largest discrepancies between the EOA_k results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
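For the non-hermitian algebraic eigenvalue problem, the standard first-order sensitivity uses left and right eigenvectors in a generalized Rayleigh quotient. A minimal numpy sketch of that formula (an illustration of the textbook result, not the paper's implementation):

```python
import numpy as np

def eigenvalue_derivatives(A, dA):
    """First-order eigenvalue sensitivities of a (possibly
    non-hermitian) matrix A under a perturbation direction dA.
    With right eigenvectors as columns of X and left eigenvectors as
    rows of inv(X), the derivative is dlam_i = w_i . dA . x_i
    (normalization w_i . x_i = 1 is automatic here)."""
    lam, X = np.linalg.eig(A)
    W = np.linalg.inv(X)           # W[i] @ A == lam[i] * W[i]
    dlam = np.array([W[i] @ dA @ X[:, i] for i in range(len(lam))])
    return lam, dlam

A = np.array([[1.0, 2.0], [0.5, 3.0]])
dA = np.array([[0.0, 1.0], [0.0, 0.0]])   # perturb the (0,1) entry
lam, dlam = eigenvalue_derivatives(A, dA)
print(lam, dlam)   # compare against finite differences if desired
```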
Analytical Phase Equilibrium Function for Mixtures Obeying Raoult's and Henry's Laws
NASA Astrophysics Data System (ADS)
Hayes, Robert
When a mixture of two substances exists in both the liquid and gas phases at equilibrium, Raoult's and Henry's laws (the ideal solution and ideal dilute solution approximations) can be used to estimate the gas and liquid mole fractions at the extremes of either very little solute or very little solvent. By assuming that a cubic polynomial can reasonably approximate the values intermediate between these extremes as a function of mole fraction, the cubic polynomial is solved and presented. A closed-form equation approximating the pressure dependence on the mole fractions of the constituents is thereby obtained. As a first approximation, this is a very simple and potentially useful means to estimate gas and liquid mole fractions of equilibrium mixtures. Mixtures with an azeotrope require additional attention if this type of approach is to be utilized. This work was supported in part by federal Grant NRC-HQ-84-14-G-0059.
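One way to construct such a cubic (our reconstruction under stated assumptions, not necessarily the abstract's exact formulation) is to match the Henry's-law slope at zero mole fraction and the Raoult's-law value and slope at unit mole fraction; these four conditions fix the cubic uniquely:

```python
def partial_pressure(x, p_star, k_henry):
    """Cubic interpolation of a component's partial pressure between
    the ideal-dilute (Henry) and ideal-solution (Raoult) limits.
    Matching p(0) = 0, p'(0) = k_henry (Henry's law) and
    p(1) = p_star, p'(1) = p_star (Raoult's law) gives
        p(x) = k_henry*x + 2*(p_star - k_henry)*x**2
                         + (k_henry - p_star)*x**3."""
    return (k_henry * x
            + 2.0 * (p_star - k_henry) * x**2
            + (k_henry - p_star) * x**3)

# Hypothetical constants: pure vapour pressure 20 kPa, Henry constant 60 kPa.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, partial_pressure(x, 20.0, 60.0))
```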
Angular momentum projection for a Nilsson mean-field plus pairing model
NASA Astrophysics Data System (ADS)
Wang, Yin; Pan, Feng; Launey, Kristina D.; Luo, Yan-An; Draayer, J. P.
2016-06-01
The angular momentum projection for the axially deformed Nilsson mean-field plus a modified standard pairing (MSP) or the nearest-level pairing (NLP) model is proposed. Both the exact projection, in which all intrinsic states are taken into consideration, and the approximate projection, in which only intrinsic states with K = 0 are taken in the projection, are considered. The analysis shows that the approximate projection with only K = 0 intrinsic states seems reasonable, and it greatly reduces the configuration subspace that must be considered. As simple examples of the model's application, low-lying spectra and electromagnetic properties of 18O and 18Ne are described by using both the exact and approximate angular momentum projection of the MSP or the NLP, while those of 20Ne and 24Mg are described by using the approximate angular momentum projection of the MSP or NLP.
Nurses' behaviour regarding CPR and the theories of reasoned action and planned behaviour.
Dwyer, Trudy; Mosel Williams, Leonie
2002-01-01
Cardiopulmonary resuscitation (CPR) has been used in hospitals for approximately 40 years. Nurses are generally the first responders to a cardiac arrest and initiate basic life support while waiting for the advanced cardiac life support team to arrive. Speed and competence of the first responder are factors contributing to the initial survival of a person following a cardiac arrest. Attitudes of individual nurses may influence the speed and level of involvement in true emergency situations. This paper uses the theories of reasoned action and planned behaviour to examine some behavioural issues with CPR involvement.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
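With the exponents fixed as a geometric sequence, fitting the coefficients is a linear least-squares problem. A minimal sketch; the target function and the chosen values of the multiplier and ratio are illustrative, not the report's (which also optimizes the multiplier by least squares):

```python
import numpy as np

def fit_exponential_sum(x, f, n_terms, b, r):
    """Least-squares fit of f(x) by sum_k c_k * exp(-b * r**k * x),
    with the exponents forming a geometric sequence as described
    above. The multiplier b and ratio r are held fixed here for
    simplicity."""
    exponents = b * r ** np.arange(n_terms)
    design = np.exp(-np.outer(x, exponents))      # (len(x), n_terms)
    coeffs, *_ = np.linalg.lstsq(design, f, rcond=None)
    return coeffs, exponents

# Toy target: an algebraic kernel factor 1/(1+x) on [0, 10].
x = np.linspace(0.0, 10.0, 400)
c, e = fit_exponential_sum(x, 1.0 / (1.0 + x), n_terms=8, b=0.05, r=2.0)
approx = np.exp(-np.outer(x, e)) @ c
print(np.max(np.abs(approx - 1.0 / (1.0 + x))))   # maximum fit error
```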
Alfven wave cyclotron resonance heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, R.B.; Yosikawa, S.; Oberman, C.
1981-02-01
The resonance absorption of fast Alfven waves at the proton cyclotron resonance of a predominantly deuterium plasma is investigated. An approximate dispersion relation is derived, valid in the vicinity of the resonance, which permits an exact calculation of transmission and reflection coefficients. For reasonable plasma parameters, significant linear resonance absorption is found.
75 FR 35919 - Investment Company Advertising: Target Date Retirement Fund Names and Marketing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
... be misleading. The amendments are intended to provide enhanced information to investors concerning... intended as the approximate year of an investor's retirement, and an investor may use the date contained in... manner reasonably calculated to draw investor attention to the information is the same presentation...
Streamlining Your Emissions Inventory Updates
ERIC Educational Resources Information Center
Stokes, John
2011-01-01
Of the 677 school presidents that have signed on to the American College and University Presidents Climate Commitment (ACUPCC), approximately 200 of them are presidents of community colleges. This measure of involvement at the community college level is promising for two reasons: (1) these schools have emerged as a major provider of public higher…
Clinical Assessment Using the Clinical Rating Scale: Thomas and Olson Revisited.
ERIC Educational Resources Information Center
Lee, Robert E.; Jager, Kathleen Burns; Whiting, Jason B.; Kwantes, Catherine T.
2000-01-01
Examines whether the Clinical Rating Scale retains its validity when used by psychotherapists in their clinical practice. Confirmatory factor analysis reveals that data provides a reasonable approximation of the underlying factor structure. Concludes that although primarily considered a research instrument, the scale may have a role in clinical…
Why Adolescent Problem Gamblers Do Not Seek Treatment
ERIC Educational Resources Information Center
Ladouceur, Robert; Blaszczynski, Alexander; Pelletier, Amelie
2004-01-01
Prevalence studies indicate that approximately 40% of adolescents participate in regular gambling with rates of problem gambling up to four times greater than that found in adult populations. However, it appears that few adolescents actually seek treatment for such problems. The purpose of this study was to explore potential reasons why…
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.
2016-08-23
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
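The module-dispatch design described in these records can be sketched compactly; everything below (class and function names, the dictionary-based routing) is a hypothetical illustration, not the patented implementation:

```python
from typing import Callable, Dict, List

# Each reasoning module handles abstractions whose individuals carry a
# particular ontology classification type.
ReasoningModule = Callable[[dict], None]

class ReasoningSystem:
    def __init__(self):
        self.modules: Dict[str, List[ReasoningModule]] = {}

    def register(self, classification_type: str, module: ReasoningModule):
        self.modules.setdefault(classification_type, []).append(module)

    def process(self, semantic_graph: List[dict]):
        # Route each abstraction to the modules registered for the
        # classification type of its individual.
        for abstraction in semantic_graph:
            ctype = abstraction["individual"]["classification"]
            for module in self.modules.get(ctype, []):
                module(abstraction)

system = ReasoningSystem()
system.register("Person", lambda a: print("person module:", a["individual"]["name"]))
system.register("Event", lambda a: print("event module:", a["individual"]["name"]))
system.process([{"individual": {"classification": "Person", "name": "alice"}}])
```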
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.
2015-08-18
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E; Greitzer, Frank L; Hampton, Shawn D
2014-03-04
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.
Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori
2016-12-13
The configurational entropy of solute molecules is a crucially important quantity to study various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of the configurational entropy upon temperature change. Notably, we focused on the choice of the coordinate systems (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all the six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and anharmonicity of proper torsions in proteins is identified to be the origin of the superior performance of the BQH method. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergent behavior of entropy calculations. Also in folding/unfolding transitions of a small protein, Chignolin, the BQH method was reasonably accurate. However, the independent term without the correlation term in the BQH method was most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method was no longer valid for the divergent unfolded structures.
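For context, the baseline quasi-harmonic estimate that the compared methods build on can be computed from the mass-weighted covariance of a Cartesian trajectory. A sketch of the textbook formula (not the BQH method itself; the toy trajectory below is random data for shape only):

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K

def quasi_harmonic_entropy(coords, masses, temperature):
    """Textbook quasi-harmonic configurational entropy from a
    Cartesian trajectory. `coords`: (n_frames, 3N) in metres;
    `masses`: length-3N array in kg (each atom's mass repeated for
    x, y, z)."""
    centered = coords - coords.mean(axis=0)
    mw = centered * np.sqrt(masses)              # mass-weighted fluctuations
    cov = mw.T @ mw / len(coords)                # covariance, kg m^2
    lam = np.linalg.eigvalsh(cov)
    lam = lam[lam > 1e-60]                       # drop null/rigid-body modes
    omega = np.sqrt(KB * temperature / lam)      # quasi-harmonic frequencies
    x = HBAR * omega / (KB * temperature)
    s_modes = x / np.expm1(x) - np.log1p(-np.exp(-x))  # quantum HO entropy
    return KB * s_modes.sum()                    # J/K

rng = np.random.default_rng(0)
traj = 1e-10 * rng.standard_normal((5000, 9))    # 3 atoms, hypothetical
masses = np.repeat(1.66e-27 * np.array([12.0, 1.0, 1.0]), 3)
print(quasi_harmonic_entropy(traj, masses, 300.0), "J/K")
```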
Prediction of destabilizing blade tip forces for shrouded and unshrouded turbines
NASA Technical Reports Server (NTRS)
Qiu, Y. J.; Martinezsanchez, M.
1985-01-01
The effect of a nonuniform flow field on the Alford force calculation is investigated. The ideas used here are based on those developed by Horlock and Greitzer. It is shown that the nonuniformity of the flow field does contribute to the Alford force calculation. An attempt is also made to include the effect of whirl speed. The values predicted by the model are compared with those obtained experimentally by Urlicks and Wohlrab. The possibility of using existing turbine tip loss correlations to predict beta is also explored. The nonuniform flow field induced by the tip clearance variation tends to increase the resultant destabilizing force over and above what would be predicted on the basis of the local variation of efficiency. In addition, the pressure force due to the nonuniform inlet and exit pressure also plays a part even for unshrouded blades, and this counteracts the flow field effects, so that the simple Alford prediction remains a reasonable approximation. Once the efficiency variation with clearance is known, the presented model gives a slightly overpredicted, but reasonably accurate, destabilizing force. In the absence of efficiency vs. clearance data, an empirical tip loss coefficient can be used to give a reasonable prediction of the destabilizing force. To a first approximation, the whirl does have a damping effect, but only of small magnitude, and thus it can be ignored for some purposes.
Clinical reasoning and its application to nursing: concepts and research studies.
Banning, Maggi
2008-05-01
Clinical reasoning may be defined as "the process of applying knowledge and expertise to a clinical situation to develop a solution" [Carr, S., 2004. A framework for understanding clinical reasoning in community nursing. J. Clin. Nursing 13 (7), 850-857]. Several forms of reasoning exist, each with its own merits and uses. Reasoning involves the processes of cognition, or thinking, and metacognition. In nursing, clinical reasoning skills are an expected component of expert and competent practice. Nursing research studies have identified concepts, processes, and thinking strategies that might underpin the clinical reasoning used by pre-registration nurses and experienced nurses. Much of the available research on reasoning is based on the use of the think-aloud approach. Although this is a useful method, it is dependent on the ability to describe and verbalise the reasoning process. More nursing research is needed to explore the clinical reasoning process. Investment in teaching and learning methods is needed to enhance clinical reasoning skills in nurses.
Hu, Youxin; Shanjani, Yaser; Toyserkani, Ehsan; Grynpas, Marc; Wang, Rizhi; Pilliar, Robert
2014-02-01
Porous calcium polyphosphate (CPP) structures, proposed as bone-substitute implants, were made by sintering CPP powders; bending test samples of approximately 35 vol % porosity were machined from preformed blocks made either by additive manufacturing (AM) or conventional gravity sintering (CS) methods, and the structure and mechanical characteristics of samples so made were compared. AM-made samples displayed higher bending strengths (≈1.2-1.4 times greater than CS-made samples), whereas the elastic constant (i.e., effective elastic modulus of the porous structures), which is determined by the material elastic modulus and structural geometry of the samples, was ≈1.9-2.3 times greater for AM-made samples. X-ray diffraction analysis showed that samples made by either method displayed the same crystal structure, forming β-CPP after sinter annealing. The material elastic modulus, E, determined using nanoindentation tests, also showed the same value for both sample types (i.e., E ≈ 64 GPa). Examination of the porous structures indicated that significantly larger sinter necks resulted in the AM-made samples, which presumably produced the higher mechanical properties. The development of mechanical properties was attributed to the different sinter anneal procedures required to make 35 vol % porous samples by the two methods. A primary objective of the present study, in addition to reporting on bending strength and sample stiffness (elastic constant) characteristics, was to determine why the two processes resulted in the observed mechanical property differences for samples of equivalent volume percentage of porosity. An understanding of the fundamental reason(s) for the observed effect is considered important for developing improved processes for preparation of porous CPP implants as bone substitutes for use in high load-bearing skeletal sites. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Christian, Karen Jeanne
2011-12-01
Students often use study groups to prepare for class or exams; yet to date, we know very little about how these groups actually function. This study looked at the ways in which undergraduate organic chemistry students prepared for exams through self-initiated study groups. We sought to characterize the methods of social regulation, levels of content processing, and types of reasoning processes used by students within their groups. Our analysis showed that groups engaged in predominantly three types of interactions when discussing chemistry content: co-construction, teaching, and tutoring. Although each group engaged in each of these types of interactions at some point, their prevalence varied between groups and group members. Our analysis suggests that the types of interactions that were most common depended on the relative content knowledge of the group members as well as on the difficulty of the tasks in which they were engaged. Additionally, we were interested in characterizing the reasoning methods used by students within their study groups. We found that students used a combination of three content-relevant methods of reasoning: model-based reasoning, case-based reasoning, or rule-based reasoning, in conjunction with one chemically-irrelevant method of reasoning: symbol-based reasoning. The most common way for groups to reason was to use rules, whereas the least common way was for students to work from a model. In general, student reasoning correlated strongly to the subject matter to which students were paying attention, and was only weakly related to student interactions. Overall, results from this study may help instructors to construct appropriate tasks to guide what and how students study outside of the classroom. We found that students had a decidedly strategic approach in their study groups, relying heavily on material provided by their instructors, and using the reasoning strategies that resulted in the lowest levels of content processing. We suggest that instructors create more opportunities for students to explore model-based reasoning, and to create opportunities for students to be able to co-construct in a collaborative manner within the context of their organic chemistry course.
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of Bayesian inferential method is its logical foundation that provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of the empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge the gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
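A minimal illustration of the prior-to-posterior workflow described above, using the conjugate Beta-Binomial pair so the posterior is available in closed form (no Monte Carlo needed; the numbers are hypothetical):

```python
def beta_binomial_posterior(alpha, beta, successes, failures):
    """Beta(alpha, beta) prior + Binomial likelihood -> Beta posterior:
    the prior pseudo-counts are simply augmented by the data."""
    return alpha + successes, beta + failures

# Expert judgment encoded as Beta(2, 8) (the rate is believed low),
# then updated with 7 successes in 20 current trials.
a, b = beta_binomial_posterior(2.0, 8.0, 7, 13)
print(a, b, a / (a + b))   # posterior mean = 9/30 = 0.30
```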
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacvarov, D.C.
1981-01-01
A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The utilized approach of applying the finite element method for probabilistic risk assessment is demonstrated to be very powerful. The reasons for this are two. First, the finite element method is inherently suitable for analysis of three dimensional spaces where the parameters, such as three variate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three dimensional probability spaces thus yielding high accuracy in critical regions, such as the area of the low probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem to a manageable low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electro-magnetic transients program. Compared to other available methods the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude computationally more efficient. The method is especially suited for accurate assessment of rare, very low probability events.
On Measuring Quantitative Interpretations of Reasonable Doubt
ERIC Educational Resources Information Center
Dhami, Mandeep K.
2008-01-01
Beyond reasonable doubt represents a probability value that acts as the criterion for conviction in criminal trials. I introduce the membership function (MF) method as a new tool for measuring quantitative interpretations of reasonable doubt. Experiment 1 demonstrated that three different methods (i.e., direct rating, decision theory based, and…
Low-Energy Sputtering Research
NASA Technical Reports Server (NTRS)
Ray, P. K.; Shutthanandan, V.
1999-01-01
An experimental study is described to measure low-energy (less than 600 eV) sputtering yields of molybdenum with xenon ions using Rutherford backscattering spectroscopy (RBS) and secondary neutral mass spectroscopy (SNMS). An ion gun was used to generate the ion beam. The ion current density at the target surface was approximately 30 μA/sq cm. For RBS measurements, the sputtered material was collected on a thin aluminum strip which was mounted on a semi-circular collector plate. The target was bombarded with 200 and 500 eV xenon ions at normal incidence. The differential sputtering yields were measured using the RBS method with 1 MeV helium ions. The differential yields were fitted with a cosine fitting function and integrated with respect to the solid angle to provide the total sputtering yields. The sputtering yields obtained using the RBS method are in reasonable agreement with those measured by other researchers using different techniques. For the SNMS measurements, 150 to 600 eV xenon ions were used at a 50° angle of incidence. The SNMS spectra were converted to sputtering yields for perpendicular incidence by normalizing SNMS spectral data at 500 eV with the yield measured by Rutherford backscattering spectrometry. Sputtering yields as well as the shape of the yield-energy curve obtained in this manner are in reasonable agreement with those measured by other researchers using different techniques. Sputtering yields calculated by using two semi-spherical formulations agree reasonably well with measured data. The isotopic composition of secondary ions was measured by bombarding copper with xenon ions at energies ranging from 100 eV to 1.5 keV. The secondary ion flux was found to be enriched in heavy isotopes at low incident ion energies. The heavy isotope enrichment was observed to decrease with increasing impact energy. Beyond 700 eV, light isotopes were sputtered preferentially with the enrichment remaining nearly constant.
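The fit-and-integrate step has a closed form: for a cosine fit y(θ) = y0 cosⁿθ, integrating over the hemisphere gives Y = 2π y0 / (n + 1). A sketch with illustrative data (not the measured yields):

```python
import numpy as np
from scipy.optimize import curve_fit

def total_yield_from_cosine_fit(theta, dy_domega):
    """Fit differential sputtering yields y(theta) = y0 * cos(theta)**n
    and integrate over the hemisphere:
        Y = int y0 cos^n(t) sin(t) dt dphi = 2*pi*y0/(n+1).
    `theta` in radians, `dy_domega` in atoms/ion/sr."""
    (y0, n), _ = curve_fit(lambda t, y0, n: y0 * np.cos(t) ** n,
                           theta, dy_domega, p0=(1.0, 1.0))
    return 2.0 * np.pi * y0 / (n + 1.0)

# Toy data drawn from an exact cosine distribution (n = 1):
theta = np.linspace(0.0, 1.4, 15)
print(total_yield_from_cosine_fit(theta, 0.2 * np.cos(theta)))  # ~ 2*pi*0.2/2
```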
True or false: do 5-year-olds understand belief?
Fabricius, William V; Boyer, Ty W; Weimer, Amy A; Carroll, Kathleen
2010-11-01
In 3 studies (N = 188) we tested the hypothesis that children use a perceptual access approach to reason about mental states before they understand beliefs. The perceptual access hypothesis predicts a U-shaped developmental pattern of performance in true belief tasks, in which 3-year-olds who reason about reality should succeed, 4- to 5-year-olds who use perceptual access reasoning should fail, and older children who use belief reasoning should succeed. The results of Study 1 revealed the predicted pattern in 2 different true belief tasks. The results of Study 2 disconfirmed several alternate explanations based on possible pragmatic and inhibitory demands of the true belief tasks. In Study 3, we compared 2 methods of classifying individuals according to which 1 of the 3 reasoning strategies (reality reasoning, perceptual access reasoning, belief reasoning) they used. The 2 methods gave converging results. Both methods indicated that the majority of children used the same approach across tasks and that it was not until after 6 years of age that most children reasoned about beliefs. We conclude that because most prior studies have failed to detect young children's use of perceptual access reasoning, they have overestimated their understanding of false beliefs. We outline several theoretical implications that follow from the perceptual access hypothesis.
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
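The integral-image transform underlying these features supports constant-time box sums using only integer additions and subtractions. A minimal sketch of the classic construction:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from the integral image in O(1),
    via four lookups (with boundary cases handled explicitly)."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3), img[1:3, 1:3].sum())  # both 30
```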
Hayashi, N; Aso, H; Higashida, M; Kinoshita, H; Ohdo, S; Yukawa, E; Higuchi, S
2001-05-01
The clearance of recombinant human granulocyte-colony stimulating factor (rhG-CSF) is known to decrease with dose increase, and to be saturable. The average clearance after intravenous administration will be lower than that after subcutaneous administration. Therefore, the apparent absolute bioavailability with subcutaneous administration calculated from the AUC ratio is expected to be an underestimate. The absorption pharmacokinetics after subcutaneous administration was examined using the results of the bioequivalency study between two rhG-CSF formulations with a dose of 2 microg/kg. The analysis was performed using a modified Wagner-Nelson method with the nonlinear elimination model. The apparent absolute bioavailability for subcutaneous administration was 56.9 and 67.5% for each formulation, and the ratio between them was approximately 120%. The true absolute bioavailability was, however, estimated to be 89.8 and 96.9%, respectively, and the ratio was approximately 108%. The absorption pattern was applied to other doses, and the predicted clearance values for subcutaneous and intravenous administrations were then similar to the values for several doses reported in the literature. The underestimation of bioavailability was around 30%, and the amplification of difference was 2.5 times, from 8 to 20%, because of the nonlinear pharmacokinetics. The neutrophil increases for each formulation were identical, despite the different bioavailabilities. The reason for this is probably that the amount eliminated through the saturable process, which might indicate the amount consumed by the G-CSF receptor, was identical for each formulation.
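The modified Wagner-Nelson calculation with a nonlinear (Michaelis-Menten) elimination model can be sketched as follows. All concentrations and parameters below are hypothetical placeholders, not the study's fitted values.

```python
import numpy as np

# Hypothetical s.c. concentration-time profile (illustrative values only).
t = np.array([0, 1, 2, 4, 6, 8, 12, 16, 24], dtype=float)   # h
C = np.array([0, 8, 14, 18, 16, 12, 6, 2.5, 0.3])           # ng/mL

# Assumed disposition parameters (not the paper's values).
V = 3.0       # L, apparent volume of distribution
Vmax = 4.0    # ng/mL/h, maximum (saturable) elimination rate
Km = 10.0     # ng/mL, Michaelis constant

# Modified Wagner-Nelson: amount absorbed up to time t equals the amount in
# the body plus the cumulative amount eliminated via the saturable pathway.
elim_rate = Vmax * C / (Km + C)                                # ng/mL/h
cum_elim = np.concatenate(([0.0], np.cumsum(
    0.5 * (elim_rate[1:] + elim_rate[:-1]) * np.diff(t))))     # trapezoid rule
A_abs = V * (C + cum_elim)                                     # ~ amount absorbed

frac_absorbed = A_abs / A_abs[-1]       # absorption-time profile
# Absolute bioavailability would be A_abs[-1] divided by the dose; using the
# saturable term (rather than a constant CL) is what corrects the
# underestimate described in the abstract.
```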
Brain Imaging, Forward Inference, and Theories of Reasoning
Heit, Evan
2015-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926
Reasons for Cigarillo Initiation and Cigarillo Manipulation Methods among Adolescents.
Kong, Grace; Bold, Krysten W; Simon, Patricia; Camenga, Deepa R; Cavallo, Dana A; Krishnan-Sarin, Suchitra
2017-04-01
To understand reasons for cigarillo initiation and cigarillo manipulation methods among adolescents. We conducted surveys in 8 Connecticut high schools to assess reasons for trying a cigarillo and cigarillo manipulation methods. We used multivariable logistic regressions to assess associations with demographics and tobacco use status. Among ever cigarillo users (N = 697, 33.6% girls, 16.7 years old [SD = 1.14], 62.1% White), top reasons for trying a cigarillo were curiosity (41.9%), appealing flavors (32.9%), because "friends use it" (25.3%), and low cost (22.4%). Overall, 40.3% of ever cigarillo users added marijuana (to create blunts) and 39.2% did not manipulate the product. Endorsement of these reasons for initiation and manipulation methods differed significantly across sex, age, SES and other tobacco use. Cigarillo regulations should include restricting all appealing flavors, increasing the cost, monitoring the restriction of sales of cigarillos to minors, and decreasing the appeal of cigarillo manipulation.
Dependence of Coulomb Sum Rule on the Short Range Correlation by Using Av18 Potential
NASA Astrophysics Data System (ADS)
Modarres, M.; Moeini, H.; Moshfegh, H. R.
The Coulomb sum rule (CSR) and structure factor are calculated for inelastic electron scattering from nuclear matter at zero and finite temperature in the nonrelativistic limit. The effect of short-range correlation (SRC) is included by using the lowest-order constrained variational (LOCV) method and the Argonne Av18 and Δ-Reid soft-core potentials. The effects of different potentials as well as temperature are investigated. It is found that the nonrelativistic version of Bjorken scaling approximately sets in at a momentum transfer of about 1.1 to 1.2 GeV/c, and that increasing the temperature lowers this value. While the choice of potential does not significantly change the CSR, the SRC improves the Coulomb sum rule, and we obtain results reasonably close to both experimental data and other theoretical predictions.
Werner, Joel Benjamin
2008-01-01
Objectives To assess whether audio taping simulated patient interactions can improve the reliability of manually documented data and result in more accurate assessments. Methods Over a 3-month period, 1340 simulated patient visits were made to community pharmacies. Following the encounters, data gathered by the simulated patient were relayed to a coordinator who completed a rating form. Data recorded on the forms were later compared to an audiotape of the interaction. Corrections were tallied and reasons for making them were coded. Results Approximately 10% of cases required corrections, resulting in a 10%-20% modification in the pharmacy's total score. The difference between postcorrection and precorrection scores was significant. Conclusions Audio taping simulated patient visits enhances data integrity. Most corrections were required because of the simulated patients' poor recall abilities. PMID:19325956
NASA Astrophysics Data System (ADS)
Valkunde, Amol T.; Vhanmore, Bandopant D.; Urunkar, Trupti U.; Gavade, Kusum M.; Patil, Sandip D.; Takale, Mansing V.
2018-05-01
In this work, nonlinear aspects of a high-intensity q-Gaussian laser beam propagating in a collisionless plasma with an upward density ramp of exponential profile are studied. The nonlinearity in the dielectric function of the plasma is modeled through the ponderomotive nonlinearity. The differential equation governing the dimensionless beam-width parameter is obtained by using the Wentzel-Kramers-Brillouin (WKB) and paraxial approximations and is solved numerically by the fourth-order Runge-Kutta method. The effect of the exponential density ramp on self-focusing of the q-Gaussian beam is systematically studied for various values of q and compared with the results for a Gaussian beam propagating in a collisionless plasma of uniform density. It is found that the exponential plasma density ramp causes the laser beam to become more strongly focused.
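The governing beam-width equation depends on q and the ramp profile and is not reproduced in the abstract, so the sketch below integrates a generic second-order beam-width equation with a placeholder exponential-ramp focusing term, purely to illustrate the fourth-order Runge-Kutta reduction; the coefficients and nonlinear term are assumptions, not the paper's equation.

```python
import numpy as np

def rhs(xi, y, Lam, alpha):
    """y = [f, f']; generic beam-width equation
    f'' = 1/f**3 - Lam * exp(alpha*xi) / f**2   (illustrative form only:
    the first term is diffraction, the second a self-focusing term whose
    strength grows along an exponential density ramp)."""
    f, fp = y
    return np.array([fp, 1.0 / f**3 - Lam * np.exp(alpha * xi) / f**2])

def rk4(rhs, y0, xi_end, h, **kw):
    """Classical fourth-order Runge-Kutta for the first-order system."""
    xi, y, out = 0.0, np.array(y0, float), []
    while xi < xi_end:
        k1 = rhs(xi, y, **kw)
        k2 = rhs(xi + h/2, y + h/2 * k1, **kw)
        k3 = rhs(xi + h/2, y + h/2 * k2, **kw)
        k4 = rhs(xi + h, y + h * k3, **kw)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
        xi += h
        out.append((xi, y[0]))
    return out

# Initially collimated beam: f(0) = 1, f'(0) = 0.
trace = rk4(rhs, [1.0, 0.0], xi_end=5.0, h=1e-3, Lam=0.9, alpha=0.2)
```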
Low-order modeling of internal heat transfer in biomass particle pyrolysis
Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.
2016-05-11
We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
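A minimal version of the one-dimensional approach is transient conduction in a sphere of equivalent diameter, solved by explicit finite differences. The thermal properties, particle size, and boundary condition below are placeholder assumptions, not the paper's values.

```python
import numpy as np

# 1-D transient conduction in a sphere:
#   dT/dt = alpha * (1/r^2) d/dr (r^2 dT/dr),  explicit finite differences.
alpha = 2.0e-7          # m^2/s, assumed wood-like thermal diffusivity
R = 0.5e-3              # m, equivalent spherical radius (0.5 mm particle)
N = 50                  # radial nodes
dr = R / (N - 1)
dt = 0.1 * dr**2 / alpha        # stable explicit step (limited by r = 0 node)
r = np.linspace(0.0, R, N)

T = np.full(N, 300.0)   # K, initial particle temperature
T_surf = 773.0          # K, imposed surface temperature (Dirichlet)

for _ in range(int(1.0 / dt)):          # simulate 1 s of heating
    Tn = T.copy()
    # Interior nodes: spherical Laplacian.
    Tn[1:-1] = T[1:-1] + alpha * dt * (
        (T[2:] - 2*T[1:-1] + T[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2*dr))
    Tn[0] = T[0] + 6 * alpha * dt * (T[1] - T[0]) / dr**2  # symmetry at r = 0
    Tn[-1] = T_surf
    T = Tn

print(f"centre temperature after 1 s: {T[0]:.0f} K")
```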
A non-differential elastomer curvature sensor for softer-than-skin electronics
NASA Astrophysics Data System (ADS)
Majidi, C.; Kramer, R.; Wood, R. J.
2011-10-01
We extend soft lithography microfabrication and design methods to introduce curvature sensors that are elastically soft (modulus 0.1-1 MPa) and stretchable (100-1000% strain). In contrast to existing curvature sensors that measure differential strain, sensors in this new class measure curvature directly and allow for arbitrary gauge factor and film thickness. Moreover, each sensor is composed entirely of a soft elastomer (PDMS (polydimethylsiloxane) or Ecoflex®) and conductive liquid (eutectic gallium indium, eGaIn) and thus remains functional even when stretched to several times its natural length. The electrical resistance in the embedded eGaIn microchannel is measured as a function of the bending curvature for a variety of sensor designs. In all cases, the experimental measurements are in reasonable agreement with closed-form algebraic approximations derived from elastic plate theory and Ohm's law.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wereszczak, Andrew A.; Emily Cousineau, J.; Bennion, Kevin
The apparent thermal conductivity of packed copper wire test specimens was measured parallel and perpendicular to the axis of the wire using laser flash, transient plane source, and transmittance test methods. Approximately 50% wire packing efficiency was produced in the specimens using either 670- or 925-μm-diameter copper wires that both had an insulation coating thickness of 37 μm. The interstices were filled with a conventional varnish material and also contained some remnant porosity. The apparent thermal conductivity perpendicular to the wire axis was about 0.5–1 W/mK, whereas it was over 200 W/mK in the parallel direction. The Kanzaki model and a finite element analysis (FEA) model were found to reasonably predict the apparent thermal conductivity perpendicular to the wires, but thermal-conductivity percolation from nonideal wire packing may result in their underestimation of it.
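The measured anisotropy is consistent with simple composite-media estimates. The sketch below uses a rule of mixtures axially and a Maxwell-Garnett cylinder estimate transversely; this is not the Kanzaki model, and the matrix conductivity is an assumed value.

```python
# Simple composite estimates for a varnish-impregnated wire bundle.
k_cu = 400.0   # W/m-K, copper
k_m = 0.25     # W/m-K, varnish/insulation matrix (assumed value)
f = 0.5        # wire packing fraction (~50%, as in the specimens)

# Axial: rule of mixtures (wires conduct in parallel with the matrix).
k_axial = f * k_cu + (1 - f) * k_m

# Transverse: Maxwell-Garnett estimate for aligned cylinders.
k_trans = k_m * (k_cu + k_m + f * (k_cu - k_m)) / \
                (k_cu + k_m - f * (k_cu - k_m))

print(f"axial ~ {k_axial:.0f} W/m-K, transverse ~ {k_trans:.2f} W/m-K")
# ~200 W/m-K axial and well under 1 W/m-K transverse, the same order as
# the measurements quoted above.
```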
Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory
Liu, C.; Liu, J.; Yao, Y. X.; ...
2017-01-16
Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originated from the approximations in the theory. This conference proceeding reports our recent progress on this key issue, namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
The use of human milk and breastfeeding in premature infants.
Schanler, R J; Hurst, N M; Lau, C
1999-06-01
Human milk is beneficial in the management of premature infants. The beneficial effects generally relate to improvements in host defenses, digestion, and absorption of nutrients, gastrointestinal function, neurodevelopment, and maternal psychological well-being. The use of fortified human milk generally provides the premature infant adequate growth, nutrient retention, and biochemical indices of nutritional status when fed at approximately 180 mL/kg/day compared with unfortified human milk. Human milk can only support the needs of the premature infant if adequate milk volumes are produced. Intensive efforts at lactation support are desirable. Therefore, neonatal centers should encourage the feeding of fortified human milk for premature infants along with skin-to-skin contact as a reasonable method to enhance milk production and promote success with early breastfeeding, while potentially facilitating the development of an enteromammary response.
Can we use the equivalent sphere model to approximate organ doses in space radiation environments?
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
For space radiation protection one often calculates the dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have concluded that a 5 cm sphere gives a very different dose from the exact BFO dose. One study concludes that a 9 cm sphere is a reasonable approximation for the BFO dose in solar particle event (SPE) environments. In this study we investigate the reason behind these observations and extend earlier work by examining whether BFO, eyes, or the skin can be approximated by the equivalent sphere model in different space radiation environments such as solar particle events and galactic cosmic ray (GCR) environments. We take the thickness distribution functions of the organs from the CAM (Computerized Anatomical Man) model, then use a deterministic radiation transport code to calculate organ doses in different space radiation environments. The organ doses have been evaluated with water or aluminum shielding from 0 to 20 g/cm². We then compare these exact doses with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we propose a modified equivalent sphere model with two radius parameters to represent the skin or eyes. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for eyes or the skin. For galactic cosmic ray environments, the equivalent sphere model with one organ-specific radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of eyes or the skin, but is unacceptable for the dose of eyes or the skin. The BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for eyes, and 3.5 to 5.6 cm for the skin, while the radius parameters are between 10 and 13 cm for the BFO dose. In the proposed modified equivalent sphere model, the range of each of the two radius parameters for the skin (or eyes) is much tighter than in the model with one radius parameter. Our results thus show that the equivalent sphere model works better in galactic cosmic ray environments than in solar particle events. The model works well or marginally well for BFO but usually does not work for eyes or the skin. A modified model with two radius parameters approximates the dose and dose equivalent in eyes or the skin much better.
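The comparison can be illustrated as follows: the exact organ dose averages a depth-dose curve over the organ's shielding-thickness distribution, and the equivalent-sphere radius is the single depth that reproduces that average. Both the depth-dose form and the thickness distribution below are assumptions for illustration, not CAM-model values.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed depth-dose curve D(t) for a steeply falling SPE-like spectrum.
def depth_dose(t_gcm2):
    return np.exp(-t_gcm2 / 5.0)      # arbitrary units, 5 g/cm^2 e-folding

# Hypothetical organ thickness distribution: depths (g/cm^2) and weights,
# standing in for a CAM-derived thickness distribution function.
depths = np.array([2.0, 5.0, 8.0, 12.0, 20.0])
weights = np.array([0.10, 0.25, 0.30, 0.25, 0.10])

D_exact = np.sum(weights * depth_dose(depths))   # exact (averaged) organ dose

# Equivalent-sphere radius: the single depth r with depth_dose(r) == D_exact
# (expressed here in areal density rather than cm of tissue).
r_eq = brentq(lambda r: depth_dose(r) - D_exact, 0.1, 30.0)
print(f"exact dose {D_exact:.3f}, equivalent radius {r_eq:.1f} g/cm^2")
```

For a steep SPE spectrum the average is dominated by the thinly shielded parts of the organ, which is one way to see why a single radius fits poorly in SPE environments but better under the flatter GCR depth-dose curves.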
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
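The core of the NSMC idea can be sketched in a few lines: deviations of random parameter fields from the calibrated field are projected onto the null space of the model Jacobian, so the calibrated data fit is preserved. The Jacobian below is a random stand-in, not a flow-model sensitivity matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 40, 200

J = rng.standard_normal((n_obs, n_par))   # stand-in for the model Jacobian
p_cal = rng.standard_normal(n_par)        # calibrated parameter field

# SVD splits parameter space into a solution space (first k right singular
# vectors, constrained by the data) and a null space the data cannot "see".
# Here k is the numerical rank of J; in practice it is truncated where the
# singular values become negligible, which is the "dimensionality of the
# solution space" referred to above.
_, s, Vt = np.linalg.svd(J)
k = int(np.sum(s > 1e-10))
V_null = Vt[k:].T                          # (n_par, n_par - k)

def nsmc_realization():
    """Random field whose deviation from the calibrated field is projected
    onto the null space, so the calibrated fit is (to first order) kept."""
    p_rand = rng.standard_normal(n_par)
    return p_cal + V_null @ (V_null.T @ (p_rand - p_cal))

fields = np.array([nsmc_realization() for _ in range(1000)])
print(np.linalg.norm(J @ (fields[0] - p_cal)))   # ~ 0: fit unchanged
```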
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
Mones, Letif; Jones, Andrew; Götz, Andreas W.; ...
2015-02-03
We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
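A sketch of the buffered force-mixing idea follows, assuming hypothetical qm_forces and mm_forces callables; it illustrates the approach (run QM on core plus buffer, discard the buffer forces), not the CP2K or AMBER implementation.

```python
import numpy as np

def adbf_forces(positions, qm_center, r_core, r_buffer, qm_forces, mm_forces):
    """Buffered force-mixing sketch (not the CP2K/AMBER code).

    qm_forces / mm_forces are hypothetical callables returning (N, 3) force
    arrays. The QM calculation covers the core plus a surrounding buffer,
    but forces on buffer atoms are discarded to reduce boundary error.
    Recomputing the masks each call is what lets the QM region be
    redefined adaptively during the simulation.
    """
    d = np.linalg.norm(positions - qm_center, axis=1)
    core = d < r_core
    buffered = d < r_core + r_buffer      # region actually treated by QM

    f = mm_forces(positions)              # baseline: MM forces everywhere
    f_qm = qm_forces(positions, mask=buffered)
    f[core] = f_qm[core]                  # keep QM forces for core atoms only
    return f
```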
An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of the merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and the distribution is estimated on this basis, which is true for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods, such as the dynamic programming algorithm (DPA), are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure the correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between an actual distribution and its approximation may occasionally exceed 30 percent, and it diverges as it propagates. Hence, in this paper, we propose a novel estimation of distribution method based on copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation merely depends on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
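A Gaussian copula is one common way to couple known non-Gaussian marginals with a given correlation structure; the abstract does not specify its copula family, so the sketch below uses it purely for illustration, with assumed marginals and correlation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Assumed ingredients: non-Gaussian marginals of two merit functions and
# their correlation (in practice estimated from the tracking algorithm).
marginals = [stats.gamma(a=3.0), stats.gumbel_r()]
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Gaussian copula: correlated normals -> uniforms -> target marginals.
z = rng.multivariate_normal(np.zeros(2), corr, size=200_000)
u = stats.norm.cdf(z)
x = np.column_stack([m.ppf(u[:, i]) for i, m in enumerate(marginals)])

# Detection threshold for the max merit function at false-alarm rate 1e-3,
# accounting for the cross-correlation an independence assumption ignores.
thr = np.quantile(x.max(axis=1), 1 - 1e-3)
print(f"threshold ~ {thr:.2f}")
```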
Palena, Celina; Bahamondes, M Valeria; Schenk, Verónica; Bahamondes, Luis; Fernandez-Funes, Julio
2009-01-01
Background Although Argentina has a new law on Reproductive Health, many barriers continue to exist regarding provision of contraceptive methods at public healthcare facilities. Methods We asked 212 pregnant women selected at random at the Maternity and Neonatal Hospital, Córdoba, Argentina, to participate in our descriptive study. Women were asked to complete a structured questionnaire. The objectives were to determine the rate of unintended pregnancies, reasons for not using contraception, past history of contraceptive use, and intended future use. Results Two hundred women responded to the questionnaire. Forty percent of the women stated that they had never used contraception and pregnancy was declared unintended by 65%. In the unintended pregnancy group, almost 50% of women said that they had not been using a contraceptive method because they were "unaware about contraception", and 25% stated that their contraceptive method had failed. Almost 85% of women stated that they intended to use contraception after delivery. Conclusion Approximately two-thirds of all pregnancies in this sample were unintended. Although the data is limited by the small sample size, our findings suggest that our government needs to invest in counseling and in improving the availability and access to contraceptive methods. PMID:19619304
Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2
NASA Technical Reports Server (NTRS)
Culbert, Christopher J. (Editor)
1993-01-01
Papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake, held 1-3 Jun. 1992 at the Lyndon B. Johnson Space Center in Houston, Texas are included. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.
NASA Astrophysics Data System (ADS)
Kuhn, William F.
At the core of what it means to be a scientist or engineer is the ability to think rationally using scientific reasoning methods. Yet, if asked, scientists and engineers are typically hard pressed to explain what that means. Some may argue that the meaning of scientific reasoning methods is a topic for philosophers and psychologists, but this study holds that the answers lie with the scientists and engineers themselves, for who knows the workings of the scientific reasoning thought process better than they do? This study provides evidence toward the following aims: (a) determine the fundamental characteristics of cognitive reasoning methods exhibited by engineer/scientists working on R&D projects, (b) sample the engineer/scientist community to determine their views as to the importance, frequency, and ranking of each of these characteristics in benefiting their R&D projects, and (c) draw conclusions regarding any identified competency gaps in the exhibited or expected cognitive reasoning methods of engineer/scientists working on R&D projects. These aims are driven by three research questions. First, what are the salient characteristics of cognitive reasoning methods exhibited by engineer/scientists in an R&D environment? Second, what do engineer/scientists consider to be the frequency and importance of the salient cognitive reasoning method characteristics? And third, to what extent, if at all, do patent holders and technical fellows differ with regard to their perceptions of the importance and frequency of the salient cognitive reasoning characteristics of engineer/scientists? The methodology and empirical approach comprised (a) a literature search, (b) a Delphi technique with a panel of seven highly distinguished engineer/scientists, (c) a survey instrument directed to a distinguished Technical Fellowship, and (d) data collection and analysis. The results provided by the Delphi team answered the first research question: the collaborative effort validated the presented characteristics and, most importantly, produced ten additional novel reasoning characteristics. These characteristics were then presented to and evaluated by the Technical Fellows, whose findings answered the second and third research questions. Interesting results include the data indicating "imagination" as highest in importance and frequency, and a comparison analysis showing that patent holders with five or more patents placed significantly higher value on "intuition".
Diagnosis: Reasoning from first principles and experiential knowledge
NASA Technical Reports Server (NTRS)
Williams, Linda J. F.; Lawler, Dennis G.
1987-01-01
Completeness, efficiency, and autonomy are requirements for future diagnostic reasoning systems. Methods for automating diagnostic reasoning systems include diagnosis from first principles (i.e., reasoning from a thorough description of structure and behavior) and diagnosis from experiential knowledge (i.e., reasoning from a set of examples obtained from experts). However, implementation of either as a single reasoning method fails to meet these requirements. The approach of combining reasoning from first principles and reasoning from experiential knowledge does address the requirements discussed above and can ease some of the difficulties associated with knowledge acquisition by allowing developers to systematically enumerate a portion of the knowledge necessary to build the diagnosis program. The ability to enumerate knowledge systematically facilitates defining the program's scope, completeness, and competence, and assists in bounding, controlling, and guiding the knowledge acquisition process.
Reasons and correlates of contraceptive discontinuation in Kuwait.
Shah, N M; Shah, M A; Chowdhury, R I; Menon, I
2007-09-01
(1) To examine the probability of discontinuation of various methods within 1, 2, and 3 years of use and the reasons for discontinuation; (2) to analyse the socio-demographic correlates of discontinuation. Data from a survey of Kuwaiti women of reproductive age conducted in 1999 were used. Information on the duration of use of modern and traditional methods, and the reasons for discontinuation during the 72 months before the survey, were analysed. Probabilities of discontinuation were estimated through multiple-decrement life table analysis. After 1 year, 30% of modern and 40% of traditional method users had discontinued; after 3 years, discontinuation increased to 66 and 70%, respectively. After 36 months, only 40% of IUD users had discontinued, compared with 74% of oral contraceptive users. The desire to become pregnant was the leading reason for discontinuation of most modern methods, while method failure was an equally important reason for traditional methods. Discontinuation was significantly more frequent among higher-parity, non-working, and Bedouin women, and among those who said Islam disapproves of contraception. Contraception is used largely for spacing. More than two-thirds of the women studied had discontinued most methods after three years, except the IUD, which was used by only about 10% of them. Traditional methods are often discontinued due to method failure and may result in an unintended pregnancy. Better counselling is warranted for traditional methods. Health care for managing side effects of modern methods also needs improvement.
A comparison of transport algorithms for premixed, laminar steady state flames
NASA Technical Reports Server (NTRS)
Coffee, T. P.; Heimerl, J. M.
1980-01-01
The effects of different methods of approximating multispecies transport phenomena in models of premixed, laminar, steady state flames were studied. Five approximation methods that span a wide range of computational complexity were developed. Identical data for individual species properties were used for each method. Each approximation method is employed in the numerical solution of a set of five H2-O2-N2 flames. For each flame the computed species and temperature profiles, as well as the computed flame speeds, are found to be very nearly independent of the approximation method used. This does not indicate that transport phenomena are unimportant, but rather that the selection of the input values for the individual species transport properties is more important than the selection of the method used to approximate the multispecies transport. Based on these results, a sixth approximation method was developed that is computationally efficient and provides results extremely close to the most sophisticated and precise method used.
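One of the simplest members of this family of approximations is the mixture-averaged (Curtiss-Hirschfelder) diffusion coefficient, sketched below for an H2-O2-N2 mixture. The binary diffusion coefficients and mole fractions are illustrative assumptions, not the paper's property data.

```python
import numpy as np

species = ["H2", "O2", "N2"]
X = np.array([0.2, 0.1, 0.7])          # mole fractions (assumed)

# Assumed binary diffusion coefficients D_ij (cm^2/s) at some T and p;
# symmetric, with an unused diagonal.
D = np.array([[0.00, 0.81, 0.78],
              [0.81, 0.00, 0.21],
              [0.78, 0.21, 0.00]])

# Mixture-averaged (Curtiss-Hirschfelder) approximation:
#   D_i,mix = (1 - X_i) / sum_{j != i} X_j / D_ij
D_mix = np.empty(len(species))
for i in range(len(species)):
    others = [j for j in range(len(species)) if j != i]
    D_mix[i] = (1.0 - X[i]) / sum(X[j] / D[i, j] for j in others)

for s, d in zip(species, D_mix):
    print(f"D_{s},mix ~ {d:.2f} cm^2/s")
```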
12 CFR 717.25 - Reasonable and simple methods of opting out.
Code of Federal Regulations, 2011 CFR
2011-01-01
... an Internet Web site, if the consumer agrees to the electronic delivery of information; (iv) Providing a toll-free telephone number that consumers may call to opt out; or (v) Allowing consumers to... single toll-free telephone number. (2) Opt-out methods that are not reasonable and simple. Reasonable and...
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods assume that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage; and (3) an ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
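The breakdown of the Gaussian assumption is easy to reproduce: simulate Poisson transmission counts at diagnostic-like and ULD flux levels and inspect the post-log values. The sketch below uses a simple clip-to-one non-positivity correction, one of several possible choices, and ignores electronic noise; all flux levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def postlog_samples(I0, line_integral, n=200_000):
    """Post-log data p = ln(I0 / I) from Poisson counts; non-positive
    counts clipped to 1, a simple non-positivity correction."""
    I = rng.poisson(I0 * np.exp(-line_integral), size=n)
    return np.log(I0 / np.maximum(I, 1))

for I0, label in [(1e5, "diagnostic-like"), (1e2, "ultra-low-dose")]:
    p = postlog_samples(I0, line_integral=4.0)
    skew = ((p - p.mean())**3).mean() / p.std()**3
    print(f"{label}: mean {p.mean():.3f} (true 4.0), skew {skew:+.2f}")
```

At the high flux the post-log samples are nearly normal around the true value; at the photon-starved flux the clipping biases the mean and skews the distribution, which is the regime where the quadratic (Gaussian) approximation fails.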
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
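A simplified MAP flavor of this estimate, not the full Variational Bayes algorithm, is nonnegative least squares with a Tikhonov prior pulling toward the approximately known ratios. The SRS matrix, noise level, prior weight, and ratio scale below are all assumed for illustration.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)

n_obs, n_nuc = 60, 3
M = rng.uniform(0.0, 1.0, size=(n_obs, n_nuc))   # stand-in SRS matrix
x_true = np.array([10.0, 5.0, 1.0])              # true release (ratios 10:5:1)
y = M @ x_true + rng.normal(0, 0.05, n_obs)      # noisy gamma dose rates

ratios = np.array([10.0, 5.0, 1.0])   # approximately known nuclide ratios
lam = 0.5                             # prior weight: trust in those ratios

# Crude ratio scale from a first unregularized nonnegative solve.
s = lsq_linear(M, y, bounds=(0, np.inf)).x.sum() / ratios.sum()

# MAP estimate: minimize ||Mx - y||^2 + lam^2 ||x - s*ratios||^2 with x >= 0,
# written as an augmented nonnegative least-squares problem.
A = np.vstack([M, lam * np.eye(n_nuc)])
b = np.concatenate([y, lam * s * ratios])
x_map = lsq_linear(A, b, bounds=(0, np.inf)).x
print("estimated source term:", x_map)
```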
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to impose a sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on alternating minimization of the augmented Lagrangian function. All of the resulting subproblems, decoupled by variable splitting, admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The method is qualitatively and quantitatively evaluated on simulated and real data to validate its efficiency and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
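The generalized p-shrinkage mapping mentioned above has a simple elementwise closed form (in Chartrand's formulation, which appears to be the one intended); a minimal sketch:

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage (Chartrand): reduces to soft thresholding at
    p = 1 and approaches hard thresholding as p decreases. This is the kind
    of elementwise update applied to the split gradient variables inside
    the augmented-Lagrangian subproblems."""
    mag = np.abs(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = np.maximum(mag - lam**(2.0 - p) * mag**(p - 1.0), 0.0)
    return np.where(mag > 0, np.sign(x) * shrunk, 0.0)

x = np.linspace(-3, 3, 7)
print(p_shrink(x, lam=1.0, p=1.0))   # soft thresholding
print(p_shrink(x, lam=1.0, p=0.5))   # sparser, less biased for large |x|
```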
Applied Counterfactual Reasoning
NASA Astrophysics Data System (ADS)
Hendrickson, Noel
This chapter addresses two goals: The development of a structured method to aid intelligence and security analysts in assessing counterfactuals, and forming a structured method to educate (future) analysts in counterfactual reasoning. In order to pursue these objectives, I offer here an analysis of the purposes, problems, parts, and principles of applied counterfactual reasoning. In particular, the ways in which antecedent scenarios are selected and the ways in which scenarios are developed constitute essential (albeit often neglected) aspects of counterfactual reasoning. Both must be addressed to apply counterfactual reasoning effectively. Naturally, further issues remain, but these should serve as a useful point of departure. They are the beginning of a path to more rigorous and relevant counterfactual reasoning in intelligence analysis and counterterrorism.
Reasons for discontinuation of reversible contraceptive methods by women with epilepsy.
Mandle, Hannah B; Cahill, Kaitlyn E; Fowler, Kristen M; Hauser, W Allen; Davis, Anne R; Herzog, Andrew G
2017-05-01
To report the reasons for discontinuation of contraceptive methods by women with epilepsy (WWE). These retrospective data come from a web-based survey regarding the contraceptive practices of 1,144 WWE in the community, ages 18-47 years. We determined the frequencies of contraceptive discontinuations and the reasons for discontinuation. We compared risk ratios for rates of discontinuation among contraceptive methods and categories. We used chi-square analysis to test the independence of discontinuation reasons among the various contraceptive methods and categories and when stratified by antiepileptic drug (AED) categories. Nine hundred fifty-nine of 2,393 (40.6%) individual, reversible contraceptive methods were discontinued. One-half (51.8%) of the WWE who discontinued a method discontinued at least two methods. Hormonal contraception was discontinued most often (553/1,091, 50.7%), with a risk ratio of 1.94 (1.54-2.45, p < 0.0001) compared to intrauterine devices (IUDs), the category discontinued least often (57/227, 25.1%). Among all individual methods, the contraceptive patch was stopped most often (79.7%) and the progestin IUD was stopped least often (20.1%). The top three reasons for discontinuation among all methods were reliability concerns (13.9%), menstrual problems (13.5%), and increased seizures (8.6%). There were significant differences in discontinuation rates and reasons when stratified by AED category for hormonal contraception, but not for any other contraceptive category. Contraception counseling for WWE should consider the experience profiles with systemic hormonal contraception that are unique to this population. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
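For the pantograph equation x'(t) = a x(t) + b x(qt) with x(0) = 1, a polynomial ansatz makes the residual linear in the coefficients, so the least-squares fit reduces to one linear solve. The sketch below uses collocation points in place of the integral of the squared residual, and the parameter values are arbitrary illustrations of the PLSM idea rather than the paper's test cases.

```python
import numpy as np

# Pantograph equation: x'(t) = a*x(t) + b*x(q*t), x(0) = 1, on [0, 1].
a, b, q = -1.0, 0.5, 0.5
deg = 8                                   # polynomial degree of the ansatz
ts = np.linspace(0.0, 1.0, 200)           # collocation points

# Residual is linear in the coefficients c of x(t) = sum_k c_k t^k:
#   R(t) = sum_k c_k [ k t^(k-1) - a t^k - b (q t)^k ]
k = np.arange(deg + 1)
Phi = (k * ts[:, None] ** np.clip(k - 1, 0, None)   # derivative term
       - a * ts[:, None] ** k
       - b * (q * ts[:, None]) ** k)

# Enforce x(0) = 1 exactly via c_0 = 1; least squares for the rest.
rhs = -Phi[:, 0]
c_rest, *_ = np.linalg.lstsq(Phi[:, 1:], rhs, rcond=None)
c = np.concatenate([[1.0], c_rest])

x_approx = lambda t: np.polyval(c[::-1], t)   # polyval wants high degree first
print("x(1) ~", x_approx(1.0))
```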
Code of Federal Regulations, 2014 CFR
2014-01-01
... maintained on the APHIS Web site at http://www.aphis.usda.gov/import_export/animals/animal_disease_status... approximately 210 minutes after which they must be cooked in hot oil (deep-fried) at a minimum of 104 °C for an...
Learning to Leverage Student Thinking: What Novice Approximations Teach Us about Ambitious Practice
ERIC Educational Resources Information Center
Singer-Gabella, Marcy; Stengel, Barbara; Shahan, Emily; Kim, Min-Joung
2016-01-01
Central to ambitious teaching is a constellation of practices we have come to call "leveraging student thinking." In leveraging, teachers position students' understanding and reasoning as a central means to drive learning forward. While leveraging typically is described as a feature of mature practice, in this article we examine…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-25
NUCLEAR REGULATORY COMMISSION [Docket No. 50-305; NRC-2010-0041] Dominion Energy Kewaunee, Inc... ..., approximately 27 miles east-southeast of Green Bay, WI. Possible alternatives to the proposed action (license renewal) include no action and reasonable alternative energy sources. As discussed in Section 9.4 of the...
A Proposed Template for an Emergency Online School Professional Training Curriculum
ERIC Educational Resources Information Center
Rush, S. Craig; Wheeler, Joanna; Partridge, Ashley
2014-01-01
On average, natural disasters directly impact approximately 160 million individuals and cause 90,000 deaths each year. As natural disasters are becoming more familiar, it stands to reason that school personnel, particularly mental health professionals, need to know how to prepare for natural disasters. Current disaster preparation and response…
Predictors of Graduation of Readmitted "At Risk" College Students
ERIC Educational Resources Information Center
Berkovitz, Roslyn A.; O'Quin, Karen
2007-01-01
We conducted an archival study of at-risk students who had "stopped out" of college for many reasons (academic dismissal, financial problems, personal problems, etc.) and who later were accepted to return to school. Approximately 27% of the accepted students chose not to return. Those who returned had higher grade point averages, had completed…
A Compulsory Bioethics Module for a Large Final Year Undergraduate Class
ERIC Educational Resources Information Center
Pearce, Roger S.
2009-01-01
The article describes a compulsory bioethics module delivered to [approximately] 120 biology students in their final year. The main intended learning outcome is that students should be able to analyse and reason about bioethical issues. Interactive lectures explain and illustrate bioethics. Underlying principles and example issues are used to…
ERIC Educational Resources Information Center
Healey, Nigel M.
2017-01-01
For many universities around the world, internationalisation means the recruitment of fee-paying international students (so-called export education) for primarily commercial reasons. For UK universities, international (non-European Union) students account for approximately 13% of their annual revenues, making them highly dependent on international…
Metals Emissions from the Open Detonation Treatment of Energetic Wastes
2004-10-01
... volume at the time the particulate sample was collected was approximately 106 m3. For unknown reasons, the Army did not convert the detonation plume...
Free Fall and the Equivalence Principle Revisited
ERIC Educational Resources Information Center
Pendrill, Ann-Marie
2017-01-01
Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field.…
ERIC Educational Resources Information Center
Jackson, Janice; Flamboe, Thomas C.
The annotated bibliography contains approximately 110 references (1969-1976) of articles related to the Sewall Early Education Developmental Program. Entries are arranged alphabetically by author within the following seven topic areas: social emotional, gross motor, fine motor, adaptive reasoning, speech and language, feeding and dressing and…
Mississippi Labor Mobility Demonstration Project--Relocating the Unemployed: Dimensions of Success.
ERIC Educational Resources Information Center
Speight, John F.; And Others
The document provides an analysis of relocation stability of individuals relocated during the March, 1970-November, 1971 contract period. Data bases were 1,244 applicants with screening information and 401 individuals with follow-up interview information. Approximately one half were in new areas six months after being relocated. Reasons for…
Teachers See What Ability Scores Cannot: Predicting Student Performance with Challenging Mathematics
ERIC Educational Resources Information Center
Foreman, Jennifer L.; Gubbins, E. Jean
2015-01-01
Teacher nominations of students are commonly used in gifted and talented identification systems to supplement psychometric measures of reasoning ability. In this study, second grade teachers were requested to nominate approximately one fourth of their students as having high learning potential in the year prior to the students' participation in a…
A Study of Vocational Education Programs in the Michigan Department of Corrections.
ERIC Educational Resources Information Center
Dirkx, John M.; Kielbaso, Gloria; Corley, Charles
Rapid expansion of the prison population in Michigan has created concern for consistency, continuity, and articulation within the Michigan Department of Corrections vocational programs, which serve approximately 1,800 prisoners at a time. For this reason, a study was undertaken to determine how vocational education within Michigan's prisons might…
Females' Reasons for Their Physical Aggression in Dating Relationships
ERIC Educational Resources Information Center
Hettrich, Emma L.; O'Leary, K. Daniel
2007-01-01
Approximately 32% of dating college females reported that they engaged in physical aggression against their partners and that they engaged in acts of physical aggression more often than their male partners engaged in aggression against them. However, the females also reported that their male partners attempted to force them to engage in oral sex…
Correlates of Smoke-Free Home Policies in Shanghai, China
Kegler, Michelle C.; Berg, Carla J.; Wang, Jing; Zhou, Xilan; Liu, Dong
2014-01-01
Background. Approximately 63.7% of nonsmokers in China are exposed to secondhand smoke (SHS) in their homes. The current study documents the prevalence and correlates of smoke-free home policies in Shanghai, as well as reasons for implementing such a policy and places where smoking is most commonly allowed. Methods. We conducted in-person surveys of 500 participants using a multistage proportional random sampling design in an urban and suburban district. Results. Overall, 35.3% had a smoke-free home policy. In the logistic regression, having higher income, not having smokers in the home, having children in the home, having fewer friends/relatives who permit smoking at home, and not being a current smoker were correlates of having a smoke-free home policy (P < 0.05). Concern about the health impact of SHS was reportedly the most important reason for establishing a smoke-free home. Among participants with no or partial bans, the most common places where smoking was allowed included the living room (64.2%), kitchen (46.1%), and bathroom (33.8%). Conclusions. Smoke-free home policies were in place for a minority of households surveyed. Establishing such a policy was influenced by personal smoking behavior and social factors. These findings suggest an urgent need to promote smoke-free home policies through tobacco control programs. PMID:25061606
Developments in the kinetic theories of ion and electron swarms in the 1960s and 70s
NASA Astrophysics Data System (ADS)
Skullerud, H. R.
2017-04-01
The two decades from 1960 to 1980 saw a quite fantastic development in diverse areas of physics, and so also in the quantitative theoretical treatment and deeper understanding of the behaviour of isolated electrons and ions in gases, that is, 'charged particle swarm physics'. The evolution in swarm theory was strongly correlated with contemporary advances in computer technology and the emergence of new and accurate experimental methods for finding charged particle transport parameters, such as drift velocities, diffusion coefficients and reaction rates, and also with developments in neighbouring fields such as plasma physics and the physics of electronic and molecular collisions. In 1960, low-energy electron behaviour could already be calculated with reasonable accuracy in the so-called two-term approximation, while ion behaviour could only be treated at weak electric fields. By 1980, reasonably complete theories had been developed for perhaps most cases of interest, which is reflected in a number of reviews, books and journal articles published in the early 1980s. We present a journey through the developments of this period and the basic theories behind the Boltzmann equation and Maxwell's transfer equations. We also indicate how the interaction between different studies of the same basic processes has led to the elimination of shortcomings and a better understanding.
A decision method based on uncertainty reasoning of linguistic truth-valued concept lattice
NASA Astrophysics Data System (ADS)
Yang, Li; Xu, Yang
2010-04-01
Decision making with linguistic information is currently a research hotspot. This paper establishes a theoretical basis for linguistic information processing, constructs the linguistic truth-valued concept lattice for a decision information system, and then utilises uncertainty reasoning to make the decision. That is, we first utilise the linguistic truth-valued lattice implication algebra to unify the different kinds of linguistic expressions; second, we construct the linguistic truth-valued concept lattice and the decision concept lattice according to the concrete decision information system; and third, we establish internal and external uncertainty reasoning methods and discuss their rationality. We apply these uncertainty reasoning methods to decision making and present some generation methods for decision rules. Finally, we illustrate this decision method with an example.
Khakzad, Nima; Khan, Faisal; Amyotte, Paul
2015-07-01
Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents is not so well established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of relevant data, since major accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures, based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.
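The abstract leaves its information measure implicit, but the core move—ranking candidate precursors by how much their observation tells us about the major accident—can be illustrated with mutual information. A minimal Python sketch on invented binary incident records (all precursor names and noise levels are hypothetical, not the paper's data):

    import numpy as np

    def mutual_info(x, y):
        # mutual information (nats) of two binary sequences from joint counts
        x, y = np.asarray(x), np.asarray(y)
        mi = 0.0
        for a in (0, 1):
            for b in (0, 1):
                pxy = np.mean((x == a) & (y == b))
                px, py = np.mean(x == a), np.mean(y == b)
                if pxy > 0:
                    mi += pxy * np.log(pxy / (px * py))
        return mi

    rng = np.random.default_rng(1)
    accident = rng.integers(0, 2, 200)      # invented outcome record
    precursors = {
        "kick":      (accident ^ (rng.random(200) < 0.1)).astype(int),  # strongly linked
        "gas_alarm": (accident ^ (rng.random(200) < 0.4)).astype(int),  # weakly linked
        "near_miss": rng.integers(0, 2, 200),                           # unrelated
    }
    ranked = sorted(precursors,
                    key=lambda k: mutual_info(precursors[k], accident),
                    reverse=True)
    print(ranked)   # most informative precursor first

The most informative precursor, in this sense, is the one whose record most reduces uncertainty about the accident variable.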
Construction and application of a new dual-hybrid random phase approximation.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Kállay, Mihály
2015-10-13
The direct random phase approximation (dRPA) combined with Kohn-Sham reference orbitals is among the most promising tools in computational chemistry and applicable in many areas of chemistry and physics. The reason for this is that it scales as N^4 with the system size, which is a considerable advantage over accurate ab initio wave function methods such as standard coupled cluster. dRPA also yields a considerably more accurate description of thermodynamic and electronic properties than standard density-functional theory methods. It is also able to describe strong static electron correlation effects, missed by common single-reference methods, even in large systems with a small or vanishing band gap. However, dRPA has several flaws due to its self-correlation error. In order to obtain accurate and precise reaction energies, barriers, and noncovalent intra- and intermolecular interactions, we construct a new dual-hybrid dRPA (hybridization of exact and semilocal exchange in both the energy and the orbitals) and test the performance of this new functional on isogyric, isodesmic, hypohomodesmotic, homodesmotic, and hyperhomodesmotic reaction classes. We also use a test set of 14 Diels-Alder reactions, six atomization energies (AE6), 38 hydrocarbon atomization energies, and 100 reaction barrier heights (DBH24, HT-BH38, and NHT-BH38). For noncovalent complexes, we use the NCCE31 and S22 test sets. To test the intramolecular interactions, we use a set of alkane, cysteine, phenylalanine-glycine-glycine tripeptide, and monosaccharide conformers. We also discuss the delocalization and static correlation errors. We show that a universally accurate description of chemical properties can be provided by a large, 75% exact-exchange mixing in both the calculation of the reference orbitals and the final energy.
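Read literally, the dual-hybrid construction applies one exchange-mixing rule twice. A hedged LaTeX sketch of that mixing (the symbols are ours, not the paper's notation):

    \[
      E_x^{\mathrm{hyb}} \;=\; a\,E_x^{\mathrm{exact}} + (1-a)\,E_x^{\mathrm{sl}},
      \qquad a = 0.75,
    \]

with the same fraction a used both in the self-consistent step that generates the Kohn-Sham reference orbitals and in the final dRPA energy evaluation, consistent with the 75% figure quoted above.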
The electron-furfural scattering dynamics for 63 energetically open electronic states
NASA Astrophysics Data System (ADS)
da Costa, Romarly F.; do N. Varella, Márcio T.; Bettega, Márcio H. F.; Neves, Rafael F. C.; Lopes, Maria Cristina A.; Blanco, Francisco; García, Gustavo; Jones, Darryl B.; Brunger, Michael J.; Lima, Marco A. P.
2016-03-01
We report on integral-, momentum transfer- and differential cross sections for elastic and electronically inelastic electron collisions with furfural (C5H4O2). The calculations were performed with two different theoretical methodologies, the Schwinger multichannel method with pseudopotentials (SMCPP) and the independent atom method with screening corrected additivity rule (IAM-SCAR) that now incorporates a further interference (I) term. The SMCPP with N energetically open electronic states (Nopen) at either the static-exchange (Nopen ch-SE) or the static-exchange-plus-polarisation (Nopen ch-SEP) approximation was employed to calculate the scattering amplitudes at impact energies lying between 5 eV and 50 eV, using a channel coupling scheme that ranges from the 1ch-SEP up to the 63ch-SE level of approximation depending on the energy considered. For elastic scattering, we found very good overall agreement at higher energies among our SMCPP cross sections, our IAM-SCAR+I cross sections and the experimental data for furan (a molecule that differs from furfural only by the substitution of a hydrogen atom in furan with an aldehyde functional group). This is a good indication that our elastic cross sections are converged with respect to the multichannel coupling effect for most of the investigated intermediate energies. However, although the present application represents the most sophisticated calculation performed with the SMCPP method thus far, the inelastic cross sections, even for the low lying energy states, are still not completely converged for intermediate and higher energies. We discuss possible reasons leading to this discrepancy and point out what further steps need to be undertaken in order to improve the agreement between the calculated and measured cross sections.
Verification of a three-dimensional resin transfer molding process simulation model
NASA Technical Reports Server (NTRS)
Fingerson, John C.; Loos, Alfred C.; Dexter, H. Benson
1995-01-01
Experimental evidence was obtained to complete the verification of the parameters needed for input to a three-dimensional finite element model simulating the resin flow and cure through an orthotropic fabric preform. The material characterizations completed include resin kinetics and viscosity models, as well as preform permeability and compaction models. The steady-state and advancing front permeability measurement methods are compared. The results indicate that both methods yield similar permeabilities for a plain weave, bi-axial fiberglass fabric. Also, a method to determine principal directions and permeabilities is discussed and results are shown for a multi-axial warp knit preform. The flow of resin through a blade-stiffened preform was modeled and experiments were completed to verify the results. The predicted inlet pressure was approximately 65% of the measured value. A parametric study was performed to explain differences in measured and predicted flow front advancement and inlet pressures. Furthermore, PR-500 epoxy resin/IM7 8HS carbon fabric flat panels were fabricated by the Resin Transfer Molding process. Tests were completed utilizing both perimeter injection and center-port injection as resin inlet boundary conditions. The mold was instrumented with FDEMS sensors, pressure transducers, and thermocouples to monitor the process conditions. Results include a comparison of predicted and measured inlet pressures and flow front position. For the perimeter injection case, the measured inlet pressure and flow front results compared well to the predicted results. The results of the center-port injection case showed that the predicted inlet pressure was approximately 50% of the measured inlet pressure. Also, measured flow front position data did not agree well with the predicted results. Possible reasons for error include fiber deformation at the resin inlet and a lag in FDEMS sensor wet-out due to low mold pressures.
26 CFR 1.985-3 - United States dollar approximate separate transactions method.
Code of Federal Regulations, 2010 CFR
2010-04-01
Section 1.985-3 (Internal Revenue Service, Department of the…)—United States dollar approximate separate transactions method. (a) Scope and effective date—(1) Scope. This section describes the United States dollar (dollar) approximate separate transactions method of accounting (DASTM…
NASA Technical Reports Server (NTRS)
Haas, J. E.; Roelke, R. J.; Hermann, P.
1981-01-01
The reasons for the low aerodynamic performance of a 13.5 cm tip diameter aircraft engine starter turbine were investigated. Both the stator and the stage were evaluated. Approximately 10 percent improvement in turbine efficiency was obtained when the honeycomb shroud over the rotor blade tips was filled to obtain a solid shroud surface. Efficiency improvements were obtained for three rotor configurations when the shroud was filled. It is suggested that the large loss associated with the open honeycomb shroud is due primarily to energy loss associated with gas transportation as a result of the blade to blade pressure differential at the tip section.
Student Interpretations of Phylogenetic Trees in an Introductory Biology Course
Dees, Jonathan; Niemi, Jarad; Montplaisir, Lisa
2014-01-01
Phylogenetic trees are widely used visual representations in the biological sciences and the most important visual representations in evolutionary biology. Therefore, phylogenetic trees have also become an important component of biology education. We sought to characterize reasoning used by introductory biology students in interpreting taxa relatedness on phylogenetic trees, to measure the prevalence of correct taxa-relatedness interpretations, and to determine how student reasoning and correctness change in response to instruction and over time. Counting synapomorphies and nodes between taxa were the most common forms of incorrect reasoning, which presents a pedagogical dilemma concerning labeled synapomorphies on phylogenetic trees. Students also independently generated an alternative form of correct reasoning using monophyletic groups, the use of which decreased in popularity over time. Approximately half of all students were able to correctly interpret taxa relatedness on phylogenetic trees, and many memorized correct reasoning without understanding its application. Broad initial instruction that allowed students to generate inferences on their own contributed very little to phylogenetic tree understanding, while targeted instruction on evolutionary relationships improved understanding to some extent. Phylogenetic trees, which can directly affect student understanding of evolution, appear to offer introductory biology instructors a formidable pedagogical challenge. PMID:25452489
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
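Steps one and two are easy to make concrete; a minimal Python sketch follows (the model series and the [2/2] orders are our choices, not the paper's, and step three is indicated only in a comment):

    # Steps 1-2: truncated perturbation series -> Pade approximant.
    # scipy.interpolate.pade builds the rational function from the series
    # coefficients (lowest order first).
    import numpy as np
    from scipy.interpolate import pade

    # Step 1 (assumed model): series for exp(epsilon), standing in for a
    # regular perturbation expansion in the small parameter epsilon.
    series = [1.0, 1.0, 1.0/2, 1.0/6, 1.0/24]

    p, q = pade(series, 2)            # [2/2] Pade approximant P(eps)/Q(eps)

    eps = 0.8
    print("truncated series:", np.polyval(series[::-1], eps))
    print("Pade approximant:", p(eps) / q(eps))
    print("exact value     :", np.exp(eps))

    # Step 3 (not shown) would replace the powers eps**j in P/Q by free
    # parameters delta_j, fixed by requiring the residual of the governing
    # differential equation to be orthogonal to the perturbation
    # coordinate functions from step 1.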
The Effect of Problem-Solving Video Games on the Science Reasoning Skills of College Students
NASA Astrophysics Data System (ADS)
Fanetti, Tina M.
As the world continues to rapidly change, students are faced with the need to develop flexible skills, such as science reasoning, that will help them thrive in the new knowledge economy. Prensky (2001), Gee (2003), and Van Eck (2007) have all suggested that the way to engage learners and teach them the necessary skills is through digital games, but empirical studies focusing on popular games are scant. Digital games, especially video games, could be particularly useful if they offered a flexible and inexpensive method that students could use at their convenience to improve selected science reasoning skills. Problem-solving video games, which require the use of reasoning and problem solving to answer a variety of cognitive challenges, could be a promising method of improving selected science reasoning skills. Using think-aloud protocols and interviews, a qualitative study was carried out with a small sample of college students to examine what impact two popular video games, Professor Layton and the Curious Village and Professor Layton and the Diabolical Box, had on specific science reasoning skills. The subject classified as an expert in both gaming and reasoning tended to use more higher-order thinking and reasoning skills than the novice reasoners. Based on the assessments, the science reasoning of college students did not improve during the course of game play. Similar to earlier studies, students tended to use trial and error as their primary method of solving the various puzzles in the game and, additionally, did not recognize when to use the appropriate reasoning skill to solve a puzzle, such as proportional reasoning.
Garcia-Cantero, Juan J.; Brito, Juan P.; Mata, Susana; Bayona, Sofia; Pastor, Luis
2017-01-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes. PMID:28690511
Management of a stage-structured insect pest: an application of approximate optimization.
Hackett, Sean C; Bonsall, Michael B
2018-06-01
Ecological decision problems frequently require the optimization of a sequence of actions over time, where actions may have both immediate and downstream effects. Dynamic programming can solve such problems only if the dimensionality is sufficiently low. Approximate dynamic programming (ADP) provides a suite of methods applicable to problems of arbitrary complexity at the expense of guaranteed optimality. The most easily generalized method is the look-ahead policy: a brute-force algorithm that identifies reasonable actions by constructing and solving a series of temporally truncated approximations of the full problem over a defined planning horizon. We develop and apply this approach to a pest management problem inspired by the Mediterranean fruit fly, Ceratitis capitata. The model aims to minimize the cumulative costs of management actions and medfly-induced losses over a single 16-week season. The medfly population is stage-structured and grows continuously, while management decisions are made at discrete, weekly intervals. For each week, the model chooses between inaction, insecticide application, or one of six sterile insect release ratios. Look-ahead policy performance is evaluated over a range of planning horizons, two levels of crop susceptibility to medfly, and three levels of pesticide persistence. In all cases, the actions proposed by the look-ahead policy are contrasted with those of a myopic policy that minimizes costs over only the current week. We find that look-ahead policies always out-performed the myopic policy and that decision quality is sensitive to the temporal distribution of costs relative to the planning horizon: it is beneficial to extend the planning horizon when it excludes pertinent costs, but longer planning horizons may reduce decision quality when major costs are resolved imminently. ADP methods such as the look-ahead-policy-based approach developed here render questions that are intractable to dynamic programming amenable to inference, but they should be applied carefully, as their flexibility comes at the expense of guaranteed optimality. Given the complexity of many ecological management problems, however, the capacity to propose a strategy that is "good enough" using a more representative problem formulation may be preferable to an optimal strategy derived from a simplified model. © 2018 by the Ecological Society of America.
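A look-ahead policy is simple enough to sketch generically. The toy below (action set, growth factor, and costs are invented and far simpler than the paper's stage-structured model) enumerates every action sequence over a short horizon, simulates its cumulative cost, and commits only to the first action of the best sequence; horizon=1 recovers the myopic policy:

    from itertools import product

    ACTIONS = ["wait", "spray", "release_low", "release_high"]

    def step(pop, action):
        """Assumed one-week transition: returns (next_pop, cost)."""
        growth = 1.6                                    # assumed weekly growth
        survive = {"wait": 1.0, "spray": 0.3,
                   "release_low": 0.7, "release_high": 0.5}[action]
        act_cost = {"wait": 0.0, "spray": 4.0,
                    "release_low": 2.0, "release_high": 5.0}[action]
        new_pop = pop * growth * survive
        damage = 0.1 * new_pop                          # assumed crop loss
        return new_pop, act_cost + damage

    def lookahead_action(pop, horizon):
        best_cost, best_first = float("inf"), ACTIONS[0]
        for seq in product(ACTIONS, repeat=horizon):    # brute-force search
            p, total = pop, 0.0
            for a in seq:
                p, c = step(p, a)
                total += c
            if total < best_cost:
                best_cost, best_first = total, seq[0]
        return best_first

    pop = 10.0
    for week in range(16):                              # one 16-week season
        a = lookahead_action(pop, horizon=3)
        pop, _ = step(pop, a)
        print(week, a, round(pop, 2))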
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching for the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. As a result, NDP finds—with a reasonable amount of computation and storage—the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
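The flavor of NDP—learn an approximate cost-to-go and act greedily on it—can be sketched with fitted value iteration. The sketch below substitutes a fixed random-feature layer trained by least squares for the dissertation's backpropagation-trained network, on an assumed scalar plant with quadratic cost (all constants are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    A, B, dt, gamma = 1.0, 1.0, 0.1, 0.99     # assumed plant: x' = Ax + Bu
    actions = np.linspace(-1.0, 1.0, 21)

    W, b = rng.normal(size=32), rng.normal(size=32)
    phi = lambda x: np.tanh(x * W + b)        # hidden-layer features of x
    theta = np.zeros(32)
    vhat = lambda x: phi(x) @ theta           # approximate cost-to-go

    X = rng.uniform(-2.0, 2.0, 256)           # sampled training states
    for _ in range(100):                      # fitted value iteration
        targets = [min((x*x + u*u) * dt + gamma * vhat(x + (A*x + B*u) * dt)
                       for u in actions) for x in X]
        Phi = np.stack([phi(x) for x in X])
        theta, *_ = np.linalg.lstsq(Phi, np.array(targets), rcond=None)

    def policy(x):                            # greedy feedback law from vhat
        return min(actions, key=lambda u: (x*x + u*u) * dt
                   + gamma * vhat(x + (A*x + B*u) * dt))

    print(policy(1.0), policy(-0.5))

The greedy policy extracted from the learned value function is exactly the "optimal feedback" structure the dissertation attributes to DP, obtained here at a fraction of DP's storage cost.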
Two-dimensional grid-free compressive beamforming.
Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli
2017-08-01
Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for the measurement with plane microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite programming is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on alternating direction method of multipliers is presented to solve the positive semidefinite programming. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.
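For contrast, the conventional on-grid formulation that this work improves upon fits in a few lines: a dictionary of steering vectors on a DOA grid and a sparse recovery step (here orthogonal matching pursuit; the array geometry, wavelength, and source angles are all invented). Its basis-mismatch weakness appears as soon as a true DOA falls between grid points, which is precisely what the grid-free method avoids:

    import numpy as np

    M, d, lam = 16, 0.5, 1.0                   # mics, spacing, wavelength (assumed)
    theta_grid = np.deg2rad(np.arange(-90, 91))         # candidate DOAs, 1 deg grid
    def steering(theta):
        return np.exp(2j * np.pi * d / lam * np.arange(M) * np.sin(theta))
    A = np.column_stack([steering(t) for t in theta_grid])       # dictionary

    # measured pressures: two plane-wave sources plus noise
    rng = np.random.default_rng(0)
    p = 1.0 * steering(np.deg2rad(-20)) + 0.7 * steering(np.deg2rad(35))
    p = p + 0.05 * (rng.normal(size=M) + 1j * rng.normal(size=M))

    # orthogonal matching pursuit for a 2-sparse source distribution
    support, r = [], p.copy()
    for _ in range(2):
        support.append(int(np.argmax(np.abs(A.conj().T @ r))))
        x, *_ = np.linalg.lstsq(A[:, support], p, rcond=None)
        r = p - A[:, support] @ x
    print(sorted(round(float(np.rad2deg(theta_grid[i]))) for i in support))  # ~[-20, 35]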
DOE Office of Scientific and Technical Information (OSTI.GOV)
Razafinjanahary, H.; Rogemond, F.; Chermette, H.
The MS-LSD method remains a method of interest when speed and modest computer resources are required; its main drawback is some lack of accuracy, mainly due to the muffin-tin distribution of the potential. In the case of large clusters or molecules, the use of an empty sphere to fill, in part, the large intersphere region can greatly improve the results. Calculations on C60 have been undertaken to underline this trend because, on the one hand, the fullerenes exhibit a remarkable possibility to fit a large empty sphere in the center of the cluster and, on the other hand, numerous accurate calculations have already been published, allowing quantitative comparison with these results. The author's calculations suggest that, with an added empty sphere, the results compare well with those of more accurate calculations. The calculated electron affinities for C60 and C60^- are in reasonable agreement with experimental values, but the stability of C60^2- in the gas phase is not found. 35 refs., 3 figs., 5 tabs.
Cold dark matter. 1: The formation of dark halos
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We use numerical simulations of critically closed cold dark matter (CDM) models to study the effects of numerical resolution on observable quantities. We study simulations with up to 256^3 particles using the particle-mesh (PM) method and with up to 144^3 particles using the adaptive particle-particle particle-mesh (P3M) method. Comparisons of galaxy halo distributions are made among the various simulations. We also compare distributions with observations, and we explore methods for identifying halos, including a new algorithm that finds all particles within closed contours of the smoothed density field surrounding a peak. The simulated halos show more substructure than predicted by the Press-Schechter theory. We are able to rule out all omega = 1 CDM models with linear amplitude sigma_8 ≳ 0.5 because the simulations produce too many massive halos compared with the observations. The simulations also produce too many low-mass halos. The distribution of halos characterized by their circular velocities for the P3M simulations is in reasonable agreement with the observations for 150 km/s ≤ V_circ ≤ 350 km/s.
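The contour-based identification scheme shrinks naturally to a two-dimensional toy: deposit particles on a grid, smooth the density field, and call each connected overdense region around a peak a halo. All positions, the threshold, and the smoothing scale below are invented for illustration:

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    # invented particle positions: two clumps plus a uniform background
    pts = np.vstack([rng.normal([25, 25], 2, (300, 2)),
                     rng.normal([70, 60], 3, (400, 2)),
                     rng.uniform(0, 100, (300, 2))])
    grid, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=100,
                                range=[[0, 100], [0, 100]])
    rho = ndimage.gaussian_filter(grid, sigma=2)      # smoothed density field

    # connected regions above a density contour, each enclosing a peak
    labels, nhalo = ndimage.label(rho > 3 * rho.mean())
    masses = ndimage.sum(grid, labels, index=range(1, nhalo + 1))
    print(nhalo, masses)    # halo count and particle counts per halo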
Models for Train Passenger Forecasting of Java and Sumatra
NASA Astrophysics Data System (ADS)
Sartono
2017-04-01
People tend to take public transportation to avoid heavy traffic, especially in Java. In Jakarta, the number of railway passengers exceeds the capacity of the trains at peak times. This is an opportunity as well as a challenge: managed well, the company can earn high profits; otherwise, it may lead to disaster. This article discusses models of train passenger numbers, with the aim of finding reasonable models for prediction over time. The Box-Jenkins method is employed to develop a basic model, which is then compared with models obtained using the exponential smoothing method and the regression method. The results show that the Holt-Winters model is better for one-month, three-month, and six-month-ahead predictions of passenger numbers in Java, while SARIMA(1,1,0)(2,0,0) is more accurate for nine-month and twelve-month horizons. For Sumatra, SARIMA(1,1,1)(0,0,2) gives a better approximation one month ahead, an ARIMA model is best for three-month-ahead prediction, and the trend-seasonal linear model has the lowest RMSE for six-, nine-, and twelve-month horizons.
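Both model families named here are standard in Python's statsmodels, so the comparison is easy to reproduce; the sketch below uses an invented monthly series (the actual passenger data are not reproduced, and the 12-month seasonal period is an assumption):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)
    t = np.arange(120)
    y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 120)
    y = pd.Series(y, index=pd.date_range("2007-01", periods=120, freq="MS"))

    train, test = y[:-12], y[-12:]
    hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                              seasonal_periods=12).fit()
    sar = SARIMAX(train, order=(1, 1, 0),
                  seasonal_order=(2, 0, 0, 12)).fit(disp=False)

    for name, fc in [("Holt-Winters", hw.forecast(12)),
                     ("SARIMA(1,1,0)(2,0,0)12", sar.forecast(12))]:
        rmse = float(np.sqrt(np.mean((fc - test) ** 2)))
        print(name, round(rmse, 2))

Which family wins depends on the horizon and the series, which is exactly the pattern the abstract reports.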
Loomis, John H.; Richardson, Leslie; Kroeger, Timm; Casey, Frank
2014-01-01
Ecosystem goods and services are now widely recognized as the benefits that humans derive from the natural environment around them including abiotic (e.g. atmosphere) and biotic components. The work by Costanza et al. (1997) to value the world’s ecosystem services brought the concept of ecosystem service valuation to the attention of the world press and environmental economists working in the area of non-market valuation. The article’s US$33 trillion estimate of these services, despite world GDP being only US$18 trillion, was definitely headline grabbing. This ambitious effort was undertaken with reliance on transferring existing values per unit from other (often site specific) valuation studies. Benefit transfer (see Boyle and Bergstrom, 1992; Rosenberger and Loomis, 2000, 2001) involves transfers of values per unit from an area that has been valued using primary valuation methods such as contingent valuation, travel cost or hedonic property methods (Champ et al., 2003) to areas for which values are needed. Benefit transfer often provides a reasonable approximation of the benefit of unstudied ecosystem services based on transfer of benefits estimates per unit (per visitor day, per acre) from existing studies. An appropriate benefit transfer should be performed on the same spatial scale of analysis (e.g. reservoir to reservoir, city to city) as the original study. However, the reasonableness of benefit transfer may be strained when applying locally derived per acre values from studies of several thousand acres of a resource such as wetlands to hundreds of millions of acres of wetlands.
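As described, unit-value benefit transfer is simple arithmetic: take a per-unit value from a studied site, optionally adjust it for site differences, and scale by the quantity of the resource. A hypothetical sketch (all numbers invented; the income adjustment is one common, assumed form):

    # hypothetical unit-value benefit transfer with an income adjustment
    study_value_per_acre = 120.0   # invented US$/acre/year at the study site
    policy_site_acres = 5_000
    income_ratio = 0.9             # policy-site / study-site income (invented)
    elasticity = 1.0               # assumed income elasticity of WTP

    transferred_unit_value = study_value_per_acre * income_ratio ** elasticity
    aggregate_benefit = transferred_unit_value * policy_site_acres
    print(round(aggregate_benefit))   # annual benefit at the policy site, US$

The closing caveat of the abstract applies directly: this arithmetic becomes unreliable when a per-acre value estimated on thousands of acres is scaled to hundreds of millions of acres.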
More on the alleged 1970 geomagnetic jerk
Alldredge, L.R.
1985-01-01
French and United Kingdom workers have published reports describing a sudden change in the secular acceleration, called an impulse or a jerk, which took place around 1970. They claim that this change took place in a period of a year or two and that the sources of the alleged jerk are internal. An earlier paper by this author questioned their method of analysis, pointing out that their piecemeal fitting of parabolas to the data will always create a discontinuity in the secular acceleration where the parabolas join, and that the place where the parabolas join is an a priori assumption and not a result of the analysis. This paper gives a very brief summary of that first paper and then adds additional reasons for questioning the allegation that there was a worldwide sudden jerk in the magnetic field of internal origin around 1970. These new reasons are based largely on new field models which give cubic approximations of the field right through the 1970 timeframe and therefore have no discontinuities in the second derivative (jerk) around 1970. Some recent Japanese work shows several sudden changes in the secular variation pattern which cover limited areas and do not seem to be closely related to each other or to the irregularity noted in the European area near 1970. The secular variation picture which seems to be emerging is one with many local or limited-regional secular variation changes which appear to be almost unrelated to each other in time or space. A worldwide spherical harmonic model including coefficients up to degree 13 could never properly depict such a situation. © 1985.
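The methodological point is easy to reproduce numerically. In the toy below (entirely synthetic, not the geomagnetic data), a quantity that varies as a smooth cubic is fitted by separate parabolas on either side of a pre-chosen join year; the two fitted second derivatives disagree at the join, so the procedure manufactures a 'jerk' even though the underlying series has a perfectly continuous second derivative:

    import numpy as np

    years = np.arange(1955.0, 1986.0)
    field = 0.002 * (years - 1970) ** 3 + 5 * (years - 1970) + 40000  # smooth

    knot = 1970.0                               # the a priori join point
    left, right = years <= knot, years >= knot
    c_left = np.polyfit(years[left], field[left], 2)
    c_right = np.polyfit(years[right], field[right], 2)
    print("2nd derivative, left fit :", 2 * c_left[0])
    print("2nd derivative, right fit:", 2 * c_right[0])   # spurious 'jerk'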
Women's reasons for choosing abortion method: A systematic literature review.
Kanstrup, Charlotte; Mäkelä, Marjukka; Hauskov Graungaard, Anette
2017-07-01
We aim to describe and classify reasons behind women's choice between medical and surgical abortion. A systematic literature review was conducted in PubMed and PsycINFO in October 2015. The subjects were women in early pregnancy opting for abortion at clinics or hospitals in high-income countries. We extracted women's reasons for choice of abortion method and analysed these qualitatively, looking at main reasons for choosing either medical or surgical abortion. Reasons for choice of method were classified to five main groups: technical nature of the intervention, fear of complications, fear of surgery or anaesthesia, timing and sedation. Reasons for selecting medical abortion were often based on the perception of the method being 'more natural' and the wish to have abortion in one's home in addition to fear of complications. Women who opted for surgical abortion appreciated the quicker process, viewed it as the safer option, and wished to avoid pain and excess bleeding. Reasons were often based on emotional reactions, previous experiences and a lack of knowledge about the procedures. Some topics such as pain or excess bleeding received little attention. Overall the quality of the studies was low, most studies were published more than 10 years ago, and the generalisability of the findings was poor. Women did not base their choice of abortion method only on rational information from professionals but also on emotions and especially fears. Support techniques for a more informed choice are needed. Recent high-quality studies in this area are lacking.
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-07-01
Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
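The "very long, inconclusive intervals" finding for HKSJ with two studies follows directly from the degrees of freedom: with k = 2, the interval uses a t-quantile with one degree of freedom (about 12.71 at the 97.5% level). A minimal sketch with invented numbers (the heterogeneity variance is assumed rather than estimated, to keep the example short):

    import numpy as np
    from scipy import stats

    y = np.array([-0.5, 0.1])            # invented study effect estimates
    s = np.array([0.20, 0.25])           # their standard errors
    tau2 = 0.05                          # assumed between-study heterogeneity

    w = 1.0 / (s**2 + tau2)
    mu = np.sum(w * y) / np.sum(w)                      # pooled effect
    q = np.sum(w * (y - mu)**2) / (len(y) - 1)          # HKSJ scale factor
    se = np.sqrt(q / np.sum(w))
    tq = stats.t.ppf(0.975, df=len(y) - 1)              # = 12.706...
    print(mu, (mu - tq * se, mu + tq * se))             # very long interval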
Adaptive photoacoustic imaging quality optimization with EMD and reconstruction
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.
2016-10-01
Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot give useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed across the different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by brute-force search would cost too much time, which prevents this method from practical use. To find parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for the optimal IMF parameters in this paper. The effectiveness of our proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which bears potential for future clinical PA imaging de-noising.
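The decompose-weight-reconstruct loop can be sketched on simulated data, where a clean reference is available to score reconstructions. A hedged sketch (it assumes the third-party PyEMD package for the decomposition, and uses scipy's dual_annealing as a stand-in for the paper's simulated annealing over per-IMF weights):

    import numpy as np
    from PyEMD import EMD
    from scipy.optimize import dual_annealing

    t = np.linspace(0.0, 1.0, 1000)
    clean = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)    # toy PA transient
    noisy = clean + 0.4 * np.random.default_rng(0).normal(size=t.size)

    imfs = EMD().emd(noisy)              # adaptive decomposition into IMFs

    def recon_mse(w):                    # per-IMF weights in [0, 1]
        return np.mean((w @ imfs - clean) ** 2)

    res = dual_annealing(recon_mse, bounds=[(0, 1)] * len(imfs), maxiter=200)
    print(np.round(res.x, 2))            # noise-dominated IMFs get small weights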
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probabilities for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean method suffers from a numerically low convergence rate: a simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, running multiple MCMC chains with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case in which four alternative models are postulated on the basis of different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems requiring model uncertainty quantification.
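The thermodynamic idea can be sketched end-to-end on a toy model with a known answer. Everything below is invented for illustration (a normal mean with a normal prior); integrating the tempered expectation of the log-likelihood over the heating coefficient recovers the log marginal likelihood, which the closed form verifies:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(1.0, 1.0, 20)         # model: data ~ N(mu, 1)
    prior_sd = 2.0                          # prior: mu ~ N(0, prior_sd^2)

    def loglik(mu):
        return np.sum(stats.norm.logpdf(data, mu, 1.0))

    betas = np.linspace(0.0, 1.0, 21)       # heating coefficients (coarse ladder)
    mean_ll = []
    for beta in betas:
        mu, trace = 0.0, []
        for _ in range(5000):               # tempered Metropolis sampler
            prop = mu + rng.normal(0.0, 0.5)
            logr = (beta * (loglik(prop) - loglik(mu))
                    + stats.norm.logpdf(prop, 0.0, prior_sd)
                    - stats.norm.logpdf(mu, 0.0, prior_sd))
            if np.log(rng.random()) < logr:
                mu = prop
            trace.append(loglik(mu))
        mean_ll.append(np.mean(trace[1000:]))   # discard burn-in

    f, db = np.array(mean_ll), betas[1] - betas[0]
    log_ml_ti = db * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule

    # exact log marginal likelihood: data jointly normal, cov I + sd^2 * 11^T
    n = len(data)
    cov = np.eye(n) + prior_sd**2 * np.ones((n, n))
    log_ml_exact = stats.multivariate_normal.logpdf(data, np.zeros(n), cov)
    print(round(float(log_ml_ti), 2), round(float(log_ml_exact), 2))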
Feasibility study of a single, elliptical heliocentric Earth-Mars trajectory
NASA Technical Reports Server (NTRS)
Blake, M.; Fulgham, K.; Westrup, S.
1989-01-01
The initial intent of this design project was to evaluate the existence and feasibility of a single elliptical heliocentric Earth/Mars trajectory. This trajectory was constrained to encounter Mars twice in its orbit, within a time interval of 15 to 180 Earth days between encounters. The single ellipse restriction was soon found to be prohibitive for reasons shown later. Therefore, the approach taken in the design of the round-trip mission to Mars was to construct single-leg trajectories which connected two planets on two prescribed dates. Three methods of trajectory design were developed. Method 1 is an eclectic approach and employs Gaussian Orbit Determination (Method 1A) and Lambert-Euler Preliminary Orbit Determination (Method 1B) in conjunction with each other. Method 2 is an additional version of Lambert's Solution to orbit determination, and both a coplanar and a noncoplanar solution were developed within Method 2. In each of these methods, the fundamental variables are two position vectors and the time between the position vectors. In all methods, the motion was considered Keplerian motion and the reference frame origin was located at the sun. Perturbative effects were not considered in Method 1. The feasibility study of round-trip Earth/Mars trajectories maintains generality by considering only heliocentric trajectory parameters and planetary approach conditions. The coordinates and velocity components of the planets, for the standard epoch J2000, were computed from an approximate set of osculating elements by the procedure outlined in an ephemeris of coordinates.