On the integration of reinforcement learning and approximate reasoning for control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
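The abstract points to TD learning and Q-learning as the theoretical backbone for such hybrid controllers. Below is a minimal sketch of a standard tabular Q-learning update on a toy 5-state chain; the environment, hyperparameters, and reward structure are illustrative assumptions and are not taken from the ARIC papers.

```python
import random

# Minimal tabular Q-learning sketch (illustrative; not the ARIC architecture itself).
# Environment: a 5-state chain, actions 0 (left) / 1 (right), reward 1 at the right end.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters

def step(state, action):
    """Deterministic chain dynamics: the episode ends with reward 1 at the last state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(s):
    """Epsilon-greedy selection with random tie-breaking."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    values = [Q[(s, a)] for a in ACTIONS]
    best = max(values)
    return random.choice([a for a, v in zip(ACTIONS, values) if v == best])

for episode in range(500):
    s, done = 0, False
    while not done:
        a = choose_action(s)
        s2, r, done = step(s, a)
        # Q-learning (off-policy temporal-difference) update
        target = r + (0.0 if done else GAMMA * max(Q[(s2, x)] for x in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print({s: round(max(Q[(s, a)] for a in ACTIONS), 3) for s in range(N_STATES)})
```

The learned state values should approach GAMMA raised to the number of steps remaining to the goal, which is the standard fixed point of this update.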
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.
Application of plausible reasoning to AI-based control systems
NASA Technical Reports Server (NTRS)
Berenji, Hamid; Lum, Henry, Jr.
1987-01-01
Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.
Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model
NASA Astrophysics Data System (ADS)
Chen, Huaizhen; Zhang, Guangzhi
2018-03-01
Natural fractures play an important role in the migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effect of fractures on total rock compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive the azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and the azimuthal elastic impedance, implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance followed by an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reasonableness. Synthetic tests confirm that the method yields stable estimates of fracture compliances for seismic data with a moderate signal-to-noise ratio of Gaussian noise, and the test on real data shows that reasonable fracture compliances are obtained using the proposed method.
Advanced Methods of Approximate Reasoning
1990-11-30
Indexed text fragments only (no abstract available): "…about Knowledge and Action," Technical Note 191, Menlo Park, California: SRI International, 1980; [26] N. J. Nilsson, "Probabilistic logic," Artificial Intelligence; "…reasoning," Artificial Intelligence, 13:81-132, 1980; [30] R. Reiter, "On closed world data bases," in H. Gallaire and J. Minker (eds.), Logic and Data Bases. The excerpt also acknowledges Dr. Abraham Waksman of the Air Force Office of Scientific Research and Dr. David Hislop of the Army Research Office.
ERIC Educational Resources Information Center
CAPOBIANCO, RUDOLPH J.; AND OTHERS
A study was made to establish and analyze the methods of solving inductive reasoning problems by mentally retarded children. The major objectives were (1) to explore and describe reasoning in mentally retarded children, (2) to compare their methods with those utilized by normal children of approximately the same mental age, (3) to explore the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Darby, John L.
LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is composed of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code and the deployment package for installation on client machines.
AVCS Simulator Test Plan and Design Guide
NASA Technical Reports Server (NTRS)
Shelden, Stephen
2001-01-01
Internal document for communication of AVCS direction and documentation of simulator functionality. Discusses methods for AVCS simulation evaluation of pilot functions, implementation strategy of varying functional representation of pilot tasks (by instantiations of a base AVCS to reasonably approximate the interface of various vehicles -- e.g. Altair, GlobalHawk, etc.).
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, because definitional knowledge is explicit and separate from the heuristic knowledge used for plausible inferences, the maintainability of expert systems could be improved.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
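To make the described approach concrete, the following is a minimal sketch of a simulation-based (parametric, Gaussian) likelihood approximation embedded in a random-walk Metropolis sampler. The toy Poisson "process model", the single summary statistic, and the flat prior are assumptions standing in for FORMIND and its inventory summaries.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy stochastic 'process model': Poisson counts with rate theta (stand-in for FORMIND)."""
    return rng.poisson(theta, size=n)

def summary(x):
    """Summary statistic used to build the likelihood approximation (here just the mean count)."""
    return np.mean(x)

def log_synthetic_likelihood(theta, obs_summary, n_sim=30):
    """Parametric (Gaussian) likelihood approximation built from repeated stochastic simulations."""
    sims = np.array([summary(simulate(theta)) for _ in range(n_sim)])
    mu, sd = sims.mean(), sims.std(ddof=1) + 1e-9
    return -0.5 * ((obs_summary - mu) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

# "Observed" data generated with a known parameter, so we can check the posterior recovers it.
true_theta = 4.0
obs = summary(simulate(true_theta))

# Random-walk Metropolis over theta with a flat prior on (0, 20) (assumed).
theta, logl = 8.0, log_synthetic_likelihood(8.0, obs)
chain = []
for it in range(2000):
    prop = theta + rng.normal(0, 0.5)
    if 0 < prop < 20:
        logl_prop = log_synthetic_likelihood(prop, obs)
        if np.log(rng.uniform()) < logl_prop - logl:
            theta, logl = prop, logl_prop
    chain.append(theta)

print("posterior mean:", np.mean(chain[500:]), "true value:", true_theta)
```

The posterior mean should land close to the known generating value, mirroring the paper's test of retrieving known parameters from virtual inventory data.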
Discovering relevance knowledge in data: a growing cell structures approach.
Azuaje, F; Dubitzky, W; Black, N; Adamson, K
2000-01-01
Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar or how they should be indexed is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight into and understanding of the underlying mechanisms of the proposed model, a detailed empirical analysis of the model's structural and behavioral properties is also provided.
2012-12-01
Indexed text fragments only (no abstract available): the excerpt concerns geometric acoustics. One begins with the Eikonal equation for the acoustic phase function S(t, x), as derived from the geometric acoustics (high-frequency) approximation; the bottom profile zb(x) is assumed smooth and reasonably approximated as piecewise linear; the time-domain ray (characteristic) equations for the Eikonal equation are given; and the error in travel time is examined, which is more physically relevant than the global error in the phase, since it provides the phase information for the Eikonal equation.
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
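As an illustration of the observer statistic discussed above, the sketch below computes a channelized Hotelling observer template and detectability SNR from channel outputs; the synthetic channel means and covariance are assumptions, not values derived from MAP PET reconstructions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: channel outputs for signal-present and signal-absent image ensembles.
# In practice these would come from MAP reconstructions (or their theoretical mean/covariance).
n_channels, n_images = 4, 500
mean_absent = np.zeros(n_channels)
mean_present = np.array([0.8, 0.5, 0.2, 0.1])        # assumed signal profile in channel space
cov = 0.5 * np.eye(n_channels) + 0.1                  # assumed channel covariance

absent = rng.multivariate_normal(mean_absent, cov, size=n_images)
present = rng.multivariate_normal(mean_present, cov, size=n_images)

# Channelized Hotelling observer: template w = K^{-1} (mean difference);
# detectability d'^2 = (mean difference)^T K^{-1} (mean difference).
delta = present.mean(axis=0) - absent.mean(axis=0)
K = 0.5 * (np.cov(present.T) + np.cov(absent.T))      # pooled channel covariance
w = np.linalg.solve(K, delta)

t_present = present @ w
t_absent = absent @ w
snr = (t_present.mean() - t_absent.mean()) / np.sqrt(0.5 * (t_present.var() + t_absent.var()))
print("CHO SNR estimate:", snr, " analytic d':", np.sqrt(delta @ np.linalg.solve(K, delta)))
```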
Calculation of wing response to gusts and blast waves with vortex lift effect
NASA Technical Reports Server (NTRS)
Chao, D. C.; Lan, C. E.
1983-01-01
A numerical study of the response of aircraft wings to atmospheric gusts and to nuclear explosions when flying at subsonic speeds is presented. The method is based upon the unsteady quasi-vortex-lattice method, the unsteady suction analogy, and Padé approximants. The calculated results, showing the vortex lag effect, yield reasonable agreement with experimental data for incremental lift on wings in gust penetration and due to nuclear blast waves.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
Lognormal Approximations of Fault Tree Uncertainty Distributions.
El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P
2018-01-26
Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions: therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
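A minimal Monte Carlo reference calculation of the kind the closed-form lognormal approximation is compared against is sketched below, together with a Wilks-style upper bound. The tiny example tree and the lognormal parameters (medians and error factors) are assumptions, not data from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy fault tree: TOP = (A AND B) OR C, with lognormally distributed basic event probabilities.
def sample_lognormal(median, error_factor, size):
    """Lognormal samples parameterized by median and error factor (EF at the 95th percentile)."""
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(mean=np.log(median), sigma=sigma, size=size)

n = 100_000
pA = sample_lognormal(1e-3, 3.0, n)    # assumed median and EF
pB = sample_lognormal(2e-3, 3.0, n)
pC = sample_lognormal(5e-5, 10.0, n)

# Rare-event approximation of the top event probability for each sample.
p_top = pA * pB + pC

print("mean:", p_top.mean())
print("5th/50th/95th percentiles:", np.percentile(p_top, [5, 50, 95]))

# Wilks-style bound: with 59 samples, the largest observed value is a 95%/95% upper bound
# on the 95th percentile, at a tiny fraction of the full Monte Carlo cost.
print("Wilks 95/95 upper bound from 59 samples:", np.max(p_top[:59]))
```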
Probabilistic Reasoning for Plan Robustness
NASA Technical Reports Server (NTRS)
Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.
2005-01-01
A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing projected system state can be simplified in some cases. Common approximation and novel methods are compared for over-constrained and lightly constrained domains. The system compares a few common approximation methods for an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. The improvement is more significant for larger problems and problems with higher resource subscription levels but diminishes as the system is allowed to accept higher risk levels.
Toward Webscale, Rule-Based Inference on the Semantic Web Via Data Parallelism
2013-02-01
Another work distinct from its peers is the work on approximate reasoning by Rudolph et al. [34] in which multiple inference sys- tems were combined not...Workshop Scalable Semantic Web Knowledge Base Systems, 2010, pp. 17–31. [34] S. Rudolph , T. Tserendorj, and P. Hitzler, “What is approximate reasoning...2013] [55] M. Duerst and M. Suignard. (2005, Jan .). RFC 3987 – internationalized resource identifiers (IRIs). IETF. [Online]. Available: http
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Zaky, M. A.
2015-01-01
In this paper, we propose and analyze an efficient operational formulation of the spectral tau method for the multi-term time-space fractional differential equation with Dirichlet boundary conditions. The shifted Jacobi operational matrices of the Riemann-Liouville fractional integral and the left-sided and right-sided Caputo fractional derivatives are presented. By using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problems. Furthermore, the error is estimated, and the proposed method has reasonable convergence rates in the spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm if we suitably choose the corresponding special cases of the Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.
Mean-field approximation for spacing distribution functions in classical systems
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
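For orientation, the sketch below estimates a nearest-neighbour spacing distribution p(s) of a simple 1D point process by simulation and prints it next to the Poisson form and the Wigner surmise. The uncorrelated point process is an illustrative assumption, not one of the interacting systems analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Nearest-neighbour spacing distribution p(s) for a 1D point process, estimated by
# simulation and compared with two reference forms.
n_points = 200_000
positions = np.sort(rng.uniform(0, n_points, size=n_points))   # mean density ~ 1
s = np.diff(positions)                                          # spacings
s /= s.mean()                                                   # normalize to mean spacing 1

hist, edges = np.histogram(s, bins=50, range=(0, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

poisson = np.exp(-centers)                                        # uncorrelated points
wigner = (np.pi / 2) * centers * np.exp(-np.pi * centers**2 / 4)  # Wigner surmise (level repulsion)

for c, h, p, w in list(zip(centers, hist, poisson, wigner))[:10]:
    print(f"s={c:4.2f}  simulated={h:5.3f}  Poisson={p:5.3f}  Wigner={w:5.3f}")
```

For the uncorrelated process the simulated histogram should follow the Poisson curve and deviate from the Wigner surmise, which is the kind of comparison between candidate forms that the paper carries out for interacting systems.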
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
The frozen nucleon approximation in two-particle two-hole response functions
Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...
2017-07-10
Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the "frozen" meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting "frozen nucleon approximation", the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
A reinforcement learning-based architecture for fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1992-01-01
This paper introduces a new method for learning to refine a rule-based fuzzy logic controller. A reinforcement learning technique is used in conjunction with a multilayer neural network model of a fuzzy controller. The approximate reasoning based intelligent control (ARIC) architecture proposed here learns by updating its prediction of the physical system's behavior and fine tunes a control knowledge base. Its theory is related to Sutton's temporal difference (TD) method. Because ARIC has the advantage of using the control knowledge of an experienced operator and fine tuning it through the process of learning, it learns faster than systems that train networks from scratch. The approach is applied to a cart-pole balancing system.
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-01-27
Here, we estimate the capacity value of concentrating solar power (CSP) plants without thermal energy storage in the southwestern U.S. Our results show that CSP plants have capacity values that are between 45% and 95% of maximum capacity, depending on their location and configuration. We also examine the sensitivity of the capacity value of CSP to a number of factors and show that capacity factor-based methods can provide reasonable approximations of reliability-based estimates.
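A hedged sketch of the capacity factor-based approximation mentioned in the abstract: average the plant's output over the highest-load hours and divide by nameplate capacity. The synthetic load and CSP generation profiles and the top-10% cutoff below are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Capacity-factor approximation of capacity value: average the plant's output
# (as a fraction of nameplate capacity) over the highest-load hours of the year.
hours = 8760
t = np.arange(hours)
load = 30_000 + 10_000 * np.sin(2 * np.pi * (t % 24) / 24 - np.pi / 2) + rng.normal(0, 1500, hours)

nameplate_mw = 100.0
solar_shape = np.clip(np.sin(2 * np.pi * ((t % 24) - 6) / 24), 0, None)   # crude daytime profile
csp_output = nameplate_mw * solar_shape * rng.uniform(0.6, 1.0, hours)     # cloudiness noise

top_n = int(0.1 * hours)                     # e.g. the top 10% of load hours (assumed cutoff)
top_hours = np.argsort(load)[-top_n:]
capacity_value = csp_output[top_hours].mean() / nameplate_mw
print(f"approximate capacity value: {capacity_value:.2%} of nameplate capacity")
```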
An experiment-based comparative study of fuzzy logic control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing
1989-01-01
An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cartpole balancing problem in real-time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning and those based on conventional control schemes is furnished.
Incorporation of varying types of temporal data in a neural network
NASA Technical Reports Server (NTRS)
Cohen, M. E.; Hudson, D. L.
1992-01-01
Most neural network models do not specifically deal with temporal data. Handling of these variables is complicated by the different uses to which temporal data are put, depending on the application. Even within the same application, temporal variables are often used in a number of different ways. In this paper, types of temporal data are discussed, along with their implications for approximate reasoning. Methods for integrating approximate temporal reasoning into existing neural network structures are presented. These methods are illustrated in a medical application for diagnosis of graft-versus-host disease which requires the use of several types of temporal data.
Energy conservation - A test for scattering approximations
NASA Technical Reports Server (NTRS)
Acquista, C.; Holland, A. C.
1980-01-01
The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.
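The energy-conservation test can be illustrated with the Rayleigh approximation for a small absorbing sphere: the scattering and absorption efficiencies must be non-negative and sum to the extinction efficiency. The size parameter and complex refractive index below are assumed values, not cases from the paper.

```python
import numpy as np

# Energy-conservation sanity check for the Rayleigh (small-sphere) approximation.
def rayleigh_efficiencies(x, m):
    """Scattering/absorption efficiencies of a sphere with size parameter x << 1."""
    a = (m**2 - 1) / (m**2 + 2)                 # polarizability factor
    q_sca = (8.0 / 3.0) * x**4 * abs(a) ** 2
    q_abs = 4.0 * x * a.imag
    return q_sca, q_abs

x = 0.05                                        # size parameter 2*pi*r/lambda (assumed)
m = complex(1.5, 0.01)                          # complex refractive index (assumed)

q_sca, q_abs = rayleigh_efficiencies(x, m)
q_ext = q_sca + q_abs                           # energy conservation: extinction = scattering + absorption
print(f"Q_sca={q_sca:.3e}  Q_abs={q_abs:.3e}  Q_ext={q_ext:.3e}")
assert q_sca >= 0 and q_abs >= 0, "approximation violates energy conservation"
```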
Iterative CT reconstruction using coordinate descent with ordered subsets of data
NASA Astrophysics Data System (ADS)
Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.
2016-04-01
Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
Uncertainty management by relaxation of conflicting constraints in production process scheduling
NASA Technical Reports Server (NTRS)
Dorn, Juergen; Slany, Wolfgang; Stary, Christian
1992-01-01
Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
Sponer, Jiří; Sponer, Judit E; Mládek, Arnošt; Jurečka, Petr; Banáš, Pavel; Otyepka, Michal
2013-12-01
Base stacking is a major interaction shaping up and stabilizing nucleic acids. During the last decades, base stacking has been extensively studied by experimental and theoretical methods. Advanced quantum-chemical calculations clarified that base stacking is a common interaction, which in the first approximation can be described as combination of the three most basic contributions to molecular interactions, namely, electrostatic interaction, London dispersion attraction and short-range repulsion. There is not any specific π-π energy term associated with the delocalized π electrons of the aromatic rings that cannot be described by the mentioned contributions. The base stacking can be rather reasonably approximated by simple molecular simulation methods based on well-calibrated common force fields although the force fields do not include nonadditivity of stacking, anisotropy of dispersion interactions, and some other effects. However, description of stacking association in condensed phase and understanding of the stacking role in biomolecules remain a difficult problem, as the net base stacking forces always act in a complex and context-specific environment. Moreover, the stacking forces are balanced with many other energy contributions. Differences in definition of stacking in experimental and theoretical studies are explained. Copyright © 2013 Wiley Periodicals, Inc.
Discrimination of Mixed Taste Solutions using Ultrasonic Wave and Soft Computing
NASA Astrophysics Data System (ADS)
Kojima, Yohichiro; Kimura, Futoshi; Mikami, Tsuyoshi; Kitama, Masataka
In this study, the ultrasonic acoustic properties of mixed taste solutions were investigated, and the possibility of taste sensing based on the acoustic properties obtained was examined. In previous studies, properties of solutions were discriminated based on sound velocity, amplitude, and frequency characteristics of ultrasonic waves propagating through the five basic taste solutions and marketed beverages. However, to make this method applicable to beverages that contain many taste substances, further studies are required. In this paper, the waveform of an ultrasonic wave with a frequency of approximately 5 MHz propagating through mixed solutions composed of sweet and salty substances was measured. As a result, differences among the solutions were clearly observed as differences in their properties. Furthermore, these mixed solutions were discriminated by a self-organizing neural network, and the mixing ratio of the solutions was estimated by a distance-type fuzzy reasoning method. The possibility of taste sensing was thus demonstrated using ultrasonic acoustic properties and soft computing techniques, such as the self-organizing neural network and the distance-type fuzzy reasoning method.
Transport of phase space densities through tetrahedral meshes using discrete flow mapping
NASA Astrophysics Data System (ADS)
Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor
2017-01-01
Discrete flow mapping was recently introduced as an efficient ray based method determining wave energy distributions in complex built up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.
NASA Astrophysics Data System (ADS)
Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.
2016-09-01
A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect, the understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds-averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Formanek, Martin; Vana, Martin; Houfek, Karel
2010-09-30
We compare the efficiency of two methods for the numerical solution of the time-dependent Schroedinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicholson method. As a testing system, the free propagation of a particle in one dimension is used. The space discretization is based on high-order finite differences to approximate accurately the kinetic energy operator in the Hamiltonian. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e., on the choice of the time step.
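For reference, a minimal Crank-Nicolson propagation of a free 1D Gaussian wave packet is sketched below. It uses second-order finite differences for brevity (the paper uses high-order differences), and the grid, time step, and packet parameters are assumptions.

```python
import numpy as np

# Minimal Crank-Nicolson propagation of a free Gaussian wave packet in 1D (hbar = m = 1).
nx, dx, dt = 600, 0.1, 0.01
x = (np.arange(nx) - nx / 2) * dx
psi = np.exp(-(x**2) / 2) * np.exp(1j * 2.0 * x)     # Gaussian packet with momentum k0 = 2
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Kinetic-energy operator T = -(1/2) d^2/dx^2 with second-order finite differences (tridiagonal).
main = np.full(nx, 1.0 / dx**2)
off = np.full(nx - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Crank-Nicolson step: (I + i dt H / 2) psi_new = (I - i dt H / 2) psi_old  (unitary for Hermitian H).
A = np.eye(nx) + 0.5j * dt * H
B = np.eye(nx) - 0.5j * dt * H
P = np.linalg.solve(A, B)          # one-step propagator, computed once

for step in range(200):
    psi = P @ psi

norm = np.sum(np.abs(psi) ** 2) * dx
mean_x = np.sum(x * np.abs(psi) ** 2) * dx
print(f"norm={norm:.6f}  <x>={mean_x:.3f}  (expected drift ~ k0 * t = {2.0 * 200 * dt:.1f})")
```

The preserved norm and the packet drift of roughly k0 times the elapsed time provide a quick correctness check of the propagation.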
An improved method for predicting the effects of flight on jet mixing noise
NASA Technical Reports Server (NTRS)
Stone, J. R.
1979-01-01
The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at high jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature included were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. It is shown that the new method agrees better with the data base than a recently proposed SAE method.
A variable vertical resolution weather model with an explicitly resolved planetary boundary layer
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1981-01-01
A version of the fourth order weather model incorporating surface wind stress data from SEASAT A scatterometer observations is presented. The Monin-Obukhov similarity theory is used to relate winds at the top of the surface layer to surface wind stress. A reasonable approximation of the surface fluxes of heat, moisture, and momentum is obtainable using this method. A Richardson number adjustment scheme based on the ideas of Chang is used to allow for turbulence effects.
Overview of psychiatric ethics IV: the method of casuistry.
Robertson, Michael; Ryan, Christopher; Walter, Garry
2007-08-01
The aim of this paper is to describe the method of ethical analysis known as casuistry and consider its merits as a basis of ethical deliberation in psychiatry. Casuistry approximates the legal arguments of common law. It examines ethical dilemmas by adopting a taxonomic approach to 'paradigm' cases, using a technique akin to that of normative analogical reasoning. Casuistry offers a useful method of ethical reasoning by providing a practical means of evaluating the merits of a particular course of action in a particular clinical situation. As a method of ethical reasoning in psychiatry, casuistry suffers from a paucity of paradigm cases and from its failure to fully contextualize ethical dilemmas by relying on common morality theory as its basis.
Detecting Edges in Images by Use of Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steve
2003-01-01
A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that to perceive unfamiliar objects, or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
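A simplified, hedged sketch of fuzzy-reasoning edge detection is given below: gradient magnitudes are fuzzified and a single rule is defuzzified into an edge score. The membership breakpoints and the toy image are assumptions and do not reproduce the method's actual rule base.

```python
import numpy as np

# Simplified fuzzy-reasoning edge detector (illustrative; not the exact NASA rule base).
# Gradient magnitude is fuzzified into "low" and "high" memberships, and the rule
# "IF gradient is high THEN pixel is edge" is defuzzified to an edge score.
def fuzzy_edges(img, low=0.1, high=0.5):
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    grad = grad / (grad.max() + 1e-12)
    # Ramp membership for "high gradient" (assumed breakpoints low/high).
    mu_high = np.clip((grad - low) / (high - low), 0.0, 1.0)
    mu_low = 1.0 - mu_high
    # Rule aggregation and defuzzification: edge score = mu_high * 1 + mu_low * 0.
    return mu_high

# Tiny synthetic image: dark square on a bright background.
img = np.ones((16, 16))
img[4:12, 4:12] = 0.0
edges = fuzzy_edges(img)
print((edges > 0.5).astype(int))
```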
Vipsita, Swati; Rath, Santanu Kumar
2015-01-01
Protein superfamily classification deals with the problem of predicting the family membership of a newly discovered amino acid sequence. Although many alignment methods have already been developed by previous researchers, the present trend demands the application of computational intelligence techniques. As there is exponential growth in the size of biological databases, retrieval and inference of essential knowledge in the biological domain become a very cumbersome task. This problem can be handled more easily using intelligent techniques because of their tolerance for imprecision, uncertainty, approximate reasoning, and partial truth. This paper discusses the various global and local features extracted from the full-length protein sequence which are used for the approximation and generalisation of the classifier. The various parameters used for evaluating the performance of the classifiers are also discussed. Therefore, this review article can point present researchers toward improvements over the existing methods.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
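As a small illustration of approximate linguistic spatial descriptions, the sketch below defines fuzzy membership functions for "near" and "to the left" and combines them with a fuzzy AND. The membership shapes and thresholds are assumptions, not those of the model in the report.

```python
import numpy as np

# Fuzzy membership functions for approximate spatial relations (illustrative shapes).
def mu_near(distance_m, full=50.0, zero=200.0):
    """Degree to which a distance counts as 'near': 1 below `full`, 0 above `zero`."""
    return float(np.clip((zero - distance_m) / (zero - full), 0.0, 1.0))

def mu_left_of(bearing_deg, center=270.0, spread=60.0):
    """Degree to which a bearing counts as 'to the left' (triangular around `center`)."""
    return float(max(0.0, 1.0 - abs(((bearing_deg - center + 180) % 360) - 180) / spread))

# "The obstacle is near and to the left": conjunction via min (a common fuzzy AND).
distance, bearing = 80.0, 250.0
truth = min(mu_near(distance), mu_left_of(bearing))
print(f"near={mu_near(distance):.2f}  left_of={mu_left_of(bearing):.2f}  near AND left_of={truth:.2f}")
```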
1989-10-31
Indexed text fragments only (report form header garbled): the recoverable excerpt states that, unlike AI approaches based on circumscription, non-monotonic reasoning, and default reasoning, the approach is based on fuzzy logic and, more specifically, on the theory of…
Generation of tunable laser sidebands in the far-infrared region
NASA Technical Reports Server (NTRS)
Farhoomand, J.; Frerking, M. A.; Pickett, H. M.; Blake, G. A.
1985-01-01
In recent years, several techniques have been developed for the generation of tunable coherent radiation at submillimeter and far-infrared (FIR) wavelengths. The harmonic generation of conventional microwave sources has made it possible to produce spectrometers capable of continuous operation to above 1000 GHz. However, the sensitivity of such instruments drops rapidly with frequency. For this reason, a great deal of attention is given to laser-based methods, which could cover the entire FIR region. Tunable FIR radiation (approximately 100 nW) has been produced by mixing FIR molecular lasers and conventional microwave sources in both open and closed mixer mounts. The present investigation is concerned with improvements in this approach. These improvements provide approximately thirty times more output power than previous results.
Fuzzy Logic for Incidence Geometry
2016-01-01
The paper presents a mathematical framework for approximate geometric reasoning with extended objects in the context of geography, in which all entities and their relationships are described by human language. These entities could be labelled by commonly used names of landmarks, water areas, and so forth. Unlike single points that are given in Cartesian coordinates, these geographic entities are extended in space and often loosely defined, but people easily perform spatial reasoning with extended geographic objects "as if they were points." Unfortunately, to date, geographic information systems (GIS) lack the capability of geometric reasoning with extended objects. The aim of the paper is to present a mathematical apparatus for approximate geometric reasoning with extended objects that is usable in GIS. In the paper we discuss fuzzy logic (Aliev and Tserkovny, 2011) as a reasoning system for the geometry of extended objects, as well as a basis for fuzzification of the axioms of incidence geometry. The same fuzzy logic was used for fuzzification of Euclid's first postulate. A fuzzy equivalence relation, "extended lines sameness," is introduced. For its approximation we also utilize a fuzzy conditional inference, which is based on the proposed fuzzy "degree of indiscernibility" and "discernibility measure" of extended points.
Boitard, Simon; Loisel, Patrice
2007-05-01
The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetic forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time consuming than other methods such as Monte Carlo simulations.
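A one-locus simplification of the idea can be sketched with an explicit finite-difference scheme for the Kolmogorov forward equation of the neutral Wright-Fisher diffusion. The paper itself treats two linked loci with selection and mutation and uses its own discretization, so the grid, time step, and initial condition below are illustrative assumptions.

```python
import numpy as np

# Explicit finite-difference solution of the Kolmogorov forward (Fokker-Planck) equation
# for the neutral one-locus Wright-Fisher diffusion,
#   d phi / dt = 1/2 * d^2/dx^2 [ x (1 - x) phi ],
# with absorbing boundaries at x = 0 and x = 1 (time in units of 2N generations).
nx = 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 2e-4                                        # satisfies the explicit stability limit
phi = np.exp(-((x - 0.5) ** 2) / (2 * 0.02**2))  # narrow initial density around frequency 0.5
phi /= np.trapz(phi, x)

t_end, t = 0.1, 0.0
while t < t_end:
    g = x * (1.0 - x) * phi
    phi[1:-1] += 0.5 * dt / dx**2 * (g[2:] - 2.0 * g[1:-1] + g[:-2])
    phi[0] = phi[-1] = 0.0                       # absorbing boundaries (allele lost or fixed)
    t += dt

mass = np.trapz(phi, x)                          # probability that no allele has been lost yet
conditional = phi / mass                         # density conditional on still segregating
print(f"P(still segregating at t={t_end}) ~ {mass:.3f}; conditional mean ~ {np.trapz(x * conditional, x):.3f}")
```

Dividing by the remaining probability mass gives the density conditional on no allele having been lost, which is the kind of conditional density the abstract refers to, here in the simplest one-locus setting.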
Sengupta, Aritra; Foster, Scott D.; Patterson, Toby A.; Bravington, Mark
2012-01-01
Data assimilation is a crucial aspect of modern oceanography. It allows the future forecasting and backward smoothing of the ocean state from noisy observations. Statistical methods are employed to perform these tasks and are often based on or related to the Kalman filter. Typically, Kalman filters assume that the locations associated with observations are known with certainty. This is reasonable for typical oceanographic measurement methods. Recently, however, an alternative and abundant source of data comes from the deployment of ocean sensors on marine animals. This source of data has some attractive properties: unlike traditional oceanographic collection platforms, it is relatively cheap to collect, plentiful, has multiple scientific uses and users, and samples areas of the ocean that are often difficult or costly to sample. However, inherent uncertainty in the location of the observations is a barrier to full utilisation of animal-borne sensor data in data-assimilation schemes. In this article we examine this issue and suggest a simple approximation to explicitly incorporate the location uncertainty, while staying within the scope of Kalman-filter-like methods. The approximation stems from a Taylor-series approximation to elements of the updating equation.
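A toy scalar illustration of the suggested approximation: a first-order Taylor expansion of the observed field around the nominal location inflates the effective observation error variance by the squared local field gradient times the location variance, which then enters a standard Kalman update. All numbers below are assumptions, not values from the article.

```python
import numpy as np

# Toy illustration of folding location uncertainty into a Kalman-type update.
field_gradient = 0.05       # deg C per km, local spatial gradient of the field (assumed)
sigma_loc_km = 15.0         # std. dev. of the animal-borne sensor's position (assumed)
sigma_obs = 0.2             # instrument error std. dev. in deg C (assumed)

# Effective observation variance after the first-order Taylor-series correction.
r_naive = sigma_obs**2
r_effective = sigma_obs**2 + (field_gradient * sigma_loc_km) ** 2

# Scalar Kalman update of a prior estimate with each choice of R.
x_prior, p_prior = 12.0, 1.0**2        # prior mean and variance of the field value (assumed)
y_obs = 14.0                           # observed value (assumed)

for label, r in [("ignore location error", r_naive), ("inflated R", r_effective)]:
    k = p_prior / (p_prior + r)                    # Kalman gain
    x_post = x_prior + k * (y_obs - x_prior)
    p_post = (1 - k) * p_prior
    print(f"{label:>22}: gain={k:.3f}  posterior mean={x_post:.2f}  posterior var={p_post:.3f}")
```

The inflated observation variance reduces the gain, so an observation from an uncertain location pulls the analysis less strongly, which is the intended effect of the approximation.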
Logo recognition in video by line profile classification
NASA Astrophysics Data System (ADS)
den Hollander, Richard J. M.; Hanjalic, Alan
2003-12-01
We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.
Coherent Anomaly Method Calculation on the Cluster Variation Method. II.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
The critical exponents of the bond percolation model are calculated in the D(= 2,3,…)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of critical exponents α, β, γ and ν in comparison with ones estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of ghost field without recourse to the s→1 limit of the s-state Potts model.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use, replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lie in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, J. N.; Smith, T. M.; Cyr, E. C.
A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling, is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-basedmore » computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.« less
Taking stock of medication wastage: Unused medications in US households.
Law, Anandi V; Sakharkar, Prashant; Zargarzadeh, Amir; Tai, Bik Wai Bilvick; Hess, Karl; Hata, Micah; Mireles, Rudolph; Ha, Carolyn; Park, Tony J
2015-01-01
Despite the potential deleterious impact on patient safety, environmental safety and health care expenditures, the extent of unused prescription medications in US households and the reasons for nonuse remain unknown. To estimate the extent, type and cost of unused medications and the reasons for their nonuse among US households. A cross-sectional, observational two-phased study was conducted using a convenience sample in Southern California. A web-based survey (Phase I, n = 238) at one health sciences institution and a paper-based survey (Phase II, n = 68) at planned drug take-back events at three community pharmacies were conducted. The extent, type, and cost of unused medications and the reasons for their nonuse were collected. Approximately 2 of 3 prescription medications were reported unused; disease/condition improved (42.4%), forgetfulness (5.8%) and side effects (6.5%) were the reasons cited for their nonuse. "Throwing medications in the trash" was found to be the most common method of disposal (63%). In Phase I, pain medications (23.3%) and antibiotics (18%) were most commonly reported as unused, whereas in Phase II, 17% of medications for chronic conditions (hypertension, diabetes, cholesterol, heart disease) and 8.3% for mental health problems were commonly reported as unused. Phase II participants indicated the pharmacy as a preferred location for drug disposal. The total estimated cost of unused medications was approximately $59,264.20 (average retail Rx price) to $152,014.89 (AWP) across both phases, borne largely by private health insurance. When extrapolated to a national level, this was approximately $2.4B for the elderly taking five prescription medications to $5.4B for the 52% of US adults who take one prescription medication daily. Two out of three dispensed medications were unused, with national projected costs ranging from $2.4B to $5.4B. This wastage raises concerns about adherence, cost and safety; additionally, it points to the need for public awareness and policy to reduce wastage. Pharmacists can play an important role by educating patients both on appropriate medication use and disposal. Copyright © 2015 Elsevier Inc. All rights reserved.
Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria
2016-03-01
The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulence-radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments, whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and the grey medium approximation. These approximations significantly affect the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produce in the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models properly describing self-absorption should be considered at above-atmospheric pressure.
Research of Litchi Diseases Diagnosis Expertsystem Based on Rbr and Cbr
NASA Astrophysics Data System (ADS)
Xu, Bing; Liu, Liqun
To overcome the bottlenecks of traditional rule-based disease diagnosis systems, such as low reasoning efficiency and lack of flexibility, this work investigated the integration of case-based reasoning (CBR) and rule-based reasoning (RBR) and put forward a litchi disease diagnosis expert system (LDDES) with an integrated reasoning method. The method uses data mining and knowledge acquisition technology to establish the knowledge base and case library. It adopts rules to guide retrieval and matching for CBR, and uses association rules and decision-tree algorithms to calculate case similarity. The experiment shows that the method can increase the system's flexibility and reasoning ability and improve the accuracy of litchi disease diagnosis.
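As an illustration of the CBR retrieval step described above, the following sketch scores stored cases against a query with a weighted match over symptom features; the feature names, weights and cases are hypothetical placeholders, not values from the LDDES.

```python
# Minimal sketch of CBR case retrieval by weighted similarity.
# Feature names, weights and cases are illustrative placeholders,
# not values from the litchi diagnosis system described above.

def case_similarity(query, case, weights):
    """Weighted mean of per-feature matches (1.0 = identical symptom)."""
    total = sum(weights.values())
    score = sum(w * (1.0 if query.get(f) == case["symptoms"].get(f) else 0.0)
                for f, w in weights.items())
    return score / total

case_library = [
    {"disease": "downy blight", "symptoms": {"leaf_spots": "brown", "fruit_rot": True}},
    {"disease": "anthracnose",  "symptoms": {"leaf_spots": "dark",  "fruit_rot": False}},
]
weights = {"leaf_spots": 2.0, "fruit_rot": 1.0}
query = {"leaf_spots": "brown", "fruit_rot": True}

best = max(case_library, key=lambda c: case_similarity(query, c, weights))
print(best["disease"], case_similarity(query, best, weights))
```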
[Theory, method and application of method R on estimation of (co)variance components].
Liu, Wen-Zhong
2004-07-01
Theory, method and application of Method R on estimation of (co)variance components were reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R is best suited to larger datasets. It is necessary to study its theoretical properties and broaden its application range further.
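To make the central quantity of Method R concrete, the sketch below computes an R value as the no-intercept regression of full-data predictions of random effects on predictions from a random subset. The prediction vectors here are random stand-ins rather than solutions of actual mixed model equations, which in practice would come from a BLUP solver (e.g. preconditioned conjugate gradient).

```python
# Minimal sketch of the "R value" at the heart of Method R.  The BLUP solver
# is not reproduced here; u_full and u_sub below are stand-ins for random-effect
# predictions from the complete data and from a random half of the data.
import numpy as np

def r_value(u_full, u_sub):
    """No-intercept regression of full-data predictions on subset predictions."""
    u_full = np.asarray(u_full, float)
    u_sub = np.asarray(u_sub, float)
    return np.dot(u_sub, u_full) / np.dot(u_sub, u_sub)

# In Method R, the assumed variance components are adjusted until the average
# R over repeated random subsets equals one.
rng = np.random.default_rng(0)
u_true = rng.normal(size=500)
u_full = u_true + 0.1 * rng.normal(size=500)   # stand-in for full-data predictions
u_sub  = u_true + 0.3 * rng.normal(size=500)   # stand-in for half-data predictions
print(r_value(u_full, u_sub))
```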
NASA Technical Reports Server (NTRS)
Hoebel, Louis J.
1993-01-01
The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM involves receiving data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and whether execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need to be used and are shown to be useful for the anytime nature of PEM.
Approximate reasoning-based learning and control for proximity operations and docking in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Jani, Yashvant; Lea, Robert N.
1991-01-01
A recently proposed hybrid neural-network and fuzzy-logic-control architecture is applied to a fuzzy logic controller developed for attitude control of the Space Shuttle. A model using reinforcement learning and learning from past experience for fine-tuning its knowledge base is proposed. The two main components of this approximate reasoning-based intelligent control (ARIC) model - an action-state evaluation network and an action selection network - are described, as well as the Space Shuttle attitude controller. An ARIC model for the controller is presented, and it is noted that the input layer in each network includes three nodes representing the angle error, the angle error rate, and a bias node. Preliminary results indicate that the controller can hold the pitch rate within its desired deadband and starts to use the jets at about 500 sec into the run.
NASA Astrophysics Data System (ADS)
Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya
1991-10-01
The critical exponents of the bond percolation model are calculated in the D(=2, 3, …)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of the critical exponents α, β, γ and ν in comparison with those estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of a ghost field without recourse to the s→1 limit of the s-state Potts model.
Application of two direct runoff prediction methods in Puerto Rico
Sepulveda, N.
1997-01-01
Two methods for predicting direct runoff from rainfall data were applied to several basins and the resulting hydrographs compared to measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method shows how the resulting hydraulic routing flow equation from a kinematic wave approximation is solved using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak although the much faster unit hydrograph method also yields reasonable results.
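The unit-hydrograph step of the first method is, at its core, a discrete convolution of the excess rainfall hyetograph with the unit hydrograph ordinates. A minimal sketch follows; the ordinates and excess-rainfall values are illustrative, not the geomorphology-based unit hydrograph of the study.

```python
# Minimal sketch of direct-runoff prediction by convolving excess rainfall
# with a unit hydrograph.  Values are illustrative placeholders.
import numpy as np

unit_hydrograph = np.array([0.1, 0.3, 0.35, 0.15, 0.07, 0.03])  # response per unit excess depth
excess_rainfall = np.array([0.0, 2.0, 5.0, 1.0, 0.0])            # excess depth per time step

direct_runoff = np.convolve(excess_rainfall, unit_hydrograph)
print(direct_runoff)  # hydrograph ordinates at successive time steps
```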
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
NASA Astrophysics Data System (ADS)
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
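The iterative solve mentioned above amounts to symmetric Gauss-Seidel sweeps on the global error system. The sketch below shows a forward/backward sweep on a small dense stand-in system, not the hierarchical-basis error equations themselves.

```python
# Minimal sketch of a symmetric Gauss-Seidel sweep (forward then backward).
# A and b are generic stand-ins for the global error-equation system.
import numpy as np

def symmetric_gauss_seidel(A, b, x0, sweeps=3):
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):                      # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):            # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(symmetric_gauss_seidel(A, b, np.zeros(3)))
```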
Learning and tuning fuzzy logic controllers through reinforcements.
Berenji, H R; Khedkar, P
1992-01-01
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Cadeddu, Maria P.; Marchand, Roger; Orlandi, Emiliano; ...
2017-08-11
Satellite and ground-based microwave radiometers are routinely used for the retrieval of liquid water path (LWP) under all atmospheric conditions. The retrieval of water vapor and LWP from ground-based radiometers during rain has proved to be a difficult challenge for two principal reasons: the inadequacy of the nonscattering approximation in precipitating clouds and the deposition of rain drops on the instrument's radome. In this paper, we combine model computations and real ground-based, zenith-viewing passive microwave radiometer brightness temperature measurements to investigate how total, cloud, and rain LWP retrievals are affected by assumptions on the cloud drop size distribution (DSD) and under which conditions a nonscattering approximation can be considered reasonably accurate. Results show that, as long as the drop effective diameter remains below approximately 200 μm, a nonscattering approximation yields results that are still accurate at frequencies below 90 GHz. For larger drop sizes, it is shown that higher microwave frequencies contain useful information that can be used to separate cloud and rain LWP provided that the vertical distribution of hydrometeors, as well as the DSD, is reasonably known. The choice of the DSD parameters becomes important to ensure retrievals that are consistent with the measurements. A physical retrieval is tested on a synthetic data set and is then used to retrieve total, cloud, and rain LWP from radiometric measurements during two drizzling cases at the Atmospheric Radiation Measurement (ARM) Eastern North Atlantic site.
Müsken, Mathias; Di Fiore, Stefano; Römling, Ute; Häussler, Susanne
2010-08-01
A major reason for bacterial persistence during chronic infections is the survival of bacteria within biofilm structures, which protect cells from environmental stresses, host immune responses and antimicrobial therapy. Thus, there is concern that laboratory methods developed to measure the antibiotic susceptibility of planktonic bacteria may not be relevant to chronic biofilm infections, and it has been suggested that alternative methods should test antibiotic susceptibility within a biofilm. In this paper, we describe a fast and reliable protocol for using 96-well microtiter plates for the formation of Pseudomonas aeruginosa biofilms; the method is easily adaptable for antimicrobial susceptibility testing. This method is based on bacterial viability staining in combination with automated confocal laser scanning microscopy. The procedure simplifies qualitative and quantitative evaluation of biofilms and has proven to be effective for standardized determination of antibiotic efficiency on P. aeruginosa biofilms. The protocol can be performed within approximately 60 h.
A comparative study of an ABC and an artificial absorber for truncating finite element meshes
NASA Technical Reports Server (NTRS)
Oezdemir, T.; Volakis, John L.
1993-01-01
The type of mesh termination used in the context of finite element formulations plays a major role on the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.
Automatic Detection of Driver Fatigue Using Driving Operation Information for Transportation Safety
Li, Zuojin; Chen, Liukui; Peng, Jun; Wu, Ying
2017-01-01
Fatigued driving is a major cause of road accidents. For this reason, the method in this paper uses steering wheel angle (SWA) and yaw angle (YA) information recorded under real driving conditions to detect drivers' fatigue levels. It analyzes the operation features of SWA and YA under different fatigue statuses, then calculates the approximate entropy (ApEn) features over a short sliding window on the time series. Using the nonlinear feature construction theory of dynamic time series, and taking the fatigue features as input, a “2-6-6-3” multi-level back propagation (BP) neural network classifier is designed to realize fatigue detection. An approximately 15-h experiment was carried out on a real road, and the data retrieved were segmented and labeled with three fatigue levels after expert evaluation, namely “awake”, “drowsy” and “very drowsy”. An average accuracy of 88.02% in fatigue identification was achieved in the experiment, endorsing the value of the proposed method for engineering applications. PMID:28587072
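The ApEn feature referred to above can be computed as follows; this is the standard approximate-entropy definition on a sliding window, with illustrative data and parameters m and r rather than the paper's exact settings.

```python
# Minimal sketch of approximate entropy (ApEn) on a short window of a
# steering/yaw-angle time series.  Window data, m and r are illustrative.
import numpy as np

def approximate_entropy(x, m=2, r=None):
    x = np.asarray(x, float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        fractions = []
        for t in templates:
            dist = np.max(np.abs(templates - t), axis=1)   # Chebyshev distance
            fractions.append(np.mean(dist <= r))            # includes self-match
        return np.mean(np.log(fractions))

    return phi(m) - phi(m + 1)

window = np.sin(np.linspace(0, 8 * np.pi, 120)) + 0.05 * np.random.default_rng(1).normal(size=120)
print(approximate_entropy(window))
```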
Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity
NASA Astrophysics Data System (ADS)
Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad
2017-12-01
In a second-order ordinary differential equation with an anti-symmetric quadratic nonlinearity, the nonlinear term changes sign. Oscillators with an anti-symmetric quadratic nonlinearity are therefore assumed to oscillate differently in the positive and negative directions. For this reason, the harmonic balance method (HBM) cannot be applied directly. The main purpose of the present paper is to propose an analytical approximation technique based on the HBM for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with an anti-symmetric quadratic nonlinearity. After applying the HBM, a set of complicated nonlinear algebraic equations is found. An analytical approach is not always fruitful for solving such nonlinear algebraic equations. In this article, two small parameters are found for which a power series solution produces the desired results. Moreover, the amplitude-frequency relationship has also been determined in a novel analytical way. The presented technique gives excellent results as compared with the corresponding numerical results and is better than existing ones.
Origin of spin reorientation transitions in antiferromagnetic MnPt-based alloys
NASA Astrophysics Data System (ADS)
Chang, P.-H.; Zhuravlev, I. A.; Belashchenko, K. D.
2018-04-01
Antiferromagnetic MnPt exhibits a spin reorientation transition (SRT) as a function of temperature, and off-stoichiometric Mn-Pt alloys also display SRTs as a function of concentration. The magnetocrystalline anisotropy in these alloys is studied using first-principles calculations based on the coherent potential approximation and the disordered local moment method. The anisotropy is fairly small and sensitive to the variations in composition and temperature due to the cancellation of large contributions from different parts of the Brillouin zone. Concentration and temperature-driven SRTs are found in reasonable agreement with experimental data. Contributions from specific band-structure features are identified and used to explain the origin of the SRTs.
A machine independent expert system for diagnosing environmentally induced spacecraft anomalies
NASA Technical Reports Server (NTRS)
Rolincik, Mark J.
1991-01-01
A new rule-based, machine-independent analytical tool for diagnosing spacecraft anomalies, the EnviroNET expert system, was developed. Expert systems provide an effective method for storing knowledge, allow computers to sift through large amounts of data pinpointing significant parts, and, most importantly, use heuristics in addition to algorithms, which allows approximate reasoning and inference and the ability to attack problems that are not rigidly defined. The EnviroNET expert system knowledge base currently contains over two hundred rules, and links to databases which include past environmental data, satellite data, and previously known anomalies. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose.
Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model
NASA Astrophysics Data System (ADS)
Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott
2017-08-01
One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation or the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be more than 3800.
Quasiparticle self-consistent GW method for the spectral properties of complex materials.
Bruneval, Fabien; Gatti, Matteo
2014-01-01
The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: this is the so-called G0W0 scheme. However, the G0W0 approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G0W0 at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.
Learning deep similarity in fundus photography
NASA Astrophysics Data System (ADS)
Chudzik, Piotr; Al-Diri, Bashir; Caliva, Francesco; Ometto, Giovanni; Hunter, Andrew
2017-02-01
Similarity learning is one of the most fundamental tasks in image analysis. The ability to extract similar images in the medical domain as part of content-based image retrieval (CBIR) systems has been researched for many years. The vast majority of methods used in CBIR systems are based on hand-crafted feature descriptors. The approximation of a similarity mapping for medical images is difficult due to the large variety of pixel-level structures of interest. In fundus photography (FP) analysis, a subtle difference in, e.g., lesion and vessel shape and size can result in a different diagnosis. In this work, we demonstrated how to learn a similarity function for image patches derived directly from FP image data without the need for manually designed feature descriptors. We used a convolutional neural network (CNN) with a novel architecture adapted for similarity learning to accomplish this task. Furthermore, we explored and studied multiple CNN architectures. We show that our method can approximate the similarity between FP patches more efficiently and accurately than state-of-the-art feature descriptors, including SIFT and SURF, using a publicly available dataset. Finally, we observe that our approach, which is purely data-driven, learns that features such as vessel calibre and orientation are important discriminative factors, which resembles the way humans reason about similarity. To the best of the authors' knowledge, this is the first attempt to approximate a visual similarity mapping in FP.
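A common way to train such a patch-similarity network is with a contrastive loss on pairs of patch embeddings. The sketch below shows that loss on random stand-in embeddings; it is an assumed illustration only, since the paper's specific architecture and objective are not reproduced here.

```python
# Minimal sketch of a contrastive loss for similarity learning: embeddings of
# matching patches are pulled together, non-matching ones pushed apart.
# The embeddings and labels are random stand-ins.
import numpy as np

def contrastive_loss(emb_a, emb_b, same_label, margin=1.0):
    """same_label = 1 for patch pairs that should match, 0 otherwise."""
    d = np.linalg.norm(emb_a - emb_b, axis=1)                      # Euclidean distance
    loss_pos = same_label * d ** 2                                  # pull matching pairs
    loss_neg = (1 - same_label) * np.maximum(margin - d, 0) ** 2    # push others apart
    return np.mean(0.5 * (loss_pos + loss_neg))

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(8, 32))
emb_b = emb_a + 0.1 * rng.normal(size=(8, 32))
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(contrastive_loss(emb_a, emb_b, labels))
```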
Testing actinide fission yield treatment in CINDER90 for use in MCNP6 burnup calculations
Fensin, Michael Lorne; Umbel, Marissa
2015-09-18
Most of the development of the MCNPX/6 burnup capability focused on features that were applied to the Boltzmann transport or used to prepare coefficients for use in CINDER90, with little change to CINDER90 or the CINDER90 data. Though a scheme exists for best solving the coupled Boltzmann and Bateman equations, the most significant approximation is that the employed nuclear data are correct and complete. The CINDER90 library file contains 60 different actinide fission yields encompassing 36 fissionable actinides (thermal, fast, high energy and spontaneous fission). Fission reaction data exist for more than 60 actinides and, as a result, fission yield data must be approximated for actinides that do not possess fission yield information. Several types of approximations are used for estimating fission yields for actinides which do not possess explicit fission yield data. The objective of this study is to test whether or not certain approximations of fission yield selection have any impact on the predictability of major actinides and fission products. Further, we assess which other fission products, available in MCNP6 Tier 3, result in the largest difference in production. Because the CINDER90 library file is in ASCII format and therefore easily amendable, we assess reasons for choosing among, and compare actinide and major fission product predictions for the H. B. Robinson benchmark using, three separate fission yield selection methods: (1) the current CINDER90 library file method (Base); (2) the element method (Element); and (3) the isobar method (Isobar). Results show that the three methods tested result in similar prediction of major actinides, Tc-99 and Cs-137; however, certain fission products resulted in significantly different production depending on the method of choice.
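One plausible reading of the "Element" and "Isobar" selection schemes named above is sketched below: when an actinide lacks explicit yield data, a donor yield set is chosen either from the same element with the nearest mass, or from the nuclide with the nearest mass number regardless of element. Both the available-yield list and this interpretation are assumptions for illustration, not the actual CINDER90 logic or library contents.

```python
# Illustrative sketch of surrogate fission-yield selection; (Z, A) pairs and
# the selection semantics are assumed for illustration only.

available = [(92, 235), (92, 238), (94, 239), (94, 241), (96, 244)]  # nuclides assumed to have yields

def element_method(z, a):
    """Prefer a donor of the same element (Z) with the nearest mass number."""
    same_z = [za for za in available if za[0] == z]
    pool = same_z if same_z else available
    return min(pool, key=lambda za: abs(za[1] - a))

def isobar_method(z, a):
    """Prefer the donor with the nearest mass number A, regardless of element."""
    return min(available, key=lambda za: abs(za[1] - a))

print(element_method(92, 236))   # -> (92, 235): same element, nearest mass
print(isobar_method(93, 237))    # -> (92, 238): nearest mass number overall
```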
Refining fuzzy logic controllers with machine learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1994-01-01
In this paper, we describe the GARIC (Generalized Approximate Reasoning-Based Intelligent Control) architecture, which learns from its past performance and modifies the labels in the fuzzy rules to improve performance. It uses fuzzy reinforcement learning which is a hybrid method of fuzzy logic and reinforcement learning. This technology can simplify and automate the application of fuzzy logic control to a variety of systems. GARIC has been applied in simulation studies of the Space Shuttle rendezvous and docking experiments. It has the potential of being applied in other aerospace systems as well as in consumer products such as appliances, cameras, and cars.
Advanced Concepts and Methods of Approximate Reasoning
1989-12-01
immeasurably by numerous conversations and discussions with Nadal Battle, Hamid Berenji, Piero Bonissone, Bernadette Bouchon-Meunier, Miguel Delgado, Di...comments of Claudi Alsina, Hamid Berenji, Piero Bonissone, Didier Dubois, Francesc Esteva, Oscar Firschein, Marty Fischler, Pascal Fua, Maria Angeles
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to enforce the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The accuracy and efficiency on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
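The generalized p-shrinkage mapping used for the decoupled subproblems can be sketched as below, following the p-shrinkage operator proposed by Chartrand, which reduces to ordinary soft thresholding at p = 1; the threshold and p values are illustrative and the paper's exact operator may differ.

```python
# Minimal sketch of a generalized p-shrinkage mapping (Chartrand-style):
# S_p(x, lam) = sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
import numpy as np

def p_shrinkage(x, lam, p):
    x = np.asarray(x, float)
    mag = np.abs(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = np.maximum(mag - lam ** (2 - p) * mag ** (p - 1), 0.0)
    return np.sign(x) * shrunk

x = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
print(p_shrinkage(x, lam=0.4, p=1.0))   # reduces to soft thresholding
print(p_shrinkage(x, lam=0.4, p=0.5))   # stronger sparsity promotion
```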
Counterfactual reasoning: From childhood to adulthood
Rafetseder, Eva; Schwitalla, Maria; Perner, Josef
2013-01-01
The objective of this study was to describe the developmental progression of counterfactual reasoning from childhood to adulthood. In contrast to the traditional view, it was recently reported by Rafetseder and colleagues that even a majority of 6-year-old children do not engage in counterfactual reasoning when asked counterfactual questions (Child Development, 2010, Vol. 81, pp. 376–389). By continuing to use the same method, the main result of the current Study 1 was that performance of the 9- to 11-year-olds was comparable to that of the 6-year-olds, whereas the 12- to 14-year-olds approximated adult performance. Study 2, using an intuitively simpler task based on Harris and colleagues (Cognition, 1996, Vol. 61, pp. 233–259), resulted in a similar conclusion, specifically that the ability to apply counterfactual reasoning is not fully developed in all children before 12 years of age. We conclude that children who failed our tasks seem to lack an understanding of what needs to be changed (events that are causally dependent on the counterfactual assumption) and what needs to be left unchanged and so needs to be kept as it actually happened. Alternative explanations, particularly executive functioning, are discussed in detail. PMID:23219156
NASA Astrophysics Data System (ADS)
Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.
2018-03-01
The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.
A Summary of Research in Science Education--1984.
ERIC Educational Resources Information Center
Lawson, Anton E.; And Others
This review covers approximately 300 studies, including journal articles, dissertations, and papers presented at conferences. The studies are organized under these major headings: status surveys; scientific reasoning; elementary school science (student achievement, student conceptions/misconceptions, student curiosity/attitudes, teaching methods,…
Williams, Rebecca J.; Tse, Tony; DiPiazza, Katelyn; Zarin, Deborah A.
2015-01-01
Background Clinical trials that end prematurely (or “terminate”) raise financial, ethical, and scientific concerns. The extent to which the results of such trials are disseminated and the reasons for termination have not been well characterized. Methods and Findings A cross-sectional, descriptive study of terminated clinical trials posted on the ClinicalTrials.gov results database as of February 2013 was conducted. The main outcomes were to characterize the availability of primary outcome data on ClinicalTrials.gov and in the published literature and to identify the reasons for trial termination. Approximately 12% of trials with results posted on the ClinicalTrials.gov results database (905/7,646) were terminated. Most trials were terminated for reasons other than accumulated data from the trial (68%; 619/905), with an insufficient rate of accrual being the lead reason for termination among these trials (57%; 350/619). Of the remaining trials, 21% (193/905) were terminated based on data from the trial (findings of efficacy or toxicity) and 10% (93/905) did not specify a reason. Overall, data for a primary outcome measure were available on ClinicalTrials.gov and in the published literature for 72% (648/905) and 22% (198/905) of trials, respectively. Primary outcome data were reported on the ClinicalTrials.gov results database and in the published literature more frequently (91% and 46%, respectively) when the decision to terminate was based on data from the trial. Conclusions Trials terminate for a variety of reasons, not all of which reflect failures in the process or an inability to achieve the intended goals. Primary outcome data were reported most often when termination was based on data from the trial. Further research is needed to identify best practices for disseminating the experience and data resulting from terminated trials in order to help ensure maximal societal benefit from the investments of trial participants and others involved with the study. PMID:26011295
Comparison of Direct Solar Energy to Resistance Heating for Carbothermal Reduction of Regolith
NASA Technical Reports Server (NTRS)
Muscatello, Anthony C.; Gustafson, Robert J.
2011-01-01
A comparison of two methods of delivering thermal energy to regolith for the carbothermal reduction process has been performed. The comparison concludes that electrical resistance heating is superior to direct solar energy via solar concentrators for the following reasons: (1) the resistance heating method can process approximately 12 times as much regolith using the same amount of thermal energy as the direct solar energy method because of superior thermal insulation; (2) the resistance heating method is more adaptable to nearer-term robotic exploration precursor missions because it does not require a solar concentrator system; (3) crucible-based methods are more easily adapted to separation of iron metal and glass by-products than direct solar energy because the melt can be poured directly after processing instead of being remelted; and (4) even with projected improvements in the mass of solar concentrators, projected photovoltaic system masses are expected to be even lower.
NASA Astrophysics Data System (ADS)
Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao
2017-05-01
To describe the complicated nonlinear process of the fatigue short crack evolution behavior, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated based on the replica fatigue short crack test with nine smooth funnel-shaped specimens and the observation of the replica films according to the effective short fatigue cracks principle. Due to the fast decay and the nonlinear approximation ability of wavelet analysis, the self-learning ability of neural network, and the macroscopic searching and global optimization of genetic algorithm, the genetic wavelet neural network can reflect the implicit complex nonlinear relationship when considering multi-influencing factors synthetically. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared by the Genetic Wavelet Neural Network. The simulation results show that Genetic Wavelet Neural Network is a rational and available method for studying the evolution behavior of fatigue short crack propagation rate. Meanwhile, a traditional data fitting method for a short crack growth model is also utilized for fitting the test data. It is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects by these two methods is interpreted.
New active substances authorized in the United Kingdom between 1972 and 1994
Jefferys, David B; Leakey, Diane; Lewis, John A; Payne, Sandra; Rawlins, Michael D
1998-01-01
Aims The study was undertaken to assemble a list of all new active medicinal substances authorised in the United Kingdom between 1972 and 1994; to assess whether the pattern of introductions had changed; and to examine withdrawal rates and the reasons for withdrawal. Methods The identities of those new active substances whose manufacturers had obtained Product Licences between 1972 and 1994 were sought from the Medicines Control Agency's product data-base. For each substance relevant information was retrieved including the year of granting the Product Licence, its therapeutic class, whether currently authorised (and, if not, reason for withdrawal), and its nature (chemical, biological etc.). Results The Medicines Control Agency's data-base was cross-checked against two other data-bases for completeness. A total of 583 new active substances (in 579 products) were found to have been authorised over the study period. The annual rates of authorisation varied widely (9 to 40 per year). Whilst there was no evidence for any overall change in the annual rates of authorising new chemical entities, there has been a trend for increasing numbers of new products of biological origin to be authorised in recent years. Fifty-nine of the 583 new active substances have been withdrawn (1 each for quality and efficacy, 22 for safety, and 35 for commercial reasons). Conclusions For reasons that are unclear there is marked heterogeneity in the annual rates of authorisation of new active substances. Their 10 year survival is approximately 88% with withdrawals being, predominantly, for commercial or safety reasons. This confirms the provisional nature of assessments about safety at the time when a new active substance is introduced into routine clinical practice, and emphasises the importance of pharmacovigilance. PMID:9491828
Temporal Large-Eddy Simulation
NASA Technical Reports Server (NTRS)
Pruett, C. D.; Thomas, B. C.
2004-01-01
In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
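The defiltering idea behind the ADM (and its temporal variant) can be illustrated with the truncated van Cittert series Q_N = sum_{k=0..N} (I - G)^k applied to a filtered field. The discrete top-hat filter and test signal below are assumptions for illustration, not the filters used in the paper.

```python
# Minimal sketch of approximate deconvolution: Q_N applied to a filtered
# field recovers an approximation of the unfiltered field.
import numpy as np

def tophat_filter(u):
    """Simple three-point top-hat filter with periodic wrap-around."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def approximate_deconvolution(u_bar, N=5):
    """Apply Q_N = sum_{k=0}^{N} (I - G)^k to the filtered field u_bar."""
    term = u_bar.copy()
    u_star = u_bar.copy()
    for _ in range(N):
        term = term - tophat_filter(term)     # (I - G) applied repeatedly
        u_star = u_star + term
    return u_star

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(5 * x)
u_bar = tophat_filter(u)
u_star = approximate_deconvolution(u_bar)
print(np.max(np.abs(u - u_bar)), np.max(np.abs(u - u_star)))  # deconvolved field is much closer
```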
Module Extraction for Efficient Object Queries over Ontologies with Large ABoxes
Xu, Jia; Shironoshita, Patrick; Visser, Ubbo; John, Nigel; Kabuka, Mansur
2015-01-01
The extraction of logically-independent fragments out of an ontology ABox can be useful for solving the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module, such that it guarantees complete preservation of facts about a given set of individuals, and thus can be reasoned independently w.r.t. the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such an ABox module, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and the time for ontology reasoning based on ABox modules can be improved significantly. PMID:26848490
NASA Astrophysics Data System (ADS)
Tao, Guohua
2017-07-01
A general theoretical framework is derived for the recently developed multi-state trajectory (MST) approach from the time-dependent Schrödinger equation, resulting in equations of motion for coupled nuclear-electronic dynamics equivalent to Hamilton dynamics or the Heisenberg equation based on a new multistate Meyer-Miller (MM) model. The derived MST formalism incorporates both diabatic and adiabatic representations as limiting cases and reduces to Ehrenfest or Born-Oppenheimer dynamics in the mean-field or single-state limits, respectively. In the general multistate formalism, nuclear dynamics is represented in terms of a set of individual state-specific trajectories, while in the active state trajectory (AST) approximation, only one single nuclear trajectory on the active state is propagated, with its augmented images running on all other states. The AST approximation combines the advantages of consistent nuclear-coupled electronic dynamics in the MM model and the single nuclear trajectory in the trajectory surface hopping (TSH) treatment and therefore may provide a potential alternative to both the Ehrenfest and TSH methods. The resulting algorithm features a consistent description of coupled electronic-nuclear dynamics and excellent numerical stability. The implementation of the MST approach to several benchmark systems involving multiple nonadiabatic transitions and conical intersections shows reasonably good agreement with exact quantum calculations, and the results in both representations are similar in accuracy. The AST treatment also reproduces the exact results reasonably, sometimes even quantitatively well, with better performance in the adiabatic representation.
PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.
Xia, Jing; Wang, Michelle Yongmei
Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the learning of the unknown static parameters, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneration of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
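At its simplest, the filtering machinery referred to above is a bootstrap particle filter. The sketch below runs one on a generic nonlinear state-space model with stand-in dynamics and observation functions; the hemodynamic (balloon) model and the sequential learning of static parameters that the paper adds are not reproduced here.

```python
# Minimal sketch of a bootstrap particle filter on a stand-in nonlinear model.
import numpy as np

rng = np.random.default_rng(0)

def propagate(x):
    """Stand-in nonlinear state transition with process noise."""
    return 0.9 * x + 0.5 * np.sin(x) + 0.1 * rng.normal(size=x.shape)

def likelihood(y, x, sigma=0.2):
    """Stand-in observation model: y = tanh(x) + Gaussian noise."""
    return np.exp(-0.5 * ((y - np.tanh(x)) / sigma) ** 2)

n_particles, n_steps = 500, 50
particles = rng.normal(size=n_particles)
truth, estimates = 0.5, []
for t in range(n_steps):
    # simulate the "true" system and a noisy observation
    truth = 0.9 * truth + 0.5 * np.sin(truth) + 0.1 * rng.normal()
    y = np.tanh(truth) + 0.2 * rng.normal()
    particles = propagate(particles)                       # predict
    weights = likelihood(y, particles)                     # weight by the observation
    weights /= weights.sum()
    estimates.append(float(np.sum(weights * particles)))   # posterior-mean estimate
    idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
    particles = particles[idx]

print(estimates[-5:])
```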
Weeks, James L
2006-06-01
The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. This policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process, and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit, it is contrary to the precautionary principle, it is not a fair sharing of the burden of uncertainty, and it employs an inappropriate standard of proof. MSHA's own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measured exposure that exceeds the exposure limit.
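For illustration only, the arithmetic behind such a criterion threshold value is a one-tailed 95% upper bound on the measurement error; the exposure limit and the coefficient of variation below are assumed numbers, not MSHA's published figures.

```python
# Illustrative arithmetic only: a CTV set at the one-tailed 95% confidence
# bound of the measurement.  The limit and CV are assumed values.
limit = 2.0          # assumed exposure limit, mg/m^3
cv = 0.10            # assumed combined sampling-and-analytical coefficient of variation
z95 = 1.645          # one-tailed 95% normal quantile

ctv = limit * (1 + z95 * cv)
print(round(ctv, 2))  # a measurement must exceed this value before a citation is issued

measured = 2.15
print(measured > limit, measured > ctv)  # above the limit, yet below the CTV
```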
Development of a polysilicon process based on chemical vapor deposition, phase 1 and phase 2
NASA Technical Reports Server (NTRS)
Plahutnik, F.; Arvidson, A.; Sawyer, D.; Sharp, K.
1982-01-01
High-purity polycrystalline silicon was produced in an experimental, intermediate and advanced CVD reactor. Data from the intermediate and advanced reactors confirmed earlier results obtained in the experimental reactor. Solar cells were fabricated by Westinghouse Electric and Applied Solar Research Corporation which met or exceeded baseline cell efficiencies. Feedstocks containing trichlorosilane or silicon tetrachloride are not viable as etch promoters to reduce silicon deposition on bell jars. Neither are they capable of meeting program goals for the 1000 MT/yr plant. Post-run CH1 etch was found to be a reasonably effective method of reducing silicon deposition on bell jars. Using dichlorosilane as feedstock met the low-cost solar array deposition goal (2.0 g h⁻¹ cm⁻¹); however, conversion efficiency was approximately 10% lower than the targeted value of 40 mole percent (32 to 36% achieved), and power consumption was approximately 20 kWh/kg over target at the reactor.
Using new aggregation operators in rule-based intelligent control
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.
1990-01-01
A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
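The OWA operator itself is a one-line computation: sort the arguments in descending order and take the weighted sum, so the choice of weight vector interpolates between pure And (min) and pure Or (max). A minimal sketch with illustrative rule strengths:

```python
# Minimal sketch of the ordered weighted averaging (OWA) operator.
import numpy as np

def owa(values, weights):
    v = np.sort(np.asarray(values, float))[::-1]   # arguments in descending order
    w = np.asarray(weights, float)
    return float(np.dot(w, v))

strengths = [0.9, 0.4, 0.7]                # illustrative rule strengths
print(owa(strengths, [1.0, 0.0, 0.0]))     # all weight on the largest -> max (pure Or)
print(owa(strengths, [0.0, 0.0, 1.0]))     # all weight on the smallest -> min (pure And)
print(owa(strengths, [0.3, 0.4, 0.3]))     # intermediate, quantifier-like behaviour
```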
NASA Astrophysics Data System (ADS)
Liu, Jian; Ren, Zhongzhou; Xu, Chang
2018-07-01
Combining the modified Skyrme-like model and the local density approximation model, the slope parameter L of symmetry energy is extracted from the properties of finite nuclei with an improved iterative method. The calculations of the iterative method are performed within the framework of the spherical symmetry. By choosing 200 neutron rich nuclei on 25 isotopic chains as candidates, the slope parameter is constrained to be 50 MeV < L < 62 MeV. The validity of this method is examined by the properties of finite nuclei. Results show that reasonable descriptions on the properties of finite nuclei and nuclear matter can be obtained together.
NASA Astrophysics Data System (ADS)
Gallup, G. A.; Gerratt, J.
1985-09-01
The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure includes also the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcements, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.
Kärkkäinen, Salme; Lantuéjoul, Christian
2007-10-01
We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximative, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal trajectory of minimum-fuel-to-climb for an aircraft. The first method is based on the adjoint method, and the second method is based on a direct trajectory optimization method using a Chebyshev polynomial approximation and cubic spline approximation. The approximate optimal trajectory will be compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.
NASA Astrophysics Data System (ADS)
Khatami, Ehsan; Macridin, Alexandru; Jarrell, Mark
2008-03-01
Recently, several authors have employed the "glue" approximation for the Cuprates in which the full pairing vertex is approximated by the spin susceptibility. We study this approximation using Quantum Monte Carlo Dynamical Cluster Approximation methods on a 2D Hubbard model. By considering a reasonable finite value for the next nearest neighbor hopping, we find that this "glue" approximation, in the current form, does not capture the correct pairing symmetry. Here, d-wave is not the leading pairing symmetry, while it is the dominant symmetry using the "exact" QMC results. We argue that the sensitivity of this approximation to band structure changes leads to this inconsistency and that this form of interaction may not be the appropriate description of the pairing mechanism in Cuprates. We suggest improvements to this approximation which help to capture the essential features of the QMC data.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
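The following sketch illustrates, for a small symmetric matrix (a simplification of the non-Hermitian setting above), how a linear Taylor approximation and a Rayleigh-quotient reanalysis approximate an eigenvalue after a design perturbation; the matrices and the perturbation are hypothetical and not taken from the report.

```python
import numpy as np

def eig_min(A):
    """Smallest eigenvalue and its (normalized) eigenvector of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return w[0], V[:, 0]

# Hypothetical parametrized symmetric matrix A(p) = A0 + p*dA.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A0 = B @ B.T + 5 * np.eye(5)
dA = np.diag([1.0, 0.5, 0.0, -0.5, -1.0])

lam0, x0 = eig_min(A0)           # baseline analysis
dlam = x0 @ dA @ x0              # eigenvalue derivative (x0 is unit-normalized)

p = 0.3
A = A0 + p * dA
lam_exact, _ = eig_min(A)
lam_taylor = lam0 + p * dlam                   # linear Taylor approximation
lam_rayleigh = (x0 @ A @ x0) / (x0 @ x0)       # Rayleigh-quotient reanalysis with frozen x0

print(lam_exact, lam_taylor, lam_rayleigh)
```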
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct method for maximizing the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. Simulated and real data are qualitatively and quantitatively evaluated to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
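One commonly used form of generalized p-shrinkage (due to Chartrand) is sketched below purely as an illustration of the kind of mapping the abstract refers to; the exact operator and parameter choices used by the authors may differ.

```python
import numpy as np

def p_shrink(x, lam, p):
    """Generalized p-shrinkage mapping (one common form; the paper's exact
    operator may differ). For p = 1 this reduces to ordinary soft thresholding."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(np.abs(x) - lam ** (2.0 - p) * np.abs(x) ** (p - 1.0), 0.0)
    return np.sign(x) * mag

x = np.array([-2.0, -1.0, -0.4, 0.4, 1.0, 2.0])
print(p_shrink(x, lam=0.5, p=1.0))   # soft thresholding
print(p_shrink(x, lam=0.5, p=0.5))   # nonconvex shrinkage: small entries zeroed, large entries shrunk less
```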
Conservative Analytical Collision Probabilities for Orbital Formation Flying
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2004-01-01
The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
Conservative Analytical Collision Probability for Design of Orbital Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell
2004-01-01
The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
DOT National Transportation Integrated Search
1992-08-26
This document provides the basic information needed to estimate a general probability of collision in Low Earth Orbit (LEO). Although the method described in this primer is a first order approximation, its results are reasonable. Furthermore, t...
An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1994-01-01
This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version, finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracies in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. The work focuses first and primarily on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.
In defence of model-based inference in phylogeography
Beaumont, Mark A.; Nielsen, Rasmus; Robert, Christian; Hey, Jody; Gaggiotti, Oscar; Knowles, Lacey; Estoup, Arnaud; Panchal, Mahesh; Corander, Jukka; Hickerson, Mike; Sisson, Scott A.; Fagundes, Nelson; Chikhi, Lounès; Beerli, Peter; Vitalis, Renaud; Cornuet, Jean-Marie; Huelsenbeck, John; Foll, Matthieu; Yang, Ziheng; Rousset, Francois; Balding, David; Excoffier, Laurent
2017-01-01
Recent papers have promoted the view that model-based methods in general, and those based on Approximate Bayesian Computation (ABC) in particular, are flawed in a number of ways, and are therefore inappropriate for the analysis of phylogeographic data. These papers further argue that Nested Clade Phylogeographic Analysis (NCPA) offers the best approach in statistical phylogeography. In order to remove the confusion and misconceptions introduced by these papers, we justify and explain the reasoning behind model-based inference. We argue that ABC is a statistically valid approach, alongside other computational statistical techniques that have been successfully used to infer parameters and compare models in population genetics. We also examine the NCPA method and highlight numerous deficiencies, whether used with single or multiple loci. We further show that the ages of clades are carelessly used to infer ages of demographic events, and that these ages are estimated under a simple model of panmixia and population stationarity but are then used under different and unspecified models to test hypotheses, a usage that invalidates these testing procedures. We conclude by encouraging researchers to study and use model-based inference in population genetics. PMID:29284924
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
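To illustrate the general idea of integrating a sensitivity equation in closed form rather than truncating a Taylor series, the sketch below uses the static tip deflection of a cantilever as a hypothetical example (not the paper's actual test case or formulas): the sensitivity equation d(delta)/dh = -3*delta/h integrates to a power law that stays accurate for much larger perturbations than the linear Taylor estimate.

```python
# Illustrative comparison, assuming a simple cantilever: tip deflection
# delta = P*L**3 / (3*E*I) with I = w*h**3/12, so d(delta)/dh = -3*delta/h.
# Integrating that sensitivity equation in closed form ("DEB-style") recovers
# the power law, while a linear Taylor series is only locally accurate.
P, L, E, w = 100.0, 1.0, 2.0e11, 0.05      # hypothetical load and section data

def deflection(h):
    I = w * h**3 / 12.0
    return P * L**3 / (3.0 * E * I)

h0 = 0.02
d0 = deflection(h0)
ddelta_dh = -3.0 * d0 / h0                  # sensitivity at the baseline design

for h in (0.022, 0.026, 0.030):             # perturbed designs
    exact = deflection(h)
    taylor = d0 + ddelta_dh * (h - h0)      # linear Taylor approximation
    deb_like = d0 * (h0 / h) ** 3           # closed form from the sensitivity ODE
    print(f"h={h:.3f}  exact={exact:.3e}  taylor={taylor:.3e}  deb-like={deb_like:.3e}")
```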
Accurate and Efficient Approximation to the Optimized Effective Potential for Exchange
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Kananenka, Alexei A.; Staroverov, Viktor N.
2013-07-01
We devise an efficient practical method for computing the Kohn-Sham exchange-correlation potential corresponding to a Hartree-Fock electron density. This potential is almost indistinguishable from the exact-exchange optimized effective potential (OEP) and, when used as an approximation to the OEP, is vastly better than all existing models. Using our method one can obtain unambiguous, nearly exact OEPs for any reasonable finite one-electron basis set at the same low cost as the Krieger-Li-Iafrate and Becke-Johnson potentials. For all practical purposes, this solves the long-standing problem of black-box construction of OEPs in exact-exchange calculations.
Sadybekov, Arman; Krylov, Anna I.
2017-07-07
A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster theory (EOMCC) and effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address poor convergence issues that are encountered for the core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with a quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of solvent, such as EFP, is essential for achieving quantitative accuracy.
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
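The Savage-Dickey identity underlying the approach can be illustrated with a one-parameter conjugate Gaussian example (this is not the SPM implementation, and the data below are invented): for the nested null H0: theta = 0, the Bayes factor in favour of H0 is the posterior density at zero divided by the prior density at zero.

```python
import numpy as np
from scipy.stats import norm

# Minimal Savage-Dickey sketch with a Gaussian prior, known noise s.d.,
# and a conjugate Gaussian posterior over the effect size theta.
prior_mu, prior_sd = 0.0, 1.0
data = np.array([0.3, 0.5, 0.1, 0.4, 0.2])       # hypothetical observations
sigma = 0.5                                      # known noise standard deviation

n = data.size
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mu = post_var * (prior_mu / prior_sd**2 + data.sum() / sigma**2)

bf01 = norm.pdf(0.0, post_mu, np.sqrt(post_var)) / norm.pdf(0.0, prior_mu, prior_sd)
print(f"Savage-Dickey BF01 = {bf01:.3f}")        # values < 1 favour the alternative
```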
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sadybekov, Arman; Krylov, Anna I.
A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster theory (EOMCC) and effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address poor convergence issues that are encountered for the core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with a quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of solvent, such as EFP, is essential for achieving quantitative accuracy.
An n -material thresholding method for improving integerness of solutions in topology optimization
Watts, Seth; Tortorelli, Daniel A.
2016-04-10
It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results, which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
Fuzziness In Approximate And Common-Sense Reasoning In Knowledge-Based Robotics Systems
NASA Astrophysics Data System (ADS)
Dodds, David R.
1987-10-01
Fuzzy functions, a major key to inexact reasoning, are described as they are applied to the fuzzification of robot co-ordinate systems. Linguistic variables, a means of labelling ranges in fuzzy sets, are used as a computationally pragmatic means of representing spatialization metaphors, themselves an extraordinarily rich basis for understanding concepts in orientational terms. Complex plans may be abstracted and simplified in a system which promotes conceptual planning by means of the orientational representation.
Case-based reasoning in design: An apologia
NASA Technical Reports Server (NTRS)
Pulaski, Kirt
1990-01-01
Three positions are presented and defended: the process of generating solutions in problem solving is viewable as a design task; case-based reasoning is a strong method of problem solving; and a synergism exists between case-based reasoning and design problem solving.
Effects of Inquiry-Based Agriscience Instruction on Student Scientific Reasoning
ERIC Educational Resources Information Center
Thoron, Andrew C.; Myers, Brian E.
2012-01-01
The purpose of this study was to determine the effect of inquiry-based agriscience instruction on student scientific reasoning. Scientific reasoning is defined as the use of the scientific method, inductive reasoning, and deductive reasoning to develop and test hypotheses. Developing scientific reasoning skills can provide learners with a connection to the…
Monte Carlo simulations of medical imaging modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, G.P.
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Volterra integral equation-factorisation method and nucleus-nucleus elastic scattering
NASA Astrophysics Data System (ADS)
Laha, U.; Majumder, M.; Bhoi, J.
2018-04-01
An approximate solution for the nuclear Hulthén plus atomic Hulthén potentials is constructed by solving the associated Volterra integral equation by the series substitution method. Within the framework of the supersymmetry-inspired factorisation method, this solution is exploited to construct higher partial wave interactions. The merit of our approach is examined by computing elastic scattering phases of the α-α system by the judicious use of the phase function method. Reasonable agreements in phase shifts are obtained with standard data.
NASA Astrophysics Data System (ADS)
Christian, Karen Jeanne
2011-12-01
Students often use study groups to prepare for class or exams; yet to date, we know very little about how these groups actually function. This study looked at the ways in which undergraduate organic chemistry students prepared for exams through self-initiated study groups. We sought to characterize the methods of social regulation, levels of content processing, and types of reasoning processes used by students within their groups. Our analysis showed that groups engaged in predominantly three types of interactions when discussing chemistry content: co-construction, teaching, and tutoring. Although each group engaged in each of these types of interactions at some point, their prevalence varied between groups and group members. Our analysis suggests that the types of interactions that were most common depended on the relative content knowledge of the group members as well as on the difficulty of the tasks in which they were engaged. Additionally, we were interested in characterizing the reasoning methods used by students within their study groups. We found that students used a combination of three content-relevant methods of reasoning: model-based reasoning, case-based reasoning, or rule-based reasoning, in conjunction with one chemically-irrelevant method of reasoning: symbol-based reasoning. The most common way for groups to reason was to use rules, whereas the least common way was for students to work from a model. In general, student reasoning correlated strongly to the subject matter to which students were paying attention, and was only weakly related to student interactions. Overall, results from this study may help instructors to construct appropriate tasks to guide what and how students study outside of the classroom. We found that students had a decidedly strategic approach in their study groups, relying heavily on material provided by their instructors, and using the reasoning strategies that resulted in the lowest levels of content processing. We suggest that instructors create more opportunities for students to explore model-based reasoning, and to create opportunities for students to be able to co-construct in a collaborative manner within the context of their organic chemistry course.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch
2016-07-21
An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately computing the transition states and reaction pathways in addition to the significant energetically low-lying local minima is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distances of the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure for the energy needed for their interconversion. This can be used to obtain a first qualitative idea on important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to make a decision if it is worthwhile or not to invest computational resources for an exact computation of the transition states and the reaction pathways. Furthermore it is demonstrated that the here presented method can be used for finding physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.
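As a simple illustration of turning a pure ranking into approximate probabilities, the sketch below uses rank-order-centroid weights for mutually exclusive events; this is a stand-in shown only for illustration, since the article's own approximation is derived from the maximum-entropy principle and may produce different values.

```python
import numpy as np

def rank_order_centroid(n):
    """Rank-order centroid weights: w_i = (1/n) * sum_{k=i}^{n} 1/k.
    A standard way to convert a pure ranking into normalized weights; the
    article's maximum-entropy algorithm is a different (and more general)
    construction, so treat these numbers as illustrative only."""
    return np.array([sum(1.0 / k for k in range(i, n + 1)) / n
                     for i in range(1, n + 1)])

# An expert ranks four mutually exclusive threats from most to least likely.
threats = ["threat A", "threat B", "threat C", "threat D"]
for name, w in zip(threats, rank_order_centroid(len(threats))):
    print(f"{name}: approximate probability {w:.3f}")
```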
A diffusion approximation for ocean wave scatterings by randomly distributed ice floes
NASA Astrophysics Data System (ADS)
Zhao, Xin; Shen, Hayley
2016-11-01
This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of wave action density in the same direction before the scattering; and the scattered part is a first order Fourier series approximation for the directional spreading caused by scattering. An additional approximation is also adopted for simplification, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required input includes the mean shear modulus, diameter and thickness of ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with the previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering into an operational wave model.
Gaussian representation of high-intensity focused ultrasound beams.
Soneson, Joshua E; Myers, Matthew R
2007-11-01
A method for fast numerical simulation of high-intensity focused ultrasound beams is derived. The method is based on the frequency-domain representation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, and assumes for each harmonic a Gaussian transverse pressure distribution at all distances from the transducer face. The beamwidths of the harmonics are constrained to vary inversely with the square root of the harmonic number, and as such this method may be viewed as an extension of a quasilinear approximation. The technique is capable of determining pressure or intensity fields of moderately nonlinear high-intensity focused ultrasound beams in water or biological tissue, usually requiring less than a minute of computer time on a modern workstation. Moreover, this method is particularly well suited to high-gain simulations since, unlike traditional finite-difference methods, it is not subject to resolution limitations in the transverse direction. Results are shown to be in reasonable agreement with numerical solutions of the full KZK equation in both tissue and water for moderately nonlinear beams.
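The transverse ansatz described above can be sketched as follows, with each harmonic assigned a Gaussian radial profile whose width scales as 1/sqrt(n); the amplitudes and fundamental beamwidth below are placeholders, since in the actual method they result from marching the frequency-domain KZK equations in range.

```python
import numpy as np

r = np.linspace(0.0, 5e-3, 200)          # radial coordinate [m]
w1 = 1.5e-3                              # fundamental beamwidth (hypothetical)
amps = [1.0, 0.35, 0.15]                 # placeholder harmonic amplitudes

# Harmonic n gets a Gaussian profile with width w1 / sqrt(n).
profiles = [a * np.exp(-(r / (w1 / np.sqrt(n))) ** 2)
            for n, a in enumerate(amps, start=1)]
for n, prof in enumerate(profiles, start=1):
    print(f"harmonic {n}: on-axis amplitude {prof[0]:.2f}, "
          f"beamwidth {w1 / np.sqrt(n) * 1e3:.2f} mm")
```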
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
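For a flavour of why l1-type fitting is more robust to outliers than l2/PCA, the sketch below fits a rank-k factorization by iteratively reweighted alternating least squares on synthetic data; this is not the alternating rectified gradient method of the paper, just a compact illustration of the underlying idea.

```python
import numpy as np

def robust_lowrank(A, k, iters=30, eps=1e-3):
    """Rank-k factorization A ~ U @ V fitted in an approximately l1 sense
    via iteratively reweighted alternating least squares (illustrative only)."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((k, n))
    ridge = 1e-8 * np.eye(k)
    for _ in range(iters):
        W = 1.0 / (np.abs(A - U @ V) + eps)          # IRLS weights ~ l1 loss
        for i in range(m):                           # update rows of U
            Wi = np.diag(W[i])
            U[i] = np.linalg.solve(V @ Wi @ V.T + ridge, V @ Wi @ A[i])
        for j in range(n):                           # update columns of V
            Wj = np.diag(W[:, j])
            V[:, j] = np.linalg.solve(U.T @ Wj @ U + ridge, U.T @ Wj @ A[:, j])
    return U, V

# Low-rank data with a few gross outliers (hypothetical example).
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))
A[5, 7] += 50.0
A[20, 2] -= 40.0
U, V = robust_lowrank(A, k=3)
print("median absolute residual:", np.median(np.abs(A - U @ V)))
```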
Elliptical optical solitary waves in a finite nematic liquid crystal cell
NASA Astrophysics Data System (ADS)
Minzoni, Antonmaria A.; Sciberras, Luke W.; Smyth, Noel F.; Worthy, Annette L.
2015-05-01
The addition of orbital angular momentum has been previously shown to stabilise beams of elliptic cross-section. In this article the evolution of such elliptical beams is explored through the use of an approximate methodology based on modulation theory. An approximate method is used as the equations that govern the optical system have no known exact solitary wave solution. This study brings to light two distinct phases in the evolution of a beam carrying orbital angular momentum. The two phases are determined by the shedding of radiation in the form of mass loss and angular momentum loss. The first phase is dominated by the shedding of angular momentum loss through spiral waves. The second phase is dominated by diffractive radiation loss which drives the elliptical solitary wave to a steady state. In addition to modulation theory, the "chirp" variational method is also used to study this evolution. Due to the significant role radiation loss plays in the evolution of an elliptical solitary wave, an attempt is made to couple radiation loss to the chirp variational method. This attempt furthers understanding as to why radiation loss cannot be coupled to the chirp method. The basic reason for this is that there is no consistent manner to match the chirp trial function to the generated radiating waves which is uniformly valid in time. Finally, full numerical solutions of the governing equations are compared with solutions obtained using the various variational approximations, with the best agreement achieved with modulation theory due to its ability to include both mass and angular momentum losses to shed diffractive radiation.
An optimal design of wind turbine and ship structure based on neuro-response surface method
NASA Astrophysics Data System (ADS)
Lee, Jae-Chul; Shin, Sung-Chul; Kim, Soo-Young
2015-07-01
The geometry of engineering systems affects their performances. For this reason, the shape of engineering systems needs to be optimized in the initial design stage. However, engineering system design problems consist of multi-objective optimization and the performance analysis using commercial code or numerical analysis is generally time-consuming. To solve these problems, many engineers perform the optimization using the approximation model (response surface). The Response Surface Method (RSM) is generally used to predict the system performance in engineering research field, but RSM presents some prediction errors for highly nonlinear systems. The major objective of this research is to establish an optimal design method for multi-objective problems and confirm its applicability. The proposed process is composed of three parts: definition of geometry, generation of response surface, and optimization process. To reduce the time for performance analysis and minimize the prediction errors, the approximation model is generated using the Backpropagation Artificial Neural Network (BPANN) which is considered as Neuro-Response Surface Method (NRSM). The optimization is done for the generated response surface by non-dominated sorting genetic algorithm-II (NSGA-II). Through case studies of marine system and ship structure (substructure of floating offshore wind turbine considering hydrodynamics performances and bulk carrier bottom stiffened panels considering structure performance), we have confirmed the applicability of the proposed method for multi-objective side constraint optimization problems.
Experimental validation of a quasi-steady theory for the flow through the glottis
NASA Astrophysics Data System (ADS)
Vilain, C. E.; Pelorson, X.; Fraysse, C.; Deverge, M.; Hirschberg, A.; Willems, J.
2004-09-01
In this paper a theoretical description of the flow through the glottis based on a quasi-steady boundary layer theory is presented. The Thwaites method is used to solve the von Kármán equations within the boundary layers. In practice this makes the theory much easier to use compared to Pohlhausen's polynomial approximations. This theoretical description is evaluated on the basis of systematic comparison with experimental data obtained under steady flow or unsteady (oscillating) flow without and with moving vocal folds. Results tend to show that the theory reasonably explains the measured data except when unsteady or viscous terms become predominant. This happens particularly during the collision of the vocal folds.
Gas Evolution Dynamics in Godunov-Type Schemes and Analysis of Numerical Shock Instability
NASA Technical Reports Server (NTRS)
Xu, Kun
1999-01-01
In this paper we are going to study the gas evolution dynamics of the exact and approximate Riemann solvers, e.g., the Flux Vector Splitting (FVS) and the Flux Difference Splitting (FDS) schemes. Since the FVS scheme and the Kinetic Flux Vector Splitting (KFVS) scheme have the same physical mechanism and similar flux function, based on the analysis of the discretized KFVS scheme the weakness and advantage of the FVS scheme are closely observed. The subtle dissipative mechanism of the Godunov method in the 2D case is also analyzed, and the physical reason for shock instability, i.e., carbuncle phenomena and odd-even decoupling, is presented.
NASA Astrophysics Data System (ADS)
Orlova, A. G.; Kirillin, M. Yu.; Volovetsky, A. B.; Shilyagina, N. Yu.; Sergeeva, E. A.; Golubiatnikov, G. Yu.; Turchin, I. V.
2017-07-01
Using diffuse optical spectroscopy, the level of oxygenation and the hemoglobin concentration in an experimental tumor have been studied in comparison with normal muscle tissue of mice. Subcutaneously growing SKBR-3 was used as a tumor model. A continuous-wave fiber-probe diffuse optical spectroscopy system was employed. The optical properties extraction approach was based on the diffusion approximation. A decreased blood oxygen saturation level and an increased total hemoglobin content were demonstrated in the neoplasm. The main reason for such differences between tumor and normal tissue was a significant elevation of deoxyhemoglobin concentration in SKBR-3. The method can be useful for diagnosis of tumors as well as for study of blood flow parameters of tumor models with different angiogenic properties.
Investigating Students' Reasoning about Acid-Base Reactions
ERIC Educational Resources Information Center
Cooper, Melanie M.; Kouyoumdjian, Hovig; Underwood, Sonia M.
2016-01-01
Acid-base chemistry is central to a wide range of reactions. If students are able to understand how and why acid-base reactions occur, it should provide a basis for reasoning about a host of other reactions. Here, we report the development of a method to characterize student reasoning about acid-base reactions based on their description of…
Using fuzzy logic to integrate neural networks and knowledge-based systems
NASA Technical Reports Server (NTRS)
Yen, John
1991-01-01
Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.
Nozzle Free Jet Flows Within the Strong Curved Shock Regime
NASA Technical Reports Server (NTRS)
Shih, Tso-Shin
1975-01-01
A study based on inviscid analysis was conducted to examine the flow field produced from a convergent-divergent nozzle when a strong curved shock occurs. It was found that a certain constraint is imposed on the flow solution of the problem which is the unique feature of the flow within this flow regime, and provides the reason why the inverse method of calculation cannot be employed for these problems. An approximate method was developed to calculate the flow field, and results were obtained for two-dimensional flows. Analysis and calculations were performed for flows with axial symmetry. It is shown that under certain conditions, the vorticity generated at the jet boundary may become infinite and the viscous effect becomes important. Under other conditions, the asymptotic free jet height as well as the corresponding shock geometry were determined.
An estimation of distribution method for infrared target detection based on Copulas
NASA Astrophysics Data System (ADS)
Wang, Shuo; Zhang, Yiqun
2015-10-01
Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of merit functions, which determines the threshold for a test. Generally, merit functions are regarded as Gaussian, and on this basis the distribution is estimated, which is true for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods such as the dynamic programming algorithm (DPA) are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure the correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between an actual distribution and its approximation may occasionally exceed 30 percent, and this error grows as it propagates. Hence, in this paper, we propose a novel estimation of distribution method based on Copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation merely depends on the form of merit functions and the structure of a tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
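A minimal sketch of the copula idea, assuming a Gaussian copula and synthetic merit-function samples (the paper's copula family and threshold computation may differ): the marginals are mapped to normal scores through their empirical CDFs, the copula correlation is estimated from the scores, and a joint exceedance probability is then evaluated on the fitted copula.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Synthetic samples of two cross-correlated, non-Gaussian merit functions.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=5000)
m1, m2 = np.exp(z[:, 0]), z[:, 1] ** 2 + z[:, 1]

def normal_scores(x):
    """Map a sample to normal scores through its empirical CDF."""
    u = rankdata(x) / (len(x) + 1.0)
    return norm.ppf(u)

scores = np.column_stack([normal_scores(m1), normal_scores(m2)])
rho = np.corrcoef(scores.T)[0, 1]
print(f"estimated copula correlation: {rho:.2f}")

# Joint exceedance P(m1 > 90th pct, m2 > 90th pct) under the fitted Gaussian
# copula, by Monte Carlo, compared with the independence assumption.
sim = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=200000)
u_sim = norm.cdf(sim)
joint = np.mean((u_sim[:, 0] > 0.9) & (u_sim[:, 1] > 0.9))
print(f"joint exceedance: copula {joint:.4f} vs independence {0.1 * 0.1:.4f}")
```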
Registration of organs with sliding interfaces and changing topologies
NASA Astrophysics Data System (ADS)
Berendsen, Floris F.; Kotte, Alexis N. T. J.; Viergever, Max A.; Pluim, Josien P. W.
2014-03-01
Smoothness and continuity assumptions on the deformation field in deformable image registration do not hold for applications where the imaged objects have sliding interfaces. Recent extensions to deformable image registration that accommodate for sliding motion of organs are limited to sliding motion along approximately planar surfaces or cannot model sliding that changes the topological configuration in case of multiple organs. We propose a new extension to free-form image registration that is not limited in this way. Our method uses a transformation model that consists of uniform B-spline transformations for each organ region separately, which is based on segmentation of one image. Since this model can create overlapping regions or gaps between regions, we introduce a penalty term that minimizes this undesired effect. The penalty term acts on the surfaces of the organ regions and is optimized simultaneously with the image similarity. To evaluate our method registrations were performed on publicly available inhale-exhale CT scans for which performances of other methods are known. Target registration errors are computed on dense landmark sets that are available with these datasets. On these data our method outperforms the other methods in terms of target registration error and, where applicable, also in terms of overlap and gap volumes. The approximation of the other methods of sliding motion along planar surfaces is reasonably well suited for the motion present in the lung data. The ability of our method to handle sliding along curved boundaries and for changing region topology configurations was demonstrated on synthetic images.
2013-01-01
Locked Nucleic Acids (LNAs) are RNA analogues with an O2′-C4′ methylene bridge which locks the sugar into a C3′-endo conformation. This enhances hybridization to DNA and RNA, making LNAs useful in microarrays and potential therapeutics. Here, the LNA, L(CAAU), provides a simplified benchmark for testing the ability of molecular dynamics (MD) to approximate nucleic acid properties. LNA χ torsions and partial charges were parametrized to create AMBER parm99_LNA. The revisions were tested by comparing MD predictions with AMBER parm99 and parm99_LNA against a 200 ms NOESY NMR spectrum of L(CAAU). NMR indicates an A-Form equilibrium ensemble. In 3000 ns simulations starting with an A-form structure, parm99_LNA and parm99 provide 66% and 35% agreement, respectively, with NMR NOE volumes and 3J-couplings. In simulations of L(CAAU) starting with all χ torsions in a syn conformation, only parm99_LNA is able to repair the structure. This implies methods for parametrizing force fields for nucleic acid mimics can reasonably approximate key interactions and that parm99_LNA will improve reliability of MD studies for systems with LNA. A method for approximating χ population distribution on the basis of base to sugar NOEs is also introduced. PMID:24377321
NASA Astrophysics Data System (ADS)
Rezaeian, P.; Ataenia, V.; Shafiei, S.
2017-12-01
In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on multipole moment expansion. The flux of the photons inside the irradiation cell is introduced as a function of monopole, dipole and quadrupole moments in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are specified by direct integration. To confirm the validity of the presented method, the flux distribution inside the irradiation cell was determined utilizing MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. In order to show that the present method is a good approximation to determine the flux in the irradiation cell, the values of the multipole moments were obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method leads to reasonable results for all source distributions, even without any symmetry, which makes it a powerful tool for source load planning.
A DEIM Induced CUR Factorization
2015-09-18
We derive a CUR approximate matrix factorization based on the Discrete Empirical Interpolation Method (DEIM). For a given matrix A, such a factorization provides a ... CUR approximations based on leverage scores.
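A compact sketch of a DEIM-induced CUR factorization under the usual formulation (greedy DEIM index selection applied to the leading singular vectors); the report's exact variant may differ in details.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM index selection from the columns of an orthonormal basis U."""
    n, k = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, k):
        c = np.linalg.solve(U[p, :j], U[p, j])     # interpolate column j at chosen indices
        r = U[:, j] - U[:, :j] @ c                 # residual of the interpolation
        p.append(int(np.argmax(np.abs(r))))        # pick the largest residual entry
    return np.array(p)

def deim_cur(A, k):
    """CUR factorization with DEIM-selected rows and columns (illustrative sketch)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    rows = deim_indices(U[:, :k])          # row indices from left singular vectors
    cols = deim_indices(Vt.T[:, :k])       # column indices from right singular vectors
    C, R = A[:, cols], A[rows, :]
    Umid = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, Umid, R, rows, cols

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 8)) @ rng.standard_normal((8, 40))   # exactly rank-8 matrix
C, Umid, R, rows, cols = deim_cur(A, k=8)
print("relative error:", np.linalg.norm(A - C @ Umid @ R) / np.linalg.norm(A))
```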
The Torsion of Members Having Sections Common in Aircraft Construction
NASA Technical Reports Server (NTRS)
Trayer, George W; March, H W
1930-01-01
Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine by experiment and theoretical investigation how accurate the more common of these formulas are and on what assumptions they are founded and, if none of the proposed methods proved to be reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods with formulas both rigorous and approximate.
NASA Astrophysics Data System (ADS)
Yu, W.; Gao, C.-Z.; Zhang, Y.; Zhang, F. S.; Hutton, R.; Zou, Y.; Wei, B.
2018-03-01
We calculate electron capture and ionization cross sections of N2 impacted by the H+ projectile at keV energies. To this end, we employ the time-dependent density-functional theory coupled nonadiabatically to molecular dynamics. To avoid the explicit treatment of the complex density matrix in the calculation of cross sections, we propose an approximate method based on the assumption of constant ionization rate over the period of the projectile passing the absorbing boundary. Our results agree reasonably well with experimental data and semi-empirical results within the measurement uncertainties in the considered energy range. The discrepancies are mainly attributed to the inadequate description of exchange-correlation functional and the crude approximation for constant ionization rate. Although the present approach does not predict the experiments quantitatively for collision energies below 10 keV, it is still helpful to calculate total cross sections of ion-molecule collisions within a certain energy range.
ElMasry, Gamal; Nakauchi, Shigeki
2016-03-01
A simulation method for approximating spectral signatures of minced meat samples was developed depending on the concentrations and optical properties of the major chemical constituents. Minced beef samples of different compositions, scanned on a near-infrared spectroscopy system and on a hyperspectral imaging system, were examined. Chemical composition determined heuristically and optical properties collected from authenticated references were used to simulate and approximate the samples' spectral signatures. In the short-wave infrared range, the resulting spectrum equals the sum of the absorption of three individual absorbers, that is, water, protein, and fat. By assuming homogeneous distributions of the main chromophores in the mince samples, the obtained absorption spectra are found to be a linear combination of the absorption spectra of the major chromophores present in the sample. Results revealed that the developed models were robust enough to derive spectral signatures of minced meat samples, with an agreement index above 0.90 and a ratio of performance to deviation above 1.4.
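The linear-mixing idea can be sketched as below, where the simulated absorption spectrum is the concentration-weighted sum of chromophore spectra; the wavelength grid, absorption values, and composition are made-up placeholders rather than published coefficients.

```python
import numpy as np

wavelengths = np.linspace(1000, 1700, 8)                      # nm
absorbers = {
    # Placeholder absorption spectra for the three major chromophores.
    "water":   np.array([0.2, 0.3, 0.5, 1.2, 1.0, 0.8, 0.9, 1.4]),
    "protein": np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.5, 0.4]),
    "fat":     np.array([0.3, 0.2, 0.4, 0.3, 0.5, 0.9, 0.6, 0.5]),
}
composition = {"water": 0.70, "protein": 0.20, "fat": 0.10}   # example mass fractions

# Simulated sample spectrum = concentration-weighted sum of chromophore spectra.
spectrum = sum(composition[name] * spec for name, spec in absorbers.items())
for wl, a in zip(wavelengths, spectrum):
    print(f"{wl:6.0f} nm  absorbance {a:.3f}")
```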
A numerical and experimental study on the nonlinear evolution of long-crested irregular waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goullet, Arnaud; Choi, Wooyoung; Division of Ocean Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701
2011-01-15
The spatial evolution of nonlinear long-crested irregular waves characterized by the JONSWAP spectrum is studied numerically using a nonlinear wave model based on a pseudospectral (PS) method and the modified nonlinear Schroedinger (MNLS) equation. In addition, new laboratory experiments with two different spectral bandwidths are carried out and a number of wave probe measurements are made to validate these two wave models. Strongly nonlinear wave groups are observed experimentally and their propagation and interaction are studied in detail. For the comparison with experimental measurements, the two models need to be initialized with care and the initialization procedures are described. The MNLS equation is found to approximate the wave fields with a relatively smaller Benjamin-Feir index reasonably well, but the phase error increases as the propagation distance increases. The PS model with different orders of nonlinear approximation is solved numerically, and it is shown that the fifth-order model agrees well with our measurements prior to wave breaking for both spectral bandwidths.
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
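As a hedged sketch of the idea (our notation, not necessarily the report's): with a crude model f_c and a refined model f_r, the conventional scaling approach uses the constant factor beta_0 = f_r(x_0)/f_c(x_0), whereas the GLA method lets the factor vary linearly with the design variables x,

\[
\beta(x) \;\approx\; \beta(x_0) + \nabla\beta(x_0)^{\mathsf T}(x - x_0),
\qquad
f_r(x) \;\approx\; \beta(x)\, f_c(x),
\]

so that derivative information from both models at x_0 extends the range over which the crude model can stand in for the refined one.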
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Astrophysics Data System (ADS)
Hooshyar, M.; Wang, D.
2016-12-01
The empirical proportionality relationship, which indicates that the ratio of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in shallow water table environment under the following boundary conditions: 1) the soil is saturated at the land surface; and 2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and hydrostatic soil moisture profile between the no-flux boundary and water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse textured soils.
NASA Astrophysics Data System (ADS)
Hooshyar, Milad; Wang, Dingbao
2016-08-01
The empirical proportionality relationship, which indicates that the ratio of cumulative surface runoff and infiltration to their corresponding potentials are equal, is the basis of the extensively used Soil Conservation Service Curve Number (SCS-CN) method. The objective of this paper is to provide the physical basis of the SCS-CN method and its proportionality hypothesis from the infiltration excess runoff generation perspective. To achieve this purpose, an analytical solution of Richards' equation is derived for ponded infiltration in shallow water table environment under the following boundary conditions: (1) the soil is saturated at the land surface; and (2) there is a no-flux boundary which moves downward. The solution is established based on the assumptions of negligible gravitational effect, constant soil water diffusivity, and hydrostatic soil moisture profile between the no-flux boundary and water table. Based on the derived analytical solution, the proportionality hypothesis is a reasonable approximation for rainfall partitioning at the early stage of ponded infiltration in areas with a shallow water table for coarse textured soils.
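For reference, the proportionality hypothesis underlying the SCS-CN method can be written in the standard notation (not taken verbatim from the abstract):

\[
\frac{F}{S} \;=\; \frac{Q}{P - I_a}
\quad\Longrightarrow\quad
Q \;=\; \frac{(P - I_a)^2}{P - I_a + S},
\]

where P is rainfall depth, Q surface runoff, F cumulative retention (infiltration), S the potential maximum retention, and I_a the initial abstraction.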
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward neural network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto et al. (1983) to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
FBC: a flat binary code scheme for fast Manhattan hash retrieval
NASA Astrophysics Data System (ADS)
Kong, Yan; Wu, Fuzhang; Gao, Lifa; Wu, Yanjun
2018-04-01
Hash coding is a widely used technique in approximate nearest neighbor (ANN) search, especially in document search and multimedia (such as image and video) retrieval. Based on the difference in distance measurement, hash methods are generally classified into two categories: Hamming hashing and Manhattan hashing. Benefitting from better neighborhood structure preservation, Manhattan hashing methods outperform earlier methods in search effectiveness. However, because they use decimal arithmetic operations instead of bit operations, Manhattan hashing becomes a more time-consuming process, which significantly decreases the overall search efficiency. To solve this problem, we present an intuitive hash scheme which uses a Flat Binary Code (FBC) to encode the data points. As a result, the decimal arithmetic used in previous Manhattan hashing can be replaced by the more efficient XOR operator. The final experiments show that, with a reasonable growth in memory space, our FBC speeds up search by more than 80% on average without any loss of search accuracy when compared to state-of-the-art Manhattan hashing methods.
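The following toy sketch (not the authors' FBC construction) illustrates the efficiency argument: Manhattan distance on multi-bit quantized codes needs decimal arithmetic per dimension, whereas packed binary codes can be compared with a single XOR plus popcount. The 2-bit quantization here is a generic placeholder.

    import numpy as np

    def manhattan_distance(a, b):
        # Decimal arithmetic on multi-bit (here 2-bit) quantized codes.
        return int(np.abs(a.astype(int) - b.astype(int)).sum())

    def hamming_distance(a_bits, b_bits):
        # XOR + popcount on packed binary codes: far cheaper per comparison.
        return int(bin(a_bits ^ b_bits).count("1"))

    # Toy example: 4 dimensions quantized to 2 bits each.
    q1 = np.array([3, 0, 2, 1], dtype=np.uint8)
    q2 = np.array([1, 0, 3, 1], dtype=np.uint8)
    print(manhattan_distance(q1, q2))   # 3

    # The same codes packed into single integers for bitwise comparison.
    b1 = int("".join(f"{int(v):02b}" for v in q1), 2)
    b2 = int("".join(f"{int(v):02b}" for v in q2), 2)
    print(hamming_distance(b1, b2))     # bit-level distance, not identical to Manhattan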
A simple orbit-attitude coupled modelling method for large solar power satellites
NASA Astrophysics Data System (ADS)
Li, Qingjun; Wang, Bo; Deng, Zichen; Ouyang, Huajiang; Wei, Yi
2018-04-01
A simple modelling method is proposed to study the orbit-attitude coupled dynamics of large solar power satellites based on the natural coordinate formulation. The generalized coordinates are composed of the Cartesian coordinates of two points and the Cartesian components of two unit vectors instead of Euler angles and angular velocities, which is the reason for its simplicity. Firstly, in order to extend the natural coordinate formulation to account for the gravitational force and gravity gradient torque of a rigid body, a Taylor series expansion is adopted to approximate the gravitational potential energy. The equations of motion are constructed through constrained Hamilton's equations. Then, an energy- and constraint-conserving algorithm is presented to solve the differential-algebraic equations. Finally, the proposed method is applied to simulate the orbit-attitude coupled dynamics and control of a large solar power satellite considering gravity gradient torque and solar radiation pressure. This method is also applicable to dynamic modelling of other rigid multibody aerospace systems.
Stability-Derivative Determination from Flight Data
NASA Technical Reports Server (NTRS)
Holowicz, Chester H.; Holleman, Euclid C.
1958-01-01
A comprehensive discussion of the various factors affecting the determination of stability and control derivatives from flight data is presented based on the experience of the NASA High-Speed Flight Station. Factors relating to test techniques, determination of mass characteristics, instrumentation, and methods of analysis are discussed. For most longitudinal-stability-derivative analyses, simple equations utilizing period and damping have been found to be as satisfactory as more comprehensive methods. The graphical time-vector method has been the basis of lateral-derivative analysis, although simple approximate methods can be useful if applied with caution. Control effectiveness has generally been obtained by relating the peak acceleration to the rapid control input, and consideration must be given to aerodynamic contributions if reasonable accuracy is to be realized. Because of the many factors involved in the determination of stability derivatives, it is believed that the primary stability and control derivatives are probably accurate to within 10 to 25 percent, depending upon the specific derivative. Static-stability derivatives at low angle of attack show the greatest accuracy.
School-University Partnerships in Action: Concepts, Cases,
ERIC Educational Resources Information Center
Sirotnik, Kenneth A., Ed.; Goodlad, John I., Ed.
A general paradigm for ideal collaboration between schools and universities is proposed. It is based on a mutually collaborative arrangement between equal partners working together to meet self-interests while solving common problems. It is suggested that reasonable approximations to this ideal have great potential to effect significant…
NASA Astrophysics Data System (ADS)
Cheng, Rongjun; Sun, Fengxin; Wei, Qi; Wang, Jufeng
2018-02-01
The space-fractional advection-dispersion equation (SFADE) can describe particle transport in a variety of fields more accurately than classical models with integer-order derivatives. Because of the nonlocal property of the integro-differential operator of the space-fractional derivative, the fractional model is very challenging to deal with, and few numerical studies have been reported in the literature. In this paper, a numerical analysis of the two-dimensional SFADE is carried out by the element-free Galerkin (EFG) method. The trial functions for the SFADE are constructed by the moving least-squares (MLS) approximation. The energy functional is formulated from the Galerkin weak form. Employing the energy functional minimization procedure, the final algebraic equation system is obtained. The Riemann-Liouville operator is discretized by the Grünwald formula. With the center difference method, the EFG method and the Grünwald formula, fully discrete approximation schemes for the SFADE are established. The computed approximate solutions are compared with exact results and with available results from other well-known methods, and are presented in tables and graphs. The presented results demonstrate the validity, efficiency and accuracy of the proposed techniques. Furthermore, the error is computed, and the proposed method shows reasonable convergence rates in the spatial and temporal discretizations.
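As a minimal, self-contained sketch of the Grünwald discretization mentioned above (uniform grid, order 0 < alpha < 1; this is the standard unshifted formula, not the paper's full EFG scheme):

    import numpy as np

    def gl_weights(alpha, n):
        # Recursive Grunwald-Letnikov weights g_k = (-1)^k * C(alpha, k).
        g = np.empty(n + 1)
        g[0] = 1.0
        for k in range(1, n + 1):
            g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
        return g

    def gl_fractional_derivative(f_vals, alpha, h):
        # Left-sided Riemann-Liouville derivative approximated on a uniform grid.
        n = len(f_vals)
        g = gl_weights(alpha, n)
        d = np.zeros(n)
        for i in range(n):
            d[i] = np.dot(g[: i + 1], f_vals[i::-1]) / h ** alpha
        return d

    # Example: D^0.5 of f(x) = x on [0, 1]; the exact value is 2*sqrt(x/pi).
    x = np.linspace(0.0, 1.0, 201)
    approx = gl_fractional_derivative(x, 0.5, x[1] - x[0])
    exact = 2.0 * np.sqrt(x / np.pi)
    print(np.max(np.abs(approx[1:] - exact[1:])))  # small discretization error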
Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Reich, Simeon; Rosen, I. G.
1988-01-01
An abstract framework and convergence theory is developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided which are sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to a sequence of approximating finite dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators along with some applications are presented and discussed.
Hybrid Method for Power Control Simulation of a Single Fluid Plasma Thruster
NASA Astrophysics Data System (ADS)
Jaisankar, S.; Sheshadri, T. S.
2018-05-01
Propulsive plasma flow through a cylindrical-conical diverging thruster is simulated by a power-controlled hybrid method to obtain the basic flow, thermodynamic and electromagnetic variables. The simulation is based on a single-fluid model in which the electromagnetics are described by the potential Poisson equation, Maxwell's equations and Ohm's law, while the compressible fluid dynamics are described by the Navier-Stokes equations in cylindrical form. The proposed method solves the electromagnetics and fluid dynamics separately, both to segregate the two prominent scales for efficient computation and to deliver voltage-controlled rated power. The magnetic transport is solved for steady state while the fluid dynamics is allowed to evolve in time along with an electromagnetic source, using schemes based on generalized finite-difference discretization. The multistep methodology with power control is employed to simulate fully ionized propulsive flow of argon plasma through the thruster. The numerical solution shows convergence of every part of the solver, including grid stability, so that the multistep hybrid method converges at the rated power delivery. The simulation results are reasonably in agreement with the reported physics of plasma flow in the thruster, indicating the potential utility of this hybrid computational framework, especially when the single-fluid approximation of the plasma is relevant.
REASONS FOR ELECTRONIC CIGARETTE USE BEYOND CIGARETTE SMOKING CESSATION: A CONCEPT MAPPING APPROACH
Soule, Eric K.; Rosas, Scott R.; Nasim, Aashir
2016-01-01
Introduction: Electronic cigarettes (ECIGs) continue to grow in popularity; however, limited research has examined reasons for ECIG use. Methods: This study used an integrated, mixed-method participatory research approach called concept mapping (CM) to characterize and describe adults' reasons for using ECIGs. A total of 108 adults completed a multi-module online CM study that consisted of brainstorming statements about their reasons for ECIG use, sorting each statement into conceptually similar categories, and then rating each statement based on whether it represented a reason why they had used an ECIG in the past month. Results: Participants brainstormed a total of 125 unique statements related to their reasons for ECIG use. Multivariate analyses generated a map revealing 11 interrelated components or domains that characterized their reasons for use. Importantly, reasons related to Cessation Methods, Perceived Health Benefits, Private Regard, Convenience and Conscientiousness were rated significantly higher than other categories/types of reasons related to ECIG use (p < .05). There were also significant differences in participants' endorsement of reasons based on their demography and ECIG behaviors. Conclusions: This study shows that ECIG users are motivated to use ECIGs for many reasons. ECIG regulations should address these reasons for ECIG use in addition to smoking cessation. PMID:26803400
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Wang, J
2016-06-15
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00% and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
An approach for spherical harmonic analysis of non-smooth data
NASA Astrophysics Data System (ADS)
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications is made available.
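For orientation (our notation, not the paper's), the coefficients being evaluated are the projections

\[
C_{lm} \;=\; \int_{\Omega} f(\theta,\lambda)\, \bar Y_{lm}(\theta,\lambda)\, \mathrm{d}\Omega ,
\]

and the method evaluates this surface integral cell by cell on the equiangular grid, with the integrand represented by a bilinear function inside each cell so that data with piecewise-continuous slopes are handled consistently at cell boundaries.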
On Measuring Quantitative Interpretations of Reasonable Doubt
ERIC Educational Resources Information Center
Dhami, Mandeep K.
2008-01-01
Beyond reasonable doubt represents a probability value that acts as the criterion for conviction in criminal trials. I introduce the membership function (MF) method as a new tool for measuring quantitative interpretations of reasonable doubt. Experiment 1 demonstrated that three different methods (i.e., direct rating, decision theory based, and…
NASA Astrophysics Data System (ADS)
Hu, Jinyan; Li, Li; Yang, Yunfeng
2017-06-01
A hierarchical and successive approximate registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. There are two major novelties in the proposed method. First, hierarchical registration based on the wavelet transform is used; the approximation image of the wavelet transform is selected as the registered object. Second, a successive approximation registration method is used to accomplish the non-rigid medical image registration, i.e., local regions of the image pair are first registered roughly based on thin-plate splines, and then the current rough registration result is selected as the object to be registered in the following registration procedure. Experiments show that the proposed method is effective in registering non-rigid medical images.
BRYNTRN: A baryon transport computer code, computation procedures and data base
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank
1988-01-01
The development is described of an interaction data base and a numerical solution to the transport of baryons through the arbitrary shield material based on a straight ahead approximation of the Boltzmann equation. The code is most accurate for continuous energy boundary values but gives reasonable results for discrete spectra at the boundary with even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).
Proposal for a Joint NASA/KSAT Ka-band RF Propagation Terminal at Svalbard, Norway
NASA Technical Reports Server (NTRS)
Volosin, Jeffrey; Acosta, Roberto; Nessel, James; McCarthy, Kevin; Caroglanian, Armen
2010-01-01
This slide presentation discusses the placement of a Ka-band RF propagation terminal at Svalbard, Norway. The Near Earth Network (NEN) station would be managed by Kongsberg Satellite Services (KSAT) and would benefit both NASA and KSAT. Details of the proposed NASA/KSAT campaign and the responsibilities each party would agree to are given. There are several reasons for the placement; a primary reason is comparison with the Alaska site. Based on climatological similarities and differences with Alaska, the Svalbard site is expected to have good radiometer/beacon agreement approximately 99% of the time.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
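A schematic sketch of the state-dependent convex blending described above; the weighting schedule and the two value-function approximators below are placeholders for illustration, not the paper's StaF/R-MBRL constructions.

    import numpy as np

    def blend_weight(x, r_inner=1.0, r_outer=3.0):
        # Smoothly favor the regional (R-MBRL) approximation near the origin
        # and the local StaF approximation far from it (placeholder schedule).
        r = np.linalg.norm(x)
        return float(np.clip((r - r_inner) / (r_outer - r_inner), 0.0, 1.0))

    def value_estimate(x, v_staf, v_rmbrl):
        # Convex combination of the two value-function approximations.
        lam = blend_weight(x)
        return lam * v_staf(x) + (1.0 - lam) * v_rmbrl(x)

    # Toy quadratic stand-ins for the two approximators.
    v_staf = lambda x: 1.1 * float(x @ x)
    v_rmbrl = lambda x: 0.9 * float(x @ x)
    print(value_estimate(np.array([0.2, 0.1]), v_staf, v_rmbrl))  # dominated by the regional estimate
    print(value_estimate(np.array([4.0, 0.0]), v_staf, v_rmbrl))  # dominated by the local StaF estimate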
Fast Simulations of Gas Sloshing and Cold Front Formation
NASA Technical Reports Server (NTRS)
Roediger, E.; ZuHone, J. A.
2011-01-01
We present a simplified and fast method for simulating minor mergers between galaxy clusters. Instead of following the evolution of the dark matter halos directly by the N-body method, we employ a rigid potential approximation for both clusters. The simulations are run in the rest frame of the more massive cluster and account for the resulting inertial accelerations in an optimised way. We test the reliability of this method for studies of minor merger induced gas sloshing by performing a one-to-one comparison between our simulations and hydro+N-body ones. We find that the rigid potential approximation reproduces the sloshing-related features well except for two artefacts: the temperature just outside the cold fronts is slightly over-predicted, and the outward motion of the cold fronts is delayed by typically 200 Myr. We discuss reasons for both artefacts.
Fast Simulations of Gas Sloshing and Cold Front Formation
NASA Technical Reports Server (NTRS)
Roediger, E.; ZuHone, J. A.
2012-01-01
We present a simplified and fast method for simulating minor mergers between galaxy clusters. Instead of following the evolution of the dark matter halos directly by the N-body method, we employ a rigid potential approximation for both clusters. The simulations are run in the rest frame of the more massive cluster and account for the resulting inertial accelerations in an optimised way. We test the reliability of this method for studies of minor merger induced gas sloshing by performing a one-to-one comparison between our simulations and hydro+N-body ones. We find that the rigid potential approximation reproduces the sloshing-related features well except for two artifacts: the temperature just outside the cold fronts is slightly over-predicted, and the outward motion of the cold fronts is delayed by typically 200 Myr. We discuss reasons for both artifacts.
An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
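In schematic form (our notation), the approximate analysis expands the member forces to first order in the design variables v (member areas and joint coordinates) about the current design v_0:

\[
F_i(v) \;\approx\; F_i(v_0) \;+\; \sum_j \left.\frac{\partial F_i}{\partial v_j}\right|_{v_0} (v_j - v_{0,j}),
\qquad
\sigma_i \;\approx\; \frac{F_i(v)}{A_i},
\]

so that member stresses and Euler-buckling constraints can be evaluated cheaply inside the optimizer between full structural reanalyses.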
Women's reasons for choosing abortion method: A systematic literature review.
Kanstrup, Charlotte; Mäkelä, Marjukka; Hauskov Graungaard, Anette
2017-07-01
We aim to describe and classify reasons behind women's choice between medical and surgical abortion. A systematic literature review was conducted in PubMed and PsycINFO in October 2015. The subjects were women in early pregnancy opting for abortion at clinics or hospitals in high-income countries. We extracted women's reasons for choice of abortion method and analysed these qualitatively, looking at main reasons for choosing either medical or surgical abortion. Reasons for choice of method were classified to five main groups: technical nature of the intervention, fear of complications, fear of surgery or anaesthesia, timing and sedation. Reasons for selecting medical abortion were often based on the perception of the method being 'more natural' and the wish to have abortion in one's home in addition to fear of complications. Women who opted for surgical abortion appreciated the quicker process, viewed it as the safer option, and wished to avoid pain and excess bleeding. Reasons were often based on emotional reactions, previous experiences and a lack of knowledge about the procedures. Some topics such as pain or excess bleeding received little attention. Overall the quality of the studies was low, most studies were published more than 10 years ago, and the generalisability of the findings was poor. Women did not base their choice of abortion method only on rational information from professionals but also on emotions and especially fears. Support techniques for a more informed choice are needed. Recent high-quality studies in this area are lacking.
Decay of Far-Flowfield in Trailing Vortices
NASA Technical Reports Server (NTRS)
Baldwin, B. S.; Chigier, N. A.; Sheaffer, Y. S.
1973-01-01
Methods for reduction of velocities in trailing vortices of large aircraft are of current interest for the purpose of shortening the waiting time between landings at central airports. We have made finite-difference calculations of the flow in turbulent wake vortices as an aid to interpretation of wind-tunnel and flight experiments directed toward that end. Finite-difference solutions are capable of adding flexibility to such investigations if they are based on an adequate model of turbulence. Interesting developments have been taking place in the knowledge of turbulence that may lead to a complete theory in the future. In the meantime, approximate methods that yield reasonable agreement with experiment are appropriate. The simplified turbulence model we have selected contains features that account for the major effects disclosed by more sophisticated models in which the parameters are not yet established. Several puzzles are thereby resolved that arose in previous theoretical investigations of wake vortices.
Designing the optimal shutter sequences for the flutter shutter imaging method
NASA Astrophysics Data System (ADS)
Jelinek, Jan
2010-04-01
Acquiring iris or face images of moving subjects at larger distances using a flash to prevent motion blur quickly runs into eye-safety concerns as the acquisition distance is increased. For that reason, the flutter shutter method recently proposed by Raskar et al. has generated considerable interest in the biometrics community. This paper concerns the design of shutter sequences that produce the best images. The number of possible sequences grows exponentially in both the subject's motion velocity and the desired exposure value, with the majority of them being useless. Because the exact solution leads to an intractable mixed integer programming problem, we propose an approximate solution based on pre-screening the sequences according to the distribution of roots of their Fourier transform. A very fast algorithm utilizing Jury's criterion allows the testing to be done without explicitly computing the roots, making the approach practical for moderately long sequences.
NASA Astrophysics Data System (ADS)
Kitagawa, Yuya; Akinaga, Yoshinobu; Kawashima, Yukio; Jung, Jaewoon; Ten-no, Seiichiro
2012-06-01
A QM/MM (quantum-mechanical/molecular-mechanical) molecular-dynamics approach based on the generalized hybrid-orbital (GHO) method, in conjunction with second-order perturbation (MP2) theory and the second-order approximate coupled-cluster (CC2) model, is employed to calculate electronic properties accounting for a protein environment. Circular dichroism (CD) spectra originating from chiral disulfide bridges of oxytocin and insulin at room temperature are computed. It is shown that the sampling of thermal fluctuations of molecular geometries facilitated by the GHO-MD method plays an important role in the obtained spectra. It is demonstrated that, while the protein environment in an oxytocin molecule has a significant electrostatic influence on its chiral center, this influence is compensated by solvent-induced charges, which gives a reasonable explanation of experimental observations. GHO-MD simulations starting from different experimental structures of insulin indicate that the existence of disulfide bridges with negative dihedral angles is crucial.
Electronic properties of excess Cr at Fe site in FeCr{sub 0.02}Se alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Sandeep, E-mail: sandeepk.iitb@gmail.com; Singh, Prabhakar P.
2015-06-24
We have studied the effect of substituting excess transition-metal chromium (Cr) on the Fe sub-lattice on the electronic structure of the iron-selenide alloy FeCr{sub 0.02}Se. In our calculations, we used the Korringa-Kohn-Rostoker coherent potential approximation method in the atomic sphere approximation (KKR-ASA-CPA). We obtained a different band structure for this alloy with respect to the parent FeSe, which may be the reason for the change in their superconducting properties. We performed unpolarized calculations for the FeCr{sub 0.02}Se alloy in terms of the density of states (DOS) and Fermi surfaces. The local density approximation (LDA) is used for the exchange-correlation potential.
Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M
2010-04-01
In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. This methodological study investigated three different VS measurement methods against a reference method based on serial slice volume estimates. The approximation methods were based on: (i) a single diameter, (ii) three orthogonal diameters, or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency of the approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. These include greater retest errors than area-based measurements (25% and 15%, respectively), and the fact that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can, furthermore, be more reliable for smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take into account growth in multiple dimensions instead.
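As an illustrative sketch (not the study's empirically fitted coefficients), diameter- and area-based volume approximations of the kind compared above typically take ellipsoid-like forms; the coefficients below are the conventional geometric choices and would be replaced by the empirical proportionality coefficients the paper derives.

    import math

    def volume_from_single_diameter(d_max, k=math.pi / 6.0):
        # Sphere-like approximation V = k * d^3; k = pi/6 is the conventional
        # geometric choice, not the study's empirical coefficient.
        return k * d_max ** 3

    def volume_from_three_diameters(d1, d2, d3, k=math.pi / 6.0):
        # Ellipsoid-like approximation V = k * d1 * d2 * d3.
        return k * d1 * d2 * d3

    def volume_from_max_area(a_max, d_perp, k=2.0 / 3.0):
        # Area-based approximation V = k * A_max * d_perp (exact for an ellipsoid;
        # illustrative form only).
        return k * a_max * d_perp

    print(volume_from_single_diameter(2.0))            # cm^3 if diameters are in cm
    print(volume_from_three_diameters(2.0, 1.5, 1.2))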
An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.
Singh, Parth Raj; Wang, Yide; Chargé, Pascal
2017-03-30
In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.
A method for diagnosing time dependent faults using model-based reasoning systems
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.
1995-01-01
This paper explores techniques to apply model-based reasoning to equipment and systems which exhibit dynamic behavior (that which changes as a function of time). The model-based system of interest is KATE-C (Knowledge based Autonomous Test Engineer) which is a C++ based system designed to perform monitoring and diagnosis of Space Shuttle electro-mechanical systems. Methods of model-based monitoring and diagnosis are well known and have been thoroughly explored by others. A short example is given which illustrates the principle of model-based reasoning and reveals some limitations of static, non-time-dependent simulation. This example is then extended to demonstrate representation of time-dependent behavior and testing of fault hypotheses in that environment.
Lecture-based versus problem-based learning in ethics education among nursing students.
Khatiban, Mahnaz; Falahan, Seyede Nayereh; Amini, Roya; Farahanchi, Afshin; Soltanian, Alireza
2018-01-01
Moral reasoning is a vital skill in the nursing profession. Teaching moral reasoning to students is necessary toward promoting nursing ethics. The aim of this study was to compare the effectiveness of problem-based learning and lecture-based methods in ethics education in improving (1) moral decision-making, (2) moral reasoning, (3) moral development, and (4) practical reasoning among nursing students. This is a repeated measurement quasi-experimental study. Participants and research context: The participants were nursing students in a University of Medical Sciences in west of Iran who were randomly assigned to the lecture-based (n = 33) or the problem-based learning (n = 33) groups. The subjects were provided nursing ethics education in four 2-h sessions. The educational content was similar, but the training methods were different. The subjects completed the Nursing Dilemma Test before, immediately after, and 1 month after the training. The data were analyzed and compared using the SPSS-16 software. Ethical considerations: The program was explained to the students, all of whom signed an informed consent form at the baseline. The two groups were similar in personal characteristics (p > 0.05). A significant improvement was observed in the mean scores on moral development in the problem-based learning compared with the lecture-based group (p < 0.05). Although the mean scores on moral reasoning improved in both the problem-based learning and the lecture-based groups immediately after the training and 1 month later, the change was significant only in the problem-based learning group (p < 0.05). The mean scores on moral decision-making, practical considerations, and familiarity with dilemmas were relatively similar for the two groups. The use of the problem-based learning method in ethics education enhances moral development among nursing students. However, further studies are needed to determine whether such method improves moral decision-making, moral reasoning, practical considerations, and familiarity with the ethical issues among nursing students.
Orban, Kristina; Ekelin, Maria; Edgren, Gudrun; Sandgren, Olof; Hovbrandt, Pia; Persson, Eva K
2017-09-11
Outcome- or competency-based education is well established in medical and health sciences education. Curricula are based on courses where students develop their competences and assessment is also usually course-based. Clinical reasoning is an important competence, and the aim of this study was to monitor and describe students' progression in professional clinical reasoning skills during health sciences education using observations of group discussions following the case method. In this qualitative study students from three different health education programmes were observed while discussing clinical cases in a modified Harvard case method session. A rubric with four dimensions - problem-solving process, disciplinary knowledge, character of discussion and communication - was used as an observational tool to identify clinical reasoning. A deductive content analysis was performed. The results revealed the students' transition over time from reasoning based strictly on theoretical knowledge to reasoning ability characterized by clinical considerations and experiences. Students who were approaching the end of their education immediately identified the most important problem and then focused on this in their discussion. Practice knowledge increased over time, which was seen as progression in the use of professional language, concepts, terms and the use of prior clinical experience. The character of the discussion evolved from theoretical considerations early in the education to clinical reasoning in later years. Communication within the groups was supportive and conducted with a professional tone. Our observations revealed progression in several aspects of students' clinical reasoning skills on a group level in their discussions of clinical cases. We suggest that the case method can be a useful tool in assessing quality in health sciences education.
Approximate Model Checking of PCTL Involving Unbounded Path Properties
NASA Astrophysics Data System (ADS)
Basu, Samik; Ghosh, Arka P.; He, Ru
We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as
Stochastic reconstructions of spectral functions: Application to lattice QCD
NASA Astrophysics Data System (ADS)
Ding, H.-T.; Kaczmarek, O.; Mukherjee, Swagato; Ohno, H.; Shu, H.-T.
2018-05-01
We present a detailed study of the applications of two stochastic approaches, stochastic optimization method (SOM) and stochastic analytical inference (SAI), to extract spectral functions from Euclidean correlation functions. SOM has the advantage that it does not require prior information. On the other hand, SAI is a more generalized method based on Bayesian inference. Under mean field approximation SAI reduces to the often-used maximum entropy method (MEM) and for a specific choice of the prior SAI becomes equivalent to SOM. To test the applicability of these two stochastic methods to lattice QCD, firstly, we apply these methods to various reasonably chosen model correlation functions and present detailed comparisons of the reconstructed spectral functions obtained from SOM, SAI and MEM. Next, we present similar studies for charmonia correlation functions obtained from lattice QCD computations using clover-improved Wilson fermions on large, fine, isotropic lattices at 0.75 and 1.5 Tc, Tc being the deconfinement transition temperature of a pure gluon plasma. We find that SAI and SOM give consistent results to MEM at these two temperatures.
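For context (standard lattice-QCD notation, not specific to this paper), the reconstruction inverts the finite-temperature spectral representation of the Euclidean correlator,

\[
G(\tau, T) \;=\; \int_0^{\infty} \mathrm{d}\omega\; \rho(\omega, T)\,
\frac{\cosh\!\left[\omega\left(\tau - \tfrac{1}{2T}\right)\right]}{\sinh\!\left(\tfrac{\omega}{2T}\right)},
\]

an ill-posed inversion because a smooth spectral function ρ(ω) must be inferred from G(τ) known only at a limited number of noisy τ points.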
Comment on “On the quantum theory of molecules” [J. Chem. Phys. 137, 22A544 (2012)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutcliffe, Brian T., E-mail: bsutclif@ulb.ac.be; Woolley, R. Guy
2014-01-21
In our previous paper [B. T. Sutcliffe and R. G. Woolley, J. Chem. Phys. 137, 22A544 (2012)] we argued that the Born-Oppenheimer approximation could not be based on an exact transformation of the molecular Schrödinger equation. In this Comment we suggest that the fundamental reason for the approximate nature of the Born-Oppenheimer model is the lack of a complete set of functions for the electronic space, and the need to describe the continuous spectrum using spectral projection.
Mississippi Labor Mobility Demonstration Project--Relocating the Unemployed: Dimensions of Success.
ERIC Educational Resources Information Center
Speight, John F.; And Others
The document provides an analysis of relocation stability of individuals relocated during the March, 1970-November, 1971 contract period. Data bases were 1,244 applicants with screening information and 401 individuals with follow-up interview information. Approximately one half were in new areas six months after being relocated. Reasons for…
Khakzad, Nima; Khan, Faisal; Amyotte, Paul
2015-07-01
Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents has not been so well established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of relevant data on major accidents, since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States.
Expert system for web based collaborative CAE
NASA Astrophysics Data System (ADS)
Hou, Liang; Lin, Zusheng
2006-11-01
An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, the experts' experiences, theories, typical examples and other related knowledge, which are used in the pre-processing stage of FEA, were categorized into analysis-process knowledge and object knowledge. Then, an integrated knowledge model based on the object-oriented method and the rule-based method is described. The integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning is presented. Finally, the analysis process of this expert system in a web-based CAE application is illustrated, and an analysis example of a machine tool column is presented to demonstrate the validity of the system.
NASA Astrophysics Data System (ADS)
Athy, Jeremy; Friedrich, Jeff; Delany, Eileen
2008-05-01
Egon Brunswik (1903-1955) first made an interesting distinction between perception and explicit reasoning, arguing that perception included quick estimates of an object’s size, nearly always resulting in good approximations in uncertain environments, whereas explicit reasoning, while better at achieving exact estimates, could often fail by wide margins. An experiment conducted by Brunswik to investigate these ideas was never published, and the only available information is a figure of the results presented in a posthumous book in 1956. We replicated and extended his study to gain insight into the procedures Brunswik used in obtaining his results. Explicit reasoning resulted in fewer errors, yet more extreme ones, than perception. Brunswik’s graphical analysis of the results led to different conclusions, however, than did a modern statistically-based analysis.
Dosimetric evaluation of intrafractional tumor motion by means of a robot driven phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richter, Anne; Wilbert, Juergen; Flentje, Michael
2011-10-15
Purpose: The aim of the work was to investigate the influence of intrafractional tumor motion to the accumulated (absorbed) dose. The accumulated dose was determined by means of calculations and measurements with a robot driven motion phantom. Methods: Different motion scenarios and compensation techniques were realized in a phantom study to investigate the influence of motion on image acquisition, dose calculation, and dose measurement. The influence of motion on the accumulated dose was calculated by employing two methods (a model based and a voxel based method). Results: Tumor motion resulted in a blurring of steep dose gradients and a reduction of dose at the periphery of the target. A systematic variation of motion parameters allowed the determination of the main influence parameters on the accumulated dose. The key parameters with the greatest influence on dose were the mean amplitude and the pattern of motion. Investigations on necessary safety margins to compensate for dose reduction have shown that smaller safety margins are sufficient, if the developed concept with optimized margins (OPT concept) was used instead of the standard internal target volume (ITV) concept. Both calculation methods were a reasonable approximation of the measured dose with the voxel based method being in better agreement with the measurements. Conclusions: Further evaluation of available systems and algorithms for dose accumulation are needed to create guidelines for the verification of the accumulated dose.
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time to integrate the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. Therefore, we look at symmetry, symplecticity and approximation of invariants of the proposed methods, which allows integration up to long times with reasonable accuracy. Computational efficiency is also our aim. Therefore, we make numerical computations in order to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.
An efficient linear-scaling CCSD(T) method based on local natural orbitals.
Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály
2013-09-07
An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.
Estimating ice particle scattering properties using a modified Rayleigh-Gans approximation
NASA Astrophysics Data System (ADS)
Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Verlinde, Johannes
2014-09-01
A modification to the Rayleigh-Gans approximation is made that includes self-interactions between different parts of an ice crystal, which both improves the accuracy of the Rayleigh-Gans approximation and extends its applicability to polarization-dependent parameters. This modified Rayleigh-Gans approximation is both efficient and reasonably accurate for particles with at least one dimension much smaller than the wavelength (e.g., dendrites at millimeter or longer wavelengths) or particles with sparse structures (e.g., low-density aggregates). Relative to the Generalized Multiparticle Mie method, backscattering reflectivities at horizontal transmit and receive polarization (HH) (ZHH) computed with this modified Rayleigh-Gans approach are about 3 dB more accurate than with the traditional Rayleigh-Gans approximation. For realistic particle size distributions and pristine ice crystals the modified Rayleigh-Gans approach agrees with the Generalized Multiparticle Mie method to within 0.5 dB for ZHH whereas for the polarimetric radar observables differential reflectivity (ZDR) and specific differential phase (KDP) agreement is generally within 0.7 dB and 13%, respectively. Compared to the A-DDA code, the modified Rayleigh-Gans approximation is several to tens of times faster if scattering properties for different incident angles and particle orientations are calculated. These accuracies and computational efficiencies are sufficient to make this modified Rayleigh-Gans approach a viable alternative to the Rayleigh-Gans approximation in some applications such as millimeter to centimeter wavelength radars and to other methods that assume simpler, less accurate shapes for ice crystals. This method should not be used on materials with dielectric properties much different from ice and on compact particles much larger than the wavelength.
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
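A minimal sketch of the sampling idea, assuming the substitution t = u/v that maps the well function W(u) = E1(u) = ∫_u^∞ e^(-t)/t dt onto the unit interval; plain stratified sampling stands in here for the LHS/OA designs the paper actually studies, and scipy.special.exp1 is used only as a benchmark.

    import numpy as np
    from scipy.special import exp1  # benchmark value of E1(u)

    def well_function_sampled(u, n_samples=10_000, rng=None):
        # E1(u) = int_0^1 exp(-u/v)/v dv after the substitution t = u/v.
        # One random sample per equal-width stratum of (0, 1).
        rng = np.random.default_rng(rng)
        jitter = np.maximum(rng.random(n_samples), 1e-12)  # avoid v = 0
        strata = (np.arange(n_samples) + jitter) / n_samples
        return float(np.mean(np.exp(-u / strata) / strata))

    u = 0.5
    print(well_function_sampled(u, rng=0), exp1(u))  # the two values should be close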
NASA Astrophysics Data System (ADS)
Tian, Xin; Li, Hua; Jiang, Xiaoyu; Xie, Jingping; Gore, John C.; Xu, Junzhong
2017-02-01
Two diffusion-based approaches, the CG (constant gradient) and FEXI (filtered exchange imaging) methods, have been previously proposed for measuring the transcytolemmal water exchange rate constant kin, but their accuracy and feasibility have not been comprehensively evaluated and compared. In this work, both computer simulations and cell experiments in vitro were performed to evaluate these two methods. Simulations were done with different cell diameters (5, 10, 20 μm), a broad range of kin values (0.02-30 s-1) and different SNRs, and simulated kin values were directly compared with the ground truth values. Human leukemia K562 cells were cultured and treated with saponin to selectively change cell transmembrane permeability. The agreement between the kin values measured by the two methods was also evaluated. The results suggest that, without noise, the CG method provides reasonably accurate estimation of kin, especially when it is smaller than 10 s-1, which is in the typical physiological range of many biological tissues. However, although the FEXI method overestimates kin even with corrections for the effects of extracellular water fraction, it provides reasonable estimates with practical SNRs and, more importantly, the fitted apparent exchange rate AXR showed approximately linear dependence on the ground truth kin. In conclusion, either the CG or the FEXI method provides a sensitive means to characterize variations in the transcytolemmal water exchange rate constant kin, although the accuracy and specificity are usually compromised. The non-imaging CG method provides more accurate estimation of kin, but is limited to a large volume of interest. Although the accuracy of FEXI is compromised by the extracellular volume fraction, it is capable of spatially mapping kin in practice.
Rectal temperature-based death time estimation in infants.
Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato
2016-03-01
In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following three methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. With Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to those of Ohno's method. The corrective factor is set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set as the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal isolation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude, considering other circumstances.
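For orientation, the Marshall and Hoare double exponential cooling model underlying Henssge's method has the general form (written here in its standard adult formulation; the infant-specific constants and corrective factors discussed above enter through A and B):

\[
\frac{T_r(t) - T_a}{T_0 - T_a} \;=\; A\, e^{B t} \;+\; (1 - A)\, e^{\frac{A B}{A - 1} t},
\]

where T_r(t) is the rectal temperature at time t after death, T_a the ambient temperature, T_0 the rectal temperature at death, and A, B constants determined from body weight and the corrective factor.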
Ghosh, Sujit K
2010-01-01
Bayesian methods are rapidly becoming popular tools for making statistical inference in various fields of science including biology, engineering, finance, and genetics. One of the key aspects of the Bayesian inferential method is its logical foundation, which provides a coherent framework to utilize not only empirical but also scientific information available to a researcher. Prior knowledge arising from scientific background, expert judgment, or previously collected data is used to build a prior distribution, which is then combined with current data via the likelihood function to characterize the current state of knowledge using the so-called posterior distribution. Bayesian methods allow the use of models of complex physical phenomena that were previously too difficult to estimate (e.g., using asymptotic approximations). Bayesian methods offer a means of more fully understanding issues that are central to many practical problems by allowing researchers to build integrated models based on hierarchical conditional distributions that can be estimated even with limited amounts of data. Furthermore, advances in numerical integration methods, particularly those based on Monte Carlo methods, have made it possible to compute the optimal Bayes estimators. However, there is a reasonably wide gap between the background of empirically trained scientists and the full weight of Bayesian statistical inference. Hence, one of the goals of this chapter is to bridge this gap by offering elementary to advanced concepts that emphasize linkages between standard approaches and full probability modeling via Bayesian methods.
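As a hedged illustration of the prior-to-posterior workflow sketched in this chapter abstract, the following Python fragment combines an assumed Beta prior with hypothetical binomial data and also estimates the posterior mean by Monte Carlo sampling; the numbers and distribution choices are illustrative assumptions, not content from the chapter.

```python
import numpy as np

# Hypothetical data and prior (assumptions for illustration only).
n_trials, n_success = 20, 7          # observed data
a_prior, b_prior = 2.0, 2.0          # Beta prior encoding mild prior knowledge

# Conjugate update: posterior is Beta(a + successes, b + failures).
a_post = a_prior + n_success
b_post = b_prior + (n_trials - n_success)
print("posterior mean (closed form):", a_post / (a_post + b_post))

# Monte Carlo alternative, mirroring the simulation-based computation of
# Bayes estimators mentioned above.
rng = np.random.default_rng(0)
samples = rng.beta(a_post, b_post, size=100_000)
print("posterior mean (Monte Carlo):", samples.mean())
print("95% credible interval:", np.percentile(samples, [2.5, 97.5]))
```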
Longitudinal studies of botulinum toxin in cervical dystonia: Why do patients discontinue therapy?
Jinnah, H A; Comella, Cynthia L; Perlmutter, Joel; Lungu, Codrin; Hallett, Mark
2018-06-01
Numerous studies have established botulinum toxin (BoNT) to be safe and effective for the treatment of cervical dystonia (CD). Despite its well-documented efficacy, there has been growing awareness that a significant proportion of CD patients discontinue therapy. The reasons for discontinuation are only partly understood. This summary describes longitudinal studies that provide information regarding the proportions of patients discontinuing BoNT therapy and the reasons for discontinuing. The data come predominantly from un-blinded long-term follow-up studies, registry studies, and patient-based surveys. All types of longitudinal studies provide strong evidence that BoNT is both safe and effective in the treatment of CD for many years. Overall, approximately one third of CD patients discontinue BoNT. The most common reason for discontinuing therapy is lack of benefit, often described as primary or secondary non-response. The apparent lack of response is only rarely related to true immune-mediated resistance to BoNT. Other reasons for discontinuing include side effects, inconvenience, and cost. Although BoNT is safe and effective in the treatment of the majority of patients with CD, approximately one third discontinue. The increasing awareness of a significant proportion of patients who discontinue should encourage further efforts to optimize administration of BoNT, to improve BoNT preparations to extend duration or reduce side effects, to develop add-on therapies that may mitigate swings in symptom severity, or to develop entirely novel treatment approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
Population genetics inference for longitudinally-sampled mutants under strong selection.
Lacerda, Miguel; Seoighe, Cathal
2014-11-01
Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
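To make the modelling choice concrete, here is a minimal sketch of the discrete Wright-Fisher model with selection that the diffusion methods approximate; the population size, selection coefficient and starting frequency are hypothetical, and the haploid, mutation-free formulation is a simplifying assumption rather than the authors' implementation.

```python
import numpy as np

def wright_fisher(N, s, p0, generations, replicates, seed=0):
    """Mutant frequency under a haploid Wright-Fisher model with selection s
    (no mutation); each generation is binomial sampling of N individuals."""
    rng = np.random.default_rng(seed)
    p = np.full(replicates, p0)
    for _ in range(generations):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection step
        p = rng.binomial(N, p_sel) / N                   # drift step
    return p

# Hypothetical strong-selection scenario.
freqs = wright_fisher(N=1000, s=0.3, p0=0.05, generations=20, replicates=5000)
print("mean frequency:", freqs.mean(), " fixed fraction:", np.mean(freqs == 1.0))
```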
Softcopy quality ruler method: implementation and validation
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Keelan, Brian W.; Chen, Junqing; Phillips, Jonathan B.; Chen, Ying
2009-01-01
A softcopy quality ruler method was implemented for the International Imaging Industry Association (I3A) Camera Phone Image Quality (CPIQ) Initiative. This work extends ISO 20462 Part 3 by virtue of creating reference digital images of known subjective image quality, complementing the hardcopy Standard Reference Stimuli (SRS). The softcopy ruler method was developed using images from a Canon EOS 1Ds Mark II D-SLR digital still camera (DSC) and a Kodak P880 point-and-shoot DSC. Images were viewed on an Apple 30-inch Cinema Display at a viewing distance of 34 inches. Ruler images were made for 16 scenes. Thirty ruler images were generated for each scene, representing ISO 20462 Standard Quality Scale (SQS) values of approximately 2 to 31 at an increment of one just noticeable difference (JND) by adjusting the system modulation transfer function (MTF). A Matlab GUI was developed to display the ruler and test images side-by-side with a user-adjustable ruler level controlled by a slider. A validation study was performed at Kodak, Vista Point Technology, and Aptina Imaging in which all three companies set up a similar viewing lab to run the softcopy ruler method. The results show that the three sets of data are in reasonable agreement with each other, with the differences within the range expected from observer variability. Compared to previous implementations of the quality ruler, the slider-based user interface allows approximately 2x faster assessments with 21.6% better precision.
Two-dimensional grid-free compressive beamforming.
Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli
2017-08-01
Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes the DOAs of sources to lie on a grid, and its performance degrades due to basis mismatch when this assumption is not satisfied. To overcome this limitation for measurements with planar microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum-based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite program is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on the alternating direction method of multipliers is presented to solve the positive semidefinite program. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.
Design of Composite Structures Using Knowledge-Based and Case Based Reasoning
NASA Technical Reports Server (NTRS)
Lambright, Jonathan Paul
1996-01-01
A method of using knowledge-based and case-based reasoning to assist designers during conceptual design tasks of composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process of composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues in the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A Knowledge-Based Reasoning system was developed using the CLIPS (C Language Integrated Production System) environment and a Case-Based Reasoning system was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge-based and case-based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well-defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs, and warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem-solving capability beyond the existence of limited well-defined rules. The findings indicated that the technique is most effective when used as a design aid and not as a tool to totally automate the composites design process. Other areas of application and implications for future research are discussed.
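As a loose illustration of how a case base of attribute-value pairs (in the spirit of the Design Characteristic State described above) can support retrieval of similar prior designs, the sketch below scores cases by simple attribute overlap; the cases, attributes and similarity measure are hypothetical, and this is not the CLIPS/MUSE implementation from the thesis.

```python
# Minimal sketch of attribute-value case retrieval (illustrative only).
cases = [
    {"fiber": "carbon", "layup": "quasi-isotropic", "cure": "autoclave", "outcome": "warpage near inserts"},
    {"fiber": "glass",  "layup": "unidirectional",  "cure": "oven",      "outcome": "acceptable"},
    {"fiber": "carbon", "layup": "unidirectional",  "cure": "autoclave", "outcome": "fiber wash at radii"},
]

def similarity(query, case):
    """Fraction of matching attribute-value pairs (simple overlap measure)."""
    keys = [k for k in query if k in case]
    return sum(query[k] == case[k] for k in keys) / len(keys)

query = {"fiber": "carbon", "layup": "quasi-isotropic", "cure": "oven"}
best = max(cases, key=lambda c: similarity(query, c))
print("Most similar prior design, with its recorded warning:", best["outcome"])
```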
Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.
Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin
2017-01-01
Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.
Elastic scattering of low-energy electrons by nitromethane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopes, A. R.; D'A Sanchez, S.; Bettega, M. H. F.
2011-06-15
In this work, we present integral, differential, and momentum transfer cross sections for elastic scattering of low-energy electrons by nitromethane, for energies up to 10 eV. We calculated the cross sections using the Schwinger multichannel method with pseudopotentials, in the static-exchange and in the static-exchange plus polarization approximations. The computed integral cross sections show a π* shape resonance at 0.70 eV in the static-exchange-polarization approximation, which is in reasonable agreement with experimental data. We also found a σ* shape resonance at 4.8 eV in the static-exchange-polarization approximation, which has not been previously characterized by the experiment. We also discuss how these resonances may play a role in the dissociation process of this molecule.
Voluntary Withdrawal: Why Don't They Return?
ERIC Educational Resources Information Center
Ironside, Ellen M.
Factors that influence voluntary withdrawal from the University of North Carolina at Chapel Hill are investigated. A survey based on a cohort of students admitted for the first time in fall 1977 was conducted with a response rate of approximately 50 percent. Major and minor reasons for not returning to the university are tabulated for males and…
Multimodal far-field acoustic radiation pattern: An approximate equation
NASA Technical Reports Server (NTRS)
Rice, E. J.
1977-01-01
The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
Trajectories for High Specific Impulse High Specific Power Deep Space Exploration
NASA Technical Reports Server (NTRS)
Polsgrove, T.; Adams, R. B.; Brady, Hugh J. (Technical Monitor)
2002-01-01
Preliminary results are presented for two methods to approximate the mission performance of high specific impulse high specific power vehicles. The first method is based on an analytical approximation derived by Williams and Shepherd and can be used to approximate mission performance to outer planets and interstellar space. The second method is based on a parametric analysis of trajectories created using the well known trajectory optimization code, VARITOP. This parametric analysis allows the reader to approximate payload ratios and optimal power requirements for both one-way and round-trip missions. While this second method only addresses missions to and from Jupiter, future work will encompass all of the outer planet destinations and some interstellar precursor missions.
A Gaussian-based rank approximation for subspace clustering
NASA Astrophysics Data System (ADS)
Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping
2018-04-01
Low-rank representation (LRR) has been shown successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear norm-based minimization problems. Beyond the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation because it simply adds all singular values of a matrix together, so large singular values may dominate the weight. This results in a far from satisfactory rank approximation and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has desirable properties that make it a better rank approximation than the nuclear norm. A low-rank model is then proposed based on the new rank approximation, with application to motion segmentation. Experimental results show significant improvements and verify the effectiveness of our method.
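The contrast between the nuclear norm and a Gaussian-type nonconvex surrogate can be sketched as below; the specific functional form 1 - exp(-sigma^2 / (2*gamma^2)) is an assumed illustrative choice and may differ from the paper's exact definition, and the test matrix and gamma value are arbitrary.

```python
import numpy as np

def nuclear_norm(X):
    """Convex rank surrogate: sum of singular values (large values dominate)."""
    return np.linalg.svd(X, compute_uv=False).sum()

def gaussian_rank(X, gamma=1.0):
    """Illustrative Gaussian-type nonconvex surrogate (assumed form): each
    singular value contributes at most 1, so large values cannot dominate."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(1.0 - np.exp(-s**2 / (2.0 * gamma**2)))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))  # true rank 3
print("rank:              ", np.linalg.matrix_rank(X))
print("nuclear norm:      ", nuclear_norm(X))
print("Gaussian surrogate:", gaussian_rank(X, gamma=1.0))
```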
Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory
NASA Astrophysics Data System (ADS)
Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.
2017-03-01
We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground-state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way the error originating from the approximations in the theory is minimised. This conference proceeding reports our recent progress on this key issue: we obtained a simple analytical functional form for the one-electron renormalisation factors and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
NASA Astrophysics Data System (ADS)
Zolotarev, Pavel; Eremin, Roman
2018-04-01
Modification of existing solid electrolyte and cathode materials is a topic of interest for theoreticians and experimentalists. In particular, it requires elucidation of the influence of dopants on the characteristics of the materials under study. Because of the high complexity of the configurational space of doped/deintercalated systems, application of computer modeling approaches is hindered, despite significant advances in computational facilities in recent decades. In this study, we propose a scheme that reduces the set of structures of a modeled configurational space for subsequent study by means of time-consuming quantum chemistry methods. Application of the proposed approach is exemplified through the study of the configurational space of the commercial LiNi0.8Co0.15Al0.05O2 (NCA) cathode material approximant.
NASA Astrophysics Data System (ADS)
Develaki, Maria
2017-11-01
Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.
Brain Imaging, Forward Inference, and Theories of Reasoning
Heit, Evan
2015-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926
Brain imaging, forward inference, and theories of reasoning.
Heit, Evan
2014-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities.
Aben, Ilse; Tanzi, Cristina P; Hartmann, Wouter; Stam, Daphne M; Stammes, Piet
2003-06-20
A method is presented for in-flight validation of space-based polarization measurements based on approximation of the direction of polarization of scattered sunlight by the Rayleigh single-scattering value. This approximation is verified by simulations of radiative transfer calculations for various atmospheric conditions. The simulations show locations along an orbit where the scattering geometries are such that the intensities of the parallel and orthogonal polarization components of the light are equal, regardless of the observed atmosphere and surface. The method can be applied to any space-based instrument that measures the polarization of reflected solar light. We successfully applied the method to validate the Global Ozone Monitoring Experiment (GOME) polarization measurements. The error in the GOME's three broadband polarization measurements appears to be approximately 1%.
Pos, Edwin; Guevara Andino, Juan Ernesto; Sabatier, Daniel; Molino, Jean-François; Pitman, Nigel; Mogollón, Hugo; Neill, David; Cerón, Carlos; Rivas-Torres, Gonzalo; Di Fiore, Anthony; Thomas, Raquel; Tirado, Milton; Young, Kenneth R; Wang, Ophelia; Sierra, Rodrigo; García-Villacorta, Roosevelt; Zagt, Roderick; Palacios Cuenca, Walter; Aulestia, Milton; Ter Steege, Hans
2017-06-01
With many sophisticated methods available for estimating migration, ecologists face the difficult decision of choosing one for their specific line of work. Here we test and compare several methods, performing sanity and robustness tests, applying them to large-scale data and discussing the results and interpretation. Five methods were selected and compared for their ability to estimate migration from spatially implicit and semi-explicit simulations based on three large-scale field datasets from South America (Guyana, Suriname, French Guiana and Ecuador). Space was incorporated semi-explicitly by a discrete probability mass function for local recruitment, migration from adjacent plots or migration from a metacommunity. Most methods were able to accurately estimate migration from spatially implicit simulations. For spatially semi-explicit simulations, the estimate was shown to be the additive effect of migration from adjacent plots and from the metacommunity. It was only accurate when migration from the metacommunity outweighed that from adjacent plots; discriminating between the two, however, proved to be impossible. We show that migration should be considered more an approximation of the resemblance between communities and the summed regional species pool. Application of migration estimates to simulate field datasets did show reasonably good fits and indicated consistent differences between sets in comparison with earlier studies. We conclude that estimates of migration using these methods are more an approximation of the homogenization among local communities over time than a direct measurement of migration and hence have a direct relationship with beta diversity. As beta diversity is the result of many (non-)neutral processes, we have to admit that migration as estimated in a spatially explicit world encompasses not only direct migration but is an ecological aggregate of these processes. The parameter m of neutral models then appears more as an emerging property revealed by neutral theory than as an effective mechanistic parameter, and spatially implicit models should be rejected as an approximation of forest dynamics.
NASA Astrophysics Data System (ADS)
Chen, Liping; Zheng, Renhui; Shi, Qiang; Yan, YiJing
2010-01-01
We extend our previous study of absorption line shapes of molecular aggregates using the Liouville space hierarchical equations of motion (HEOM) method [L. P. Chen, R. H. Zheng, Q. Shi, and Y. J. Yan, J. Chem. Phys. 131, 094502 (2009)] to calculate third order optical response functions and two-dimensional electronic spectra of model dimers. As in our previous work, we have focused on the applicability of several approximate methods related to the HEOM method. We show that while the second order perturbative quantum master equations are generally inaccurate in describing the peak shapes and solvation dynamics, they can give reasonable peak amplitude evolution even in the intermediate coupling regime. The stochastic Liouville equation results in good peak shapes, but does not properly describe the excited state dynamics due to the lack of detailed balance. A modified version of the high temperature approximation to the HEOM gives the best agreement with the exact result.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
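A minimal sketch of the described procedure follows: fit a sum of exponentials whose exponents form a geometric sequence to an algebraic function by linear least squares. The target function, number of terms, base exponent and ratio are illustrative assumptions rather than values from the report (which also adjusts the exponent multiplier by least squares).

```python
import numpy as np

def fit_exponential_sum(f, x, n_terms=8, base=0.5, ratio=2.0):
    """Fit f(x) ~ sum_k c_k * exp(-a_k * x) with geometrically spaced a_k."""
    exponents = base * ratio ** np.arange(n_terms)       # geometric spacing
    A = np.exp(-np.outer(x, exponents))                   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, f(x), rcond=None)     # least-squares fit
    return exponents, coeffs

f = lambda x: 1.0 / np.sqrt(1.0 + x**2)                   # algebraic test function
x = np.linspace(0.0, 10.0, 400)
exponents, coeffs = fit_exponential_sum(f, x)
approx = np.exp(-np.outer(x, exponents)) @ coeffs
print("max absolute error:", np.max(np.abs(approx - f(x))))
```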
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-07-01
Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
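For concreteness, here is a minimal sketch of a two-study random-effects meta-analysis with the Hartung-Knapp-Sidik-Jonkman variance and a t-based interval; the effect estimates and variances are hypothetical, and the DerSimonian-Laird heterogeneity estimator is used for simplicity (the paper considers several variants as well as Bayesian priors).

```python
import numpy as np
from scipy import stats

# Hypothetical effect estimates (e.g., log odds ratios) and variances from two studies.
y = np.array([0.45, 1.10])
v = np.array([0.08, 0.12])
k = len(y)

# DerSimonian-Laird estimate of the between-trial heterogeneity tau^2.
w_fixed = 1.0 / v
mu_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - mu_fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)))

# Random-effects pooled estimate with the HKSJ variance and a t(k-1) quantile.
w = 1.0 / (v + tau2)
mu = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - mu) ** 2) / (k - 1)
se_hksj = np.sqrt(q / np.sum(w))
t_crit = stats.t.ppf(0.975, df=k - 1)
print("pooled effect:", mu, "95% CI:", (mu - t_crit * se_hksj, mu + t_crit * se_hksj))
```

With only two studies the t quantile has a single degree of freedom, which is exactly why the HKSJ-type intervals in this setting tend to be very long, as noted above.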
Sound scattering by several zooplankton groups. II. Scattering models.
Stanton, T K; Chu, D; Wiebe, P H
1998-01-01
Mathematical scattering models are derived and compared with data from zooplankton from several gross anatomical groups--fluidlike, elastic shelled, and gas bearing. The models are based upon the acoustically inferred boundary conditions determined from laboratory backscattering data presented in part I of this series [Stanton et al., J. Acoust. Soc. Am. 103, 225-235 (1998)]. The models use a combination of ray theory, modal-series solution, and distorted wave Born approximation (DWBA). The formulations, which are inherently approximate, are designed to include only the dominant scattering mechanisms as determined from the experiments. The models for the fluidlike animals (euphausiids in this case) ranged from the simplest case involving two rays, which could qualitatively describe the structure of target strength versus frequency for single pings, to the most complex case involving a rough inhomogeneous asymmetrically tapered bent cylinder using the DWBA-based formulation which could predict echo levels over all angles of incidence (including the difficult region of end-on incidence). The model for the elastic shelled body (gastropods in this case) involved development of an analytical model which takes into account irregularities and discontinuities of the shell. The model for gas-bearing animals (siphonophores) is a hybrid model which is composed of the summation of the exact solution to the gas sphere and the approximate DWBA-based formulation for arbitrarily shaped fluidlike bodies. There is also a simplified ray-based model for the siphonophore. The models are applied to data involving single pings, ping-to-ping variability, and echoes averaged over many pings. There is reasonable qualitative agreement between the predictions and single ping data, and reasonable quantitative agreement between the predictions and variability and averages of echo data.
Merisier, Sophia; Larue, Caroline; Boyer, Louise
2018-06-01
Problem-based learning is an educational method promoting clinical reasoning that has been implemented in many fields of health education. Questioning is a learning strategy often employed in problem-based learning sessions. The aim was to explore what is known about the influence of questioning on the promotion of clinical reasoning among students in health care education, specifically in the field of nursing and using the educational method of problem-based learning. A scoping review following Arksey and O'Malley's five stages was conducted. The CINAHL, EMBASE, ERIC, Medline, and PubMed databases were searched for articles published between the years of 2000 and 2017. Each article was summarized and analyzed using a data extraction sheet in relation to its purpose, population group, setting, methods, and results. A descriptive explication of the studies, based on an inductive analysis of their findings, was made to address the aim of the review. Nineteen studies were included in the analysis. The studies explored the influence of questioning on critical thinking rather than on clinical reasoning. The nature of the questions asked and the effect of higher-order questions on critical thinking were the most commonly occurring themes. Few studies addressed the use of questioning in problem-based learning. More empirical evidence is needed to gain a better understanding of the benefit of questioning in problem-based learning to promote students' clinical reasoning. Copyright © 2018 Elsevier Ltd. All rights reserved.
DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik
2015-11-01
This study was designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and 1 year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal actor-partner interdependence model) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. © 2015 The British Psychological Society.
DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik
2015-01-01
This study is designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and one year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal Actor-Partner Interdependence Models) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. PMID:26402901
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
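The general idea, evaluating the error on only a subset of rays inside a conjugate-gradient-style least-squares iteration, might look like the following sketch; the synthetic problem data, subset size and Fletcher-Reeves update are illustrative assumptions and this is not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 50))       # one row per "ray"
x_true = rng.standard_normal(50)
b = A @ x_true

x = np.zeros(50)
d = None
g_prev = None
for it in range(100):
    rows = rng.choice(A.shape[0], size=200, replace=False)   # subset of rays
    As, bs = A[rows], b[rows]
    g = As.T @ (As @ x - bs)                                  # approximate gradient/error
    if np.linalg.norm(g) < 1e-10:
        break
    if d is None:
        d = -g
    else:
        beta = (g @ g) / (g_prev @ g_prev)                    # Fletcher-Reeves coefficient
        d = -g + beta * d
    Ad = As @ d
    alpha = -(g @ d) / (Ad @ Ad)                              # exact step on the subset quadratic
    x = x + alpha * d
    g_prev = g

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```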
More on approximations of Poisson probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kao, C
1980-05-01
Calculation of Poisson probabilities frequently involves calculating high factorials, which becomes tedious and time-consuming with regular calculators. The usual way to overcome this difficulty has been to find approximations by making use of the table of the standard normal distribution. A new transformation proposed by Kao in 1978 appears to perform better for this purpose than traditional transformations. In the present paper several approximation methods are stated and compared numerically, including an approximation method that utilizes a modified version of Kao's transformation. An approximation based on a power transformation was found to outperform those based on the square-root type transformations as proposed in literature. The traditional Wilson-Hilferty approximation and Makabe-Morimura approximation are extremely poor compared with this approximation. 4 tables. (RWR)
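As an illustration of the kind of numerical comparison described (not reproducing Kao's transformation itself), the sketch below compares the exact Poisson distribution function with a continuity-corrected normal approximation and with the Wilson-Hilferty approximation applied through the chi-square identity; the values of lambda and k are arbitrary examples.

```python
import numpy as np
from scipy import stats

def exact_cdf(k, lam):
    return stats.poisson.cdf(k, lam)

def normal_cdf_cc(k, lam):
    """Normal approximation with continuity correction."""
    return stats.norm.cdf((k + 0.5 - lam) / np.sqrt(lam))

def wilson_hilferty_cdf(k, lam):
    """Uses P(X <= k) = P(chi2_{2(k+1)} > 2*lam) with the Wilson-Hilferty
    cube-root normal approximation to the chi-square distribution."""
    nu = 2.0 * (k + 1)
    z = ((2.0 * lam / nu) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * nu))) / np.sqrt(2.0 / (9.0 * nu))
    return 1.0 - stats.norm.cdf(z)

lam, k = 6.5, 9
print("exact          :", exact_cdf(k, lam))
print("normal + c.c.  :", normal_cdf_cc(k, lam))
print("Wilson-Hilferty:", wilson_hilferty_cdf(k, lam))
```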
ERIC Educational Resources Information Center
Sarsani, Mahender Reddy
2008-01-01
Reasoning and learning are closely related, both being methods of solving problems; learning usually results from the process of reasoning. All inventions, discoveries, art, literature and advances in culture and civilization are based on the thinking, reasoning and problem-solving capacity of human beings. Sound reasoning leads to better…
Reasons for low influenza vaccination coverage – a cross-sectional survey in Poland
Kardas, Przemyslaw; Zasowska, Anna; Dec, Joanna; Stachurska, Magdalena
2011-01-01
Aim To assess the reasons for low influenza vaccination coverage in Poland, including knowledge of influenza and attitudes toward influenza vaccination. Methods This was a cross-sectional, anonymous, self-administered survey of primary care patients in Lodzkie voivodship (central Poland). The study participants were adults who visited their primary care physicians for various reasons from January 1 to April 30, 2007. Results Six hundred and forty participants completed the survey. In the 12 months before the study, 20.8% of participants had received influenza vaccination. The most common reasons listed by those who had not been vaccinated were good health (27.6%), lack of trust in vaccination effectiveness (16.8%), and the cost of vaccination (9.7%). The most common source of information about influenza vaccination was primary care physicians (46.6%). Despite reasonably good knowledge of influenza, approximately 20% of participants could not point out any differences between influenza and other viral respiratory tract infections. Conclusions The main reasons for low influenza vaccination coverage in Poland were patients’ misconceptions and the cost of vaccination. Therefore, free-of-charge vaccination and more effective informational campaigns are needed, with special focus on high-risk groups. PMID:21495194
A three dimensional point cloud registration method based on rotation matrix eigenvalue
NASA Astrophysics Data System (ADS)
Wang, Chao; Zhou, Xiang; Fei, Zixuan; Gao, Xiaofei; Jin, Rui
2017-09-01
In traditional optical three-dimensional measurement, an object usually has to be measured from multiple angles because of occlusion, and point cloud registration methods are then used to obtain the complete three-dimensional shape of the object. Point cloud registration based on a turntable essentially requires calculating the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. In the traditional method, this transformation matrix is calculated by fitting the rotation center and the rotation axis normal of the turntable, which is limited by the measurement field of view: the exact feature points available for fitting the rotation center and the rotation axis normal are typically distributed within an arc of less than 120 degrees, resulting in low fitting accuracy. In this paper, we propose a better method, based on the principle that the eigenvalues of the rotation matrix are invariant in the turntable coordinate system, together with the coordinate transformation matrix of corresponding points. First, we rotate the calibration plate with the turntable through controlled angles and calibrate the coordinate transformation matrix of the corresponding points using the least-squares method. Then we use eigen-decomposition to calculate the coordinate transformation matrix between the camera coordinate system and the turntable coordinate system. Compared with the traditional method, the proposed approach has higher accuracy and better robustness, and it is not affected by the camera field of view. With this method, the coincidence error of the corresponding points on the calibration plate after registration is less than 0.1 mm.
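The eigenvalue property underlying the method can be illustrated with a small sketch: a relative rotation between two turntable positions, expressed in the camera frame, has an eigenvalue of 1 whose eigenvector is the turntable axis. The axis and angle below are synthetic values, not calibration data from the paper.

```python
import numpy as np

def rotation_about(axis, angle):
    """Rodrigues formula for a rotation matrix about a given axis."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

true_axis = np.array([0.1, 0.95, 0.3])
R_rel = rotation_about(true_axis, np.deg2rad(30.0))   # camera-frame relative rotation

# The eigenvector associated with the eigenvalue closest to 1 is the rotation axis
# (up to sign), which is the invariant exploited by the registration method.
eigvals, eigvecs = np.linalg.eig(R_rel)
axis = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
axis /= np.linalg.norm(axis)
print("recovered axis:", axis)
print("true axis     :", true_axis / np.linalg.norm(true_axis))
```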
Reasons for electronic cigarette use beyond cigarette smoking cessation: A concept mapping approach.
Soule, Eric K; Rosas, Scott R; Nasim, Aashir
2016-05-01
Electronic cigarettes (ECIGs) continue to grow in popularity, however, limited research has examined reasons for ECIG use. This study used an integrated, mixed-method participatory research approach called concept mapping (CM) to characterize and describe adults' reasons for using ECIGs. A total of 108 adults completed a multi-module online CM study that consisted of brainstorming statements about their reasons for ECIG use, sorting each statement into conceptually similar categories, and then rating each statement based on whether it represented a reason why they have used an ECIG in the past month. Participants brainstormed a total of 125 unique statements related to their reasons for ECIG use. Multivariate analyses generated a map revealing 11, interrelated components or domains that characterized their reasons for use. Importantly, reasons related to Cessation Methods, Perceived Health Benefits, Private Regard, Convenience and Conscientiousness were rated significantly higher than other categories/types of reasons related to ECIG use (p<.05). There also were significant model differences in participants' endorsement of reasons based on their demography and ECIG behaviors. This study shows that ECIG users are motivated to use ECIGs for many reasons. ECIG regulations should address these reasons for ECIG use in addition to smoking cessation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Overstory cohort survival in an Appalachian hardwood deferment cutting: 35-year results
John P. Brown; Melissa A. Thomas-Van Gundy; Thomas M. Schuler
2018-01-01
Deferment cutting is a two-aged regeneration method in which the majority of the stand is harvested and a dispersed component of overstory trees (approximately 15-20% of the basal area) is retained for at least one-half rotation and up to one full rotation for reasons other than regeneration. Careful consideration of residual trees, in both characteristics and harvesting,...
NASA Astrophysics Data System (ADS)
Vámos, Tibor
The gist of the paper is the fundamentally uncertain nature of all kinds of uncertainties and, consequently, a critical epistemic review of historical and recent approaches, computational methods and algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic views, medieval nominalism and the influential pioneering metaphors of ancient India and Persia, to the birth of modern mathematical disciplinary reasoning. Discussing the models of uncertainty, e.g. the statistical, other physical and psychological background, we reach a pragmatic, model-related estimation perspective, a balanced application orientation for different problem areas. Data mining, game theories and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.
Measurement of Antenna Bore-Sight Gain
NASA Technical Reports Server (NTRS)
Fortinberry, Jarrod; Shumpert, Thomas
2016-01-01
The absolute or free-field gain of a simple antenna can be approximated using standard antenna theory formulae or, for a more accurate prediction, numerical methods may be employed to solve for antenna parameters including gain. Both of these methods result in relatively reasonable estimates, but in practice antenna gain is usually verified and documented via measurements and calibration. In this paper, a relatively simple and low-cost, yet effective means of determining the bore-sight free-field gain of a VHF/UHF antenna is proposed using the Brewster angle relationship.
Analysis and control of hourglass instabilities in underintegrated linear and nonlinear elasticity
NASA Technical Reports Server (NTRS)
Jacquotte, Olivier P.; Oden, J. Tinsley
1994-01-01
Methods are described to identify and correct a bad finite element approximation of the governing operator obtained when under-integration is used in numerical code for several model problems: the Poisson problem, the linear elasticity problem, and for problems in the nonlinear theory of elasticity. For each of these problems, the reason for the occurrence of instabilities is given, a way to control or eliminate them is presented, and theorems of existence, uniqueness, and convergence for the given methods are established. Finally, numerical results are included which illustrate the theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biondo, Elliott D.; Wilson, Paul P. H.
In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and 9 ± 5 × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.
Biondo, Elliott D.; Wilson, Paul P. H.
2017-05-08
In fusion energy systems (FES) neutrons born from burning plasma activate system components. The photon dose rate after shutdown from resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and 9 ± 5 × 10^4 relative to analog. As a result, this work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.
Knowledge and intelligent computing system in medicine.
Pandey, Babita; Mishra, R B
2009-03-01
Knowledge-based systems (KBS) and intelligent computing systems have been used in medical planning, diagnosis and treatment. KBS encompasses rule-based reasoning (RBR), case-based reasoning (CBR) and model-based reasoning (MBR), whereas intelligent computing methods (ICM) encompass the genetic algorithm (GA), artificial neural networks (ANN), fuzzy logic (FL) and others. Combinations of methods within KBS include CBR-RBR, CBR-MBR and RBR-CBR-MBR, and combinations within ICM include ANN-GA, fuzzy-ANN, fuzzy-GA and fuzzy-ANN-GA. Combinations spanning KBS and ICM include RBR-ANN, CBR-ANN, RBR-CBR-ANN, fuzzy-RBR, fuzzy-CBR and fuzzy-CBR-ANN. In this paper, we have made a study of the different singular and combined methods (185 in number) applied to the medical domain from the mid 1970s to 2008. The study is presented in tabular form, showing the methods and their salient features, processes and application areas in the medical domain (diagnosis, treatment and planning). It is observed that most of the methods are used in medical diagnosis, very few are used for planning, and a moderate number are used in treatment. The study and its presentation in this context would be helpful for novice researchers in the area of medical expert systems.
NASA Astrophysics Data System (ADS)
Tan, Jun; Song, Peng; Li, Jinshan; Wang, Lei; Zhong, Mengxuan; Zhang, Xiaobo
2017-06-01
The surface-related multiple elimination (SRME) method is based on the feedback formulation and has become one of the most widely used multiple suppression methods. However, some differences are apparent between the predicted multiples and those in the source seismic records, which may leave conventional adaptive multiple subtraction methods barely able to suppress multiples effectively in actual production. This paper introduces a combined adaptive multiple attenuation method based on an optimized event tracing technique and extended Wiener filtering. The method first uses the multiple records predicted by SRME to generate a multiple velocity spectrum, then separates the original record into an approximate primary record and an approximate multiple record by applying the optimized event tracing method and short-time-window FK filtering. After applying the extended Wiener filtering method, residual multiples in the approximate primary record can be eliminated and the damaged primary can be restored from the approximate multiple record. This method combines the advantages of multiple elimination based on the optimized event tracing method and the extended Wiener filtering technique. It is well suited to suppressing typical hyperbolic and other types of multiples, with the advantage of minimizing damage to the primary. Synthetic and field data tests show that this method produces better multiple elimination results than the traditional multi-channel Wiener filter method and is more suitable for multiple elimination in complicated geological areas.
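The basic building block behind Wiener-filter adaptive subtraction can be sketched as follows: estimate a short least-squares matching filter between the SRME-predicted multiple and the recorded trace, then subtract the matched prediction. The traces, filter length and prediction mismatch below are synthetic assumptions, and this single-channel sketch is not the paper's extended formulation.

```python
import numpy as np

def matching_filter_subtract(data, predicted, filt_len=11):
    """Least-squares (Wiener) adaptive subtraction of a predicted multiple."""
    n = len(data)
    M = np.zeros((n, filt_len))          # convolution matrix of the prediction
    for j in range(filt_len):
        M[j:, j] = predicted[:n - j]
    f, *_ = np.linalg.lstsq(M, data, rcond=None)   # matching filter
    return data - M @ f                            # estimated primaries

rng = np.random.default_rng(0)
t = np.arange(500)
primary = np.exp(-0.5 * ((t - 120) / 6.0) ** 2)
multiple = 0.6 * np.exp(-0.5 * ((t - 300) / 6.0) ** 2)
data = primary + multiple + 0.01 * rng.standard_normal(500)
predicted = np.roll(multiple, 3) * 1.4             # imperfect SRME prediction
est_primary = matching_filter_subtract(data, predicted)
print("residual multiple energy:", np.sum(est_primary[280:320] ** 2))
```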
A Case-Based Reasoning Method with Rank Aggregation
NASA Astrophysics Data System (ADS)
Sun, Jinhua; Du, Jiao; Hu, Jian
2018-03-01
In order to improve the accuracy of case-based reasoning (CBR), this paper presents a new CBR framework based on the principle of rank aggregation. First, ranking methods are defined in each attribute subspace of the cases, so that an ordering relation between cases is obtained on each attribute and collected into a ranking matrix. Second, the similar-case retrieval from the ranking matrix is transformed into a rank aggregation optimization problem, solved with the Kemeny optimal aggregation. On this basis, a rank aggregation case-based reasoning algorithm, named RA-CBR, is designed. Experimental results on UCI data sets show that the case retrieval accuracy of the RA-CBR algorithm is higher than that of Euclidean distance CBR and Mahalanobis distance CBR, so we can conclude that the RA-CBR method can increase the performance and efficiency of CBR.
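A toy sketch of the retrieval idea: rank the candidate cases by closeness to the query on each attribute, then pick the ordering that minimizes the total Kendall-tau distance to the per-attribute rankings (a brute-force Kemeny aggregation, feasible only for a handful of cases). The case base and query values are hypothetical, and this is not the paper's implementation.

```python
import numpy as np
from itertools import permutations

cases = np.array([[5.1, 140.0, 0.2],
                  [4.3, 180.0, 0.9],
                  [5.0, 150.0, 0.4],
                  [6.2, 120.0, 0.3]])
query = np.array([5.0, 145.0, 0.25])

# One ranking per attribute: case indices ordered by |case value - query value|.
rankings = [list(map(int, np.argsort(np.abs(cases[:, j] - query[j]))))
            for j in range(cases.shape[1])]

def kendall_tau(order, ranking):
    """Number of case pairs ordered differently in `order` and `ranking`."""
    pos = {c: i for i, c in enumerate(ranking)}
    return sum(1 for a in order for b in order
               if order.index(a) < order.index(b) and pos[a] > pos[b])

best = min(permutations(range(len(cases))),
           key=lambda order: sum(kendall_tau(list(order), r) for r in rankings))
print("Kemeny-optimal case order (most similar first):", best)
```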
NASA Astrophysics Data System (ADS)
Hall, D. J.; Skottfelt, J.; Soman, M. R.; Bush, N.; Holland, A.
2017-12-01
Charge-Coupled Devices (CCDs) have been the detector of choice for imaging and spectroscopy in space missions for several decades, such as those being used for the Euclid VIS instrument and baselined for the SMILE SXI. Despite the many positive properties of CCDs, such as the high quantum efficiency and low noise, when used in a space environment the detectors suffer damage from the often-harsh radiation environment. High energy particles can create defects in the silicon lattice which act to trap the signal electrons being transferred through the device, reducing the signal measured and effectively increasing the noise. We can reduce the impact of radiation on the devices through four key methods: increased radiation shielding, device design considerations, optimisation of operating conditions, and image correction. Here, we concentrate on device design considerations, investigating the impact of narrowing the charge-transfer channel in the device with the aim of minimising the impact of traps during readout. Previous studies for the Euclid VIS instrument considered two devices, the e2v CCD204 and CCD273, the serial register of the former having a 50 μm channel and the latter having a 20 μm channel. The reduction in channel width was previously modelled to give an approximate 1.6× reduction in charge storage volume, verified experimentally to have a reduction in charge transfer inefficiency of 1.7×. The methods used to simulate the reduction approximated the charge cloud to a sharp-edged volume within which the probability of capture by traps was 100%. For high signals and slow readout speeds, this is a reasonable approximation. However, for low signals and higher readout speeds, the approximation falls short. Here we discuss a new method of simulating and calculating charge storage variations with device design changes, considering the absolute probability of capture across the pixel, bringing validity to all signal sizes and readout speeds. Using this method, we can optimise the device design to suffer minimum impact from radiation damage effects, here using detector development for the SMILE mission to demonstrate the process.
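One way to picture the difference between the sharp-edged charge-cloud treatment and an absolute capture probability is sketched below, using a Shockley-Read-Hall-style capture time constant. The cross-section, thermal velocity, dwell time, cloud volume and full-well values are illustrative assumptions, not SMILE or Euclid device parameters, and the model is far simpler than the simulation described in the paper.

```python
import numpy as np

sigma_c = 5e-15      # trap capture cross-section [cm^2] (assumed)
v_th = 1e7           # electron thermal velocity [cm/s] (assumed)
dwell = 1e-6         # time the charge packet spends over the trap [s] (assumed)

def capture_prob_sharp(signal_e, trap_fraction=0.3, full_well_e=2.0e5):
    """Sharp-edged cloud: capture is certain once the cloud (assumed to grow
    in proportion to signal) reaches the trap location, and zero before."""
    return 1.0 if min(1.0, signal_e / full_well_e) >= trap_fraction else 0.0

def capture_prob_absolute(signal_e, cloud_volume_cm3=1e-9):
    """Absolute probability: P = 1 - exp(-dwell/tau_c) with tau_c = 1/(sigma*v_th*n)."""
    n = signal_e / cloud_volume_cm3                  # electron density in the cloud
    tau_c = 1.0 / (sigma_c * v_th * n)
    return 1.0 - np.exp(-dwell / tau_c)

for signal in (100, 1_000, 10_000, 100_000):
    print(signal, capture_prob_sharp(signal), round(capture_prob_absolute(signal), 4))
```

The absolute-probability curve varies smoothly with signal size, which is the behaviour the sharp-edged approximation misses at low signals and fast readout.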
Liu, Rentao; Jiang, Jiping; Guo, Liang; Shi, Bin; Liu, Jie; Du, Zhaolin; Wang, Peng
2016-06-01
In-depth filtering of emergency disposal technology (EDT) and materials has been required in the process of environmental pollution emergency disposal. However, an urgent problem that must be solved is how to quickly and accurately select the most appropriate materials for treating a pollution event from the existing spill control and clean-up materials (SCCM). To meet this need, the following objectives were addressed in this study. First, a material base and a case base for environmental pollution emergency disposal were established to build a foundation and provide material for SCCM screening. Second, the multiple case-based reasoning method with a difference-driven revision strategy (DDRS-MCBR) was applied to improve the original dual case-based reasoning model, and screening and decision-making for SCCM were performed using this model. Third, an actual environmental pollution accident from 2012 was used as a case study to verify the material base, case base, and screening model. The results demonstrated that the DDRS-MCBR method was fast, efficient, and practical. The DDRS-MCBR method changes the passive situation in which the choice of SCCM screening depends only on the subjective experience of the decision maker and offers a new approach to screening SCCM.
Reasoning with Vectors: A Continuous Model for Fast Robust Inference.
Widdows, Dominic; Cohen, Trevor
2015-10-01
This paper describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behavior of more traditional deduction engines such as theorem provers. The paper explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this paper include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type-inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this paper are all publicly released and freely available in the Semantic Vectors open-source software package.
Evaluating significance in linear mixed-effects models in R.
Luke, Steven G
2017-08-01
Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtained the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on the Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and reduced the equation. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters are taken as special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.
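For orientation, one common normalization of the (2 + 1)-dimensional KP equation and of the extended tanh ansatz reads as follows (the paper's exact coefficients and travelling-wave variable may differ):

\[
\left(u_t + 6\,u\,u_x + u_{xxx}\right)_x + 3\,\sigma^2 u_{yy} = 0,
\qquad
u = a_0 + \sum_{i=1}^{N}\left(a_i Y^i + b_i Y^{-i}\right),\quad
Y = \tanh\!\big(\mu\,(x + y - c\,t)\big),
\]

where balancing the highest-order derivative against the nonlinear term fixes N, and substituting the ansatz reduces the PDE to algebraic equations for the constants a_i, b_i, mu and c.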
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
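A small sketch of the general idea of an automated exponential approximation with geometrically spaced exponents fitted by linear least squares (the target function, term count, and spacing ratio below are placeholders, not the kernel function of the report):

    import numpy as np

    def fit_exponential_sum(x, f, n_terms=8, ratio=2.0, b0=0.1):
        # Fit f(x) ~ sum_k c_k * exp(-b0 * ratio**k * x); the exponents form a
        # geometric sequence, so only the coefficients c_k need to be solved for.
        exponents = b0 * ratio ** np.arange(n_terms)
        A = np.exp(-np.outer(x, exponents))          # design matrix, one column per term
        c, *_ = np.linalg.lstsq(A, f, rcond=None)
        return c, exponents

    x = np.linspace(0.0, 10.0, 200)
    f = 1.0 / np.sqrt(1.0 + x**2)                    # an algebraic-type function to approximate
    c, b = fit_exponential_sum(x, f)
    approx = np.exp(-np.outer(x, b)) @ c
    print(np.max(np.abs(approx - f)))                # maximum fitting error on the grid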
ERIC Educational Resources Information Center
Akerson, Valarie L.; Carter, Ingrid S.; Park Rogers, Meredith A.; Pongsanon, Khemmawadee
2018-01-01
In this mixed methods study, the researchers developed a video-based measure called a "Prediction Assessment" to determine preservice elementary teachers' abilities to predict students' scientific reasoning. The instrument is based on teachers' need to develop pedagogical content knowledge for teaching science. Developing a knowledge…
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) Diagnostic CT, where post-log data are well modeled by a normal distribution. (2) Low-dose CT, where the normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage. (3) An ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value at which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
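A toy simulation of why the normal (quadratic) approximation degrades in the photon-starved regime (incident flux, line integral, and the non-positivity correction below are illustrative choices, not the paper's calibrated values):

    import numpy as np

    rng = np.random.default_rng(1)
    incident = 50                                    # very low incident photon count per ray
    line_integral = 4.8                              # roughly 24 cm of water
    counts = rng.poisson(incident * np.exp(-line_integral), size=100_000)
    post_log = -np.log(np.maximum(counts, 0.5) / incident)   # crude non-positivity correction
    print(post_log.mean(), post_log.std())           # mean is biased away from 4.8 and the
                                                     # distribution is strongly non-Gaussian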
Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.
Estevis, Eduardo; Basso, Michael R; Combs, Dennis
2012-01-01
A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
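For reference, the simple-difference reliable change index is commonly computed as follows (one standard Jacobson-Truax-style formulation; the regression variant instead compares the observed retest score with a regression-predicted score):

\[
\mathrm{RCI} = \frac{X_2 - X_1}{S_{\mathrm{diff}}},
\qquad
S_{\mathrm{diff}} = \sqrt{2}\; s_1 \sqrt{1 - r_{xx}},
\]

where \(X_1\) and \(X_2\) are the baseline and retest scores, \(s_1\) is the baseline standard deviation, and \(r_{xx}\) is the test-retest reliability; \(|\mathrm{RCI}| > 1.96\) is usually taken to indicate reliable change.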
A screening tool for delineating subregions of steady recharge within groundwater models
Dickinson, Jesse; Ferré, T.P.A.; Bakker, Mark; Crompton, Becky
2014-01-01
We have developed a screening method for simplifying groundwater models by delineating areas within the domain that can be represented using steady-state groundwater recharge. The screening method is based on an analytical solution for the damping of sinusoidal infiltration variations in homogeneous soils in the vadose zone. The damping depth is defined as the depth at which the flux variation damps to 5% of the variation at the land surface. Groundwater recharge may be considered steady where the damping depth is above the depth of the water table. The analytical solution approximates the vadose zone diffusivity as constant, and we evaluated when this approximation is reasonable. We evaluated the analytical solution through comparison of the damping depth computed by the analytic solution with the damping depth simulated by a numerical model that allows variable diffusivity. This comparison showed that the screening method conservatively identifies areas of steady recharge and is more accurate when water content and diffusivity are nearly constant. Nomograms of the damping factor (the ratio of the flux amplitude at any depth to the amplitude at the land surface) and the damping depth were constructed for clay and sand for periodic variations between 1 and 365 d and flux means and amplitudes from nearly 0 to 1 × 10⁻³ m d⁻¹. We applied the screening tool to Central Valley, California, to identify areas of steady recharge. A MATLAB script was developed to compute the damping factor for any soil and any sinusoidal flux variation.
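A minimal sketch of what such a screening calculation might look like, assuming the classical exponential damping factor for a sinusoidal flux variation with constant diffusivity (the paper's analytical solution and nomograms may differ in detail; the parameter values are placeholders):

    import math

    def damping_depth(diffusivity_m2_per_d, period_d, threshold=0.05):
        # Depth at which the flux amplitude damps to `threshold` of its surface value,
        # assuming a damping factor of exp(-z * sqrt(pi / (D * P))).
        return -math.log(threshold) * math.sqrt(diffusivity_m2_per_d * period_d / math.pi)

    # Example: annual forcing (365 d) with a constant diffusivity of 1e-3 m^2/d
    print(damping_depth(1e-3, 365.0))   # about 1.0 m; deeper water tables could be
                                        # treated as receiving steady recharge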
On the unreasonable effectiveness of the post-Newtonian approximation in gravitational physics
Will, Clifford M.
2011-01-01
The post-Newtonian approximation is a method for solving Einstein’s field equations for physical systems in which motions are slow compared to the speed of light and where gravitational fields are weak. Yet it has proven to be remarkably effective in describing certain strong-field, fast-motion systems, including binary pulsars containing dense neutron stars and binary black hole systems inspiraling toward a final merger. The reasons for this effectiveness are largely unknown. When carried to high orders in the post-Newtonian sequence, predictions for the gravitational-wave signal from inspiraling compact binaries will play a key role in gravitational-wave detection by laser-interferometric observatories. PMID:21447714
NASA Technical Reports Server (NTRS)
Oran, W. A.; Reiss, D. A.; Berge, L. H.; Parker, H. W.
1979-01-01
The acoustic fields and levitation forces produced along the axis of a single-axis resonance system were measured. The system consisted of a St. Clair generator and a planar reflector. The levitation force was measured for bodies of various sizes and geometries (i.e., spheres, cylinders, and discs). The force was found to be roughly proportional to the volume of the body until the characteristic body radius reaches approximately 2/k (k = wave number). The acoustic pressures along the axis were modeled using Huygens principle and a method of imaging to approximate multiple reflections. The modeled pressures were found to be in reasonable agreement with those measured with a calibrated microphone.
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved using a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer Law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected so that the measurement signals are sensitive to wavelength and the ill-conditioning of the coefficient matrix of the linear system is reduced, which enhances the anti-interference ability of the retrieval. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distribution. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
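A schematic of the non-parametric retrieval step using SciPy's LSQR solver (the kernel here is a random placeholder rather than the ADA kernel, and the damping parameter is arbitrary):

    import numpy as np
    from scipy.sparse.linalg import lsqr

    # Hypothetical discretized forward model: the signal at each wavelength is a
    # weighted sum over size bins (kernel K), so measurements g ~ K @ f.
    rng = np.random.default_rng(0)
    n_wavelengths, n_bins = 20, 40
    K = rng.random((n_wavelengths, n_bins))
    f_true = np.exp(-0.5 * ((np.arange(n_bins) - 15) / 4.0) ** 2)   # smooth, log-normal-like ASD
    g = K @ f_true + 0.01 * rng.standard_normal(n_wavelengths)       # noisy measurements
    f_est = lsqr(K, g, damp=0.1)[0]                                  # damped LSQR solution
    print(np.linalg.norm(f_est - f_true) / np.linalg.norm(f_true))   # relative retrieval error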
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
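One way to express the quantity being estimated, assuming the underlying trait is approximately normal (the authors' post hoc procedure may differ in its details):

\[
P\!\left(|\bar{x} - \mu| \le k\,\sigma\right)
= P\!\left(|Z| \le k\sqrt{n}\right)
= 2\,\Phi\!\left(k\sqrt{n}\right) - 1,
\]

so, for example, with \(n = 5\) and \(k = 0.5\) the probability that the sample mean lies within half a standard deviation of the true mean is \(2\,\Phi(1.118) - 1 \approx 0.74\).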
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case base into several small groups and retrieving neighbors from the group corresponding to a target case. However, this approach generally produces less accurate predictive performance than conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to conventional CBR while incurring significantly less computational cost.
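The general cluster-then-retrieve idea can be sketched as follows (a generic illustration using scikit-learn, not the specific clustering-merging strategy of CM-CBR; cluster and neighbour counts are placeholders):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import NearestNeighbors

    def cluster_then_retrieve(case_base, target, n_clusters=5, k=3):
        # Cluster the case base once, then search for neighbours only inside the
        # cluster nearest to the target, reducing retrieval time on large case bases.
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(case_base)
        members = np.where(km.labels_ == km.predict(target.reshape(1, -1))[0])[0]
        nn = NearestNeighbors(n_neighbors=min(k, len(members))).fit(case_base[members])
        _, idx = nn.kneighbors(target.reshape(1, -1))
        return members[idx[0]]

    rng = np.random.default_rng(0)
    case_base = rng.random((500, 4))
    print(cluster_then_retrieve(case_base, target=case_base[0]))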
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.
Campos, Daniel G
2018-02-01
This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing-the theorem that founded, historically, the field of mathematical probability. It aims to contribute a perspective into the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique to make experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.
Barua, Shaibal; Begum, Shahina; Ahmed, Mobyen Uddin
2015-01-01
Machine learning algorithms play an important role in computer science research. Recent advancement in sensor data collection in the clinical sciences has led to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data are difficult and time consuming. Therefore, development of knowledge-based systems to support clinicians in decision-making is important. However, it is necessary to perform experimental work to compare the performance of different machine learning methods in order to select an appropriate method for specific characteristics of data sets. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks and support vector machines, to diagnose the stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability, respectively. By contrast, both the neural network and the support vector machine achieved less than 80% accuracy using both physiological signals.
Proportional Reasoning and the Visually Impaired
ERIC Educational Resources Information Center
Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia
2012-01-01
Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…
Low-order modeling of internal heat transfer in biomass particle pyrolysis
Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.
2016-05-11
We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement, if the appropriate equivalent spherical diameter and bulk thermal properties are used. Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory
Liu, C.; Liu, J.; Yao, Y. X.; ...
2017-01-16
Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way to minimise the error originated from the approximations in the theory. This conference proceeding reports our recent progress on this key issue, namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
Application of artificial intelligence principles to the analysis of "crazy" speech.
Garfield, D A; Rapp, C
1994-04-01
Artificial intelligence computer simulation methods can be used to investigate psychotic or "crazy" speech. Here, symbolic reasoning algorithms establish semantic networks that schematize speech. These semantic networks consist of two main structures: case frames and object taxonomies. Node-based reasoning rules apply to object taxonomies and pathway-based reasoning rules apply to case frames. Normal listeners may recognize speech as "crazy talk" based on violations of node- and pathway-based reasoning rules. In this article, three separate segments of schizophrenic speech illustrate violations of these rules. This artificial intelligence approach is compared and contrasted with other neurolinguistic approaches and is discussed as a conceptual link between neurobiological and psychodynamic understandings of psychopathology.
On the derivation of approximations to cellular automata models and the assumption of independence.
Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V
2014-07-01
Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes, however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model and the assumption of independence between the state of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
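As context for what such continuum approximations look like, the mean-field (independence) limit of a motility-plus-proliferation cellular automaton is often written as a reaction-diffusion equation (a standard form; the paper analyses how and when such limits break down rather than this particular equation):

\[
\frac{\partial C}{\partial t} = D\,\nabla^2 C + \lambda\,C\,(1 - C),
\]

where \(C(\mathbf{x},t)\) is the expected site occupancy and the logistic source term \(\lambda C(1-C)\) encodes the assumption that the occupancies of neighbouring sites are independent.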
The role of electron heat flux in guide-field magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hesse, Michael; Kuznetsova, Masha; Birn, Joachim
2004-12-01
A combination of analytical theory and particle-in-cell simulations are employed in order to investigate the electron dynamics near and at the site of guide field magnetic reconnection. A detailed analysis of the contributions to the reconnection electric field shows that both bulk inertia and pressure-based quasiviscous processes are important for the electrons. Analytic scaling demonstrates that conventional approximations for the electron pressure tensor behavior in the dissipation region fail, and that heat flux contributions need to be accounted for. Based on the evolution equation of the heat flux three tensor, which is derived in this paper, an approximate form of the relevant heat flux contributions to the pressure tensor is developed, which reproduces the numerical modeling result reasonably well. Based on this approximation, it is possible to develop a scaling of the electron current layer in the central dissipation region. It is shown that the pressure tensor contributions become important at the scale length defined by the electron Larmor radius in the guide magnetic field.
Meta-regression approximations to reduce publication selection bias.
Stanley, T D; Doucouliagos, Hristos
2014-03-01
Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
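In its simplest form, the PEESE correction is a meta-regression of the reported effects on their squared standard errors (a schematic statement of the estimator; weighting and implementation details follow the paper):

\[
\hat{\theta}_i = \beta_0 + \beta_1\,\mathrm{SE}_i^{\,2} + \varepsilon_i,
\]

where \(\hat{\theta}_i\) and \(\mathrm{SE}_i\) are the i-th reported estimate and its standard error, and the intercept \(\beta_0\) serves as the selection-corrected effect; the Egger (FAT-PET) regression instead uses \(\mathrm{SE}_i\) itself as the moderator.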
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system design without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
The application of hybrid artificial intelligence systems for forecasting
NASA Astrophysics Data System (ADS)
Lees, Brian; Corchado, Juan
1999-03-01
The results to date are presented from an ongoing investigation, in which the aim is to combine the strengths of different artificial intelligence methods into a single problem solving system. The premise underlying this research is that a system which embodies several cooperating problem solving methods will be capable of achieving better performance than if only a single method were employed. The work has so far concentrated on the combination of case-based reasoning and artificial neural networks. The relative merits of artificial neural networks and case-based reasoning problem solving paradigms, and their combination are discussed. The integration of these two AI problem solving methods in a hybrid systems architecture, such that the neural network provides support for learning from past experience in the case-based reasoning cycle, is then presented. The approach has been applied to the task of forecasting the variation of physical parameters of the ocean. Results obtained so far from tests carried out in the dynamic oceanic environment are presented.
Accelerating cross-validation with total variation and its application to super-resolution imaging
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki
2017-12-01
We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
Thermometric titration of acids in pyridine.
Vidal, R; Mukherjee, L M
1974-04-01
Thermometric titration of HClO(4), HI, HNO(3), HBr, picric acid, o-nitrobenzoic acid, 2,4- and 2,5-dinitrophenol, acetic acid and benzoic acid has been attempted in pyridine as solvent, using 1,3-diphenylguanidine as the base. Except in the cases of 2,5-dinitrophenol, acetic acid and benzoic acid, the results are, in general, reasonably satisfactory. The approximate molar heats of neutralization have been calculated.
Young's moduli of carbon materials investigated by various classical molecular dynamics schemes
NASA Astrophysics Data System (ADS)
Gayk, Florian; Ehrens, Julian; Heitmann, Tjark; Vorndamme, Patrick; Mrugalla, Andreas; Schnack, Jürgen
2018-05-01
For many applications classical carbon potentials together with classical molecular dynamics are employed to calculate structures and physical properties of such carbon-based materials where quantum mechanical methods fail either due to the excessive size, irregular structure or long-time dynamics. Although such potentials, as for instance implemented in LAMMPS, yield reasonably accurate bond lengths and angles for several carbon materials such as graphene, it is not clear how accurate they are in terms of mechanical properties such as for instance Young's moduli. We performed large-scale classical molecular dynamics investigations of three carbon-based materials using the various potentials implemented in LAMMPS as well as the EDIP potential of Marks. We show how the Young's moduli vary with classical potentials and compare to experimental results. Since classical descriptions of carbon are bound to be approximations it is not astonishing that different realizations yield differing results. One should therefore carefully check for which observables a certain potential is suited. Our aim is to contribute to such a clarification.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers
Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin
2017-01-01
Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513
Medicare Part D Claims Rejections for Nursing Home Residents, 2006 to 2010
Stevenson, David G.; Keohane, Laura M.; Mitchell, Susan L.; Zarowitz, Barbara J.; Huskamp, Haiden A.
2013-01-01
Objectives Much has been written about trends in Medicare Part D formulary design and consumers’ choice of plans, but little is known about the magnitude of claims rejections or their clinical and administrative implications. Our objective was to study the overall rate at which Part D claims are rejected, whether these rates differ across plans, drugs, and medication classes, and how these rejection rates and reasons have evolved over time. Study Design and Methods We performed descriptive analyses of data on paid and rejected Part D claims submitted by 1 large national long-term care pharmacy from 2006 to 2010. In each of the 5 study years, data included approximately 450,000 Medicare beneficiaries living in long-term care settings with approximately 4 million Part D drug claims. Claims rejection rates and reasons for rejection are tabulated for each study year at the plan, drug, and class levels. Results Nearly 1 in 6 drug claims was rejected during the first 5 years of the Medicare Part D program, and this rate has increased over time. Rejection rates and reasons for rejection varied substantially across drug products and Part D plans. Moreover, the reasons for denials evolved over our study period. Coverage has become less of a factor in claims rejections than it was initially and other formulary tools such as drug utilization review, quantity-related coverage limits, and prior authorization are increasingly used to deny claims. Conclusions Examining claims rejection rates can provide important supplemental information to assess plans’ generosity of coverage and to identify potential areas of concern. PMID:23145808
FFT swept filtering: a bias-free method for processing fringe signals in absolute gravimeters
NASA Astrophysics Data System (ADS)
Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel; Val'ko, Miloš
2018-05-01
Absolute gravimeters, based on laser interferometry, are widely used for many applications in geoscience and metrology. Although currently the most accurate FG5 and FG5X gravimeters declare standard uncertainties at the level of 2-3 μGal, their inherent systematic errors affect the gravity reference determined by international key comparisons based predominately on the use of FG5-type instruments. The measurement results for FG5-215 and FG5X-251 clearly showed that the measured g-values depend on the size of the fringe signal and that this effect might be approximated by a linear regression with a slope of up to 0.030 μGal/mV. However, these empirical results do not enable one to identify the source of the effect or to determine a reasonable reference fringe level for correcting g-values in an absolute sense. Therefore, both gravimeters were equipped with new measuring systems (according to Křen et al. in Metrologia 53:27-40, 2016. https://doi.org/10.1088/0026-1394/53/1/27 applied for FG5), running in parallel with the original systems. The new systems use an analogue-to-digital converter HS5 to digitize the fringe signal and a new method of fringe signal analysis based on FFT swept bandpass filtering. We demonstrate that the source of the fringe size effect is connected to a distortion of the fringe signal due to the electronic components used in the FG5(X) gravimeters. To obtain a bias-free g-value, the FFT swept method should be applied for the determination of zero-crossings. A comparison of g-values obtained from the new and the original systems clearly shows that the original system might be biased by approximately 3-5 μGal due to improper processing of the distorted fringe signal.
Nolte, Guido
2003-11-21
The equation for the magnetic lead field for a given magnetoencephalography (MEG) channel is well known for arbitrary frequencies omega but is not directly applicable to MEG in the quasi-static approximation. In this paper we derive an equation for omega = 0 starting from the very definition of the lead field instead of using Helmholtz's reciprocity theorems. The results are (a) the transpose of the conductivity times the lead field is divergence-free, and (b) the lead field differs from the one in any other volume conductor by a gradient of a scalar function. Consequently, for a piecewise homogeneous and isotropic volume conductor, the lead field is always tangential at the outermost surface. Based on this theoretical result, we formulated a simple and fast method for the MEG forward calculation for one shell of arbitrary shape: we correct the corresponding lead field for a spherical volume conductor by a superposition of basis functions, gradients of harmonic functions constructed here from spherical harmonics, with coefficients fitted to the boundary conditions. The algorithm was tested for a prolate spheroid of realistic shape for which the analytical solution is known. For high order in the expansion, we found the solutions to be essentially exact and for reasonable accuracies much fewer multiplications are needed than in typical implementations of the boundary element methods. The generalization to more shells is straightforward.
Caries detection: current status and future prospects using lasers
NASA Astrophysics Data System (ADS)
Longbottom, Christopher
2000-03-01
Caries detection currently occupies a good deal of attention in the arena of dental research for a number of reasons. In searching for caries detection methods with greater accuracy than conventional techniques, researchers have used a variety of optical methods and have increasingly turned to the use of lasers. Several laser-based methods have been and are being assessed for both imaging and disease quantification techniques. The phenomenon of fluorescence of teeth and caries in laser light and the different effects produced by different wavelengths has been investigated by a number of workers in Europe. With argon ion laser excitation, QLF (Quantified Laser Fluorescence) demonstrated a high correlation between loss of fluorescence intensity and enamel mineral loss in white spot lesions on free smooth surfaces, both in vitro and in vivo. Recent work with a red laser diode source (655 nm), which appears to stimulate bacterial porphyrins to fluoresce, has demonstrated that a relatively simple device based on this phenomenon can provide sensitivity and specificity values of the order of 80% in vitro and in vivo for primary caries at occlusal sites. In vitro studies using a simulated in vivo methodology indicate that the device can produce sensitivity values of the order of 90% for primary caries at approximal sites.
Grant, Sharon; Schacht, Veronika J; Escher, Beate I; Hawker, Darryl W; Gaus, Caroline
2016-03-15
Freely dissolved aqueous concentration and chemical activity are important determinants of contaminant transport, fate, and toxic potential. Both parameters are commonly quantified using Solid Phase Micro-Extraction (SPME) based on a sorptive polymer such as polydimethylsiloxane (PDMS). This method requires the PDMS-water partition constants, KPDMSw, or activity coefficient to be known. For superhydrophobic contaminants (log KOW >6), application of existing methods to measure these parameters is challenging, and independent measures to validate KPDMSw values would be beneficial. We developed a simple, rapid method to directly measure PDMS solubilities of solid contaminants, SPDMS(S), which together with literature thermodynamic properties was then used to estimate KPDMSw and activity coefficients in PDMS. PDMS solubility for the test compounds (log KOW 7.2-8.3) ranged over 3 orders of magnitude (4.1-5700 μM), and was dependent on compound class. For polychlorinated biphenyls (PCBs) and polychlorinated dibenzo-p-dioxins (PCDDs), solubility-derived KPDMSw increased linearly with hydrophobicity, consistent with trends previously reported for less chlorinated congeners. In contrast, subcooled liquid PDMS solubilities, SPDMS(L), were approximately constant within a compound class. SPDMS(S) and KPDMSw can therefore be predicted for a compound class with reasonable robustness based solely on the class-specific SPDMS(L) and a particular congener's entropy of fusion, melting point, and aqueous solubility.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
NASA Astrophysics Data System (ADS)
Bi, Lei; Yang, Ping
2016-07-01
The accuracy of the physical-geometric optics (PG-O) approximation is examined for the simulation of electromagnetic scattering by nonspherical dielectric particles. This study seeks a better understanding of the tunneling effect on the phase matrix by employing the invariant imbedding method to rigorously compute the zeroth-order Debye series, from which the tunneling efficiency and the phase matrix corresponding to the diffraction and external reflection are obtained. The tunneling efficiency is shown to be a factor quantifying the relative importance of the tunneling effect over the Fraunhofer diffraction near the forward scattering direction. Due to the tunneling effect, different geometries with the same projected cross section might have different diffraction patterns, which are traditionally assumed to be identical according to the Babinet principle. For particles with a fixed orientation, the PG-O approximation yields the external reflection pattern with reasonable accuracy, but ordinarily fails to predict the locations of peaks and minima in the diffraction pattern. The larger the tunneling efficiency, the worse the PG-O accuracy is at scattering angles less than 90°. If the particles are assumed to be randomly oriented, the PG-O approximation yields the phase matrix close to the rigorous counterpart, primarily due to error cancellations in the orientation-average process. Furthermore, the PG-O approximation based on an electric field volume-integral equation is shown to usually be much more accurate than the Kirchhoff surface integral equation at side-scattering angles, particularly when the modulus of the complex refractive index is close to unity. Finally, tunneling efficiencies are tabulated for representative faceted particles.
NASA Astrophysics Data System (ADS)
Liu, Kuan-Yu; Herbert, John M.
2017-10-01
Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
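For readers unfamiliar with the notation, the many-body expansion truncated at four-body terms has the schematic form (standard MBE notation, not specific to this paper's counterpoise corrections or cutoff schemes):

\[
E \approx \sum_{i} E_i
+ \sum_{i<j} \Delta E_{ij}
+ \sum_{i<j<k} \Delta E_{ijk}
+ \sum_{i<j<k<l} \Delta E_{ijkl},
\qquad
\Delta E_{ij} = E_{ij} - E_i - E_j,
\]

with higher-order corrections defined recursively from lower-order ones; the counterpoise and cutoff schemes discussed above control how the subsystem energies \(E_i, E_{ij}, \dots\) are evaluated and which tetramers are retained.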
Clinical reasoning and its application to nursing: concepts and research studies.
Banning, Maggi
2008-05-01
Clinical reasoning may be defined as "the process of applying knowledge and expertise to a clinical situation to develop a solution" [Carr, S., 2004. A framework for understanding clinical reasoning in community nursing. J. Clin. Nursing 13 (7), 850-857]. Several forms of reasoning exist, each with its own merits and uses. Reasoning involves the processes of cognition, or thinking, and metacognition. In nursing, clinical reasoning skills are an expected component of expert and competent practice. Nursing research studies have identified concepts, processes and thinking strategies that might underpin the clinical reasoning used by pre-registration nurses and experienced nurses. Much of the available research on reasoning is based on the use of the think-aloud approach. Although this is a useful method, it depends on the ability to describe and verbalise the reasoning process. More nursing research is needed to explore the clinical reasoning process. Investment in teaching and learning methods is needed to enhance clinical reasoning skills in nurses.
A decision method based on uncertainty reasoning of linguistic truth-valued concept lattice
NASA Astrophysics Data System (ADS)
Yang, Li; Xu, Yang
2010-04-01
Decision making with linguistic information is currently a research hotspot. This paper begins by establishing the theoretical basis for linguistic information processing, constructs the linguistic truth-valued concept lattice for a decision information system, and further utilises uncertainty reasoning to make the decision. That is, we first utilise the linguistic truth-valued lattice implication algebra to unify the different kinds of linguistic expressions; second, we construct the linguistic truth-valued concept lattice and the decision concept lattice according to the concrete decision information system; and third, we establish internal and external uncertainty reasoning methods and discuss their rationality. We apply these uncertainty reasoning methods to decision making and present some generation methods for decision rules. In the end, we give an application of this decision method through an example.
Analysis of collapse in flattening a micro-grooved heat pipe by lateral compression
NASA Astrophysics Data System (ADS)
Li, Yong; He, Ting; Zeng, Zhixin
2012-11-01
The collapse of thin-walled micro-grooved heat pipes is a common phenomenon in the tube flattening process, which seriously degrades the heat transfer performance and appearance of the heat pipe. At present, there is no better method to solve this problem. A new method based on heating the heat pipe is proposed to eliminate the collapse during the flattening process. The effectiveness of the proposed method is investigated through a theoretical model, a finite element (FE) analysis, and experiments. First, a theoretical model based on a deformation model of six plastic hinges and the Antoine equation of the working fluid is established to analyze the collapse of thin walls at different temperatures. Then, the FE simulation and experiments of the flattening process at different temperatures are carried out and compared with the theoretical model. Finally, the FE model is then used to study the loads on the plates at different temperatures and heights of flattened heat pipes. The results of the theoretical model conform to those of the FE simulation and experiments in the flattened zone. The collapse occurs at room temperature. As the temperature increases, the collapse decreases and finally disappears at approximately 130 °C for various heights of flattened heat pipes. The loads on the moving plate increase as the temperature increases. Thus, the reasonable temperature for eliminating the collapse and reducing the load is approximately 130 °C. The advantage of the proposed method is that the collapse is reduced or eliminated by means of the thermal deformation characteristic of the heat pipe itself instead of by external support. As a result, the heat transfer efficiency of the heat pipe is raised.
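For reference, the Antoine equation used in the theoretical model relates the saturated vapor pressure of the working fluid to temperature; this is the standard form (the abstract does not list the fluid-specific constants A, B, and C):

$$
\log_{10} P_{\mathrm{sat}} = A - \frac{B}{C + T}.
$$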
Research of Uncertainty Reasoning in Pineapple Disease Identification System
NASA Astrophysics Data System (ADS)
Liu, Liqun; Fan, Haifeng
In order to deal with the uncertainty of evidence that commonly exists in a pineapple disease identification system, a reasoning model based on evidence credibility factors was established. The uncertainty reasoning method is discussed, including: uncertain representation of knowledge, uncertain representation of rules, uncertain representation of multiple evidences, and updating of reasoning rules. The reasoning can fully reflect the uncertainty in disease identification and reduce the influence of subjective factors on the accuracy of the system.
NASA Astrophysics Data System (ADS)
Kerdcharoen, Teerakiat; Morokuma, Keiji
2003-05-01
An extension of the ONIOM (Own N-layered Integrated molecular Orbital and molecular Mechanics) method [M. Svensson, S. Humbel, R. D. J. Froese, T. Matsubara, S. Sieber, and K. Morokuma, J. Phys. Chem. 100, 19357 (1996)] for simulation in the condensed phase, called ONIOM-XS (XS=eXtension to Solvation) [T. Kerdcharoen and K. Morokuma, Chem. Phys. Lett. 355, 257 (2002)], was applied to investigate the coordination of Ca2+ in liquid ammonia. A coordination number of 6 is found. Previous simulations based on a pair potential or a pair potential plus three-body correction gave values of 9 and 8.2, respectively. The new value is the same as the coordination number most frequently listed in the Cambridge Structural Database (CSD) and Protein Data Bank (PDB). The N-Ca-N angular distribution reveals a near-octahedral coordination structure. Inclusion of many-body interactions (which amount to 25% of the pair interactions) into the potential energy surface is essential for obtaining a reasonable coordination number. Analyses of the metal coordination in water, water-ammonia mixture, and in proteins reveal that a cation/ammonia solution can be used to approximate the coordination environment in proteins.
NASA Astrophysics Data System (ADS)
Riva, Fabio; Milanese, Lucio; Ricci, Paolo
2017-10-01
To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple to apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
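A minimal sketch (ours, not the authors' implementation) of the underlying idea: the dependence of a simulation output on an uncertain input parameter is represented by a Chebyshev expansion, which can then be sampled cheaply to propagate the input uncertainty. The model function here is hypothetical, and numpy's Chebyshev fit stands in for the weighted-residual solve described in the abstract.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical model output as a function of an uncertain input parameter
# (stands in for a quantity such as a time-averaged density gradient length).
def model_output(p):
    return np.exp(-0.5 * p) * np.sin(3.0 * p) + 2.0

# Sample the uncertain parameter at Chebyshev-type nodes in [-1, 1] and fit.
nodes = np.cos(np.pi * np.arange(9) / 8)            # 9 collocation points
coeffs = C.chebfit(nodes, model_output(nodes), deg=8)

# Semi-analytic surrogate: evaluate anywhere in the parameter range cheaply,
# e.g. to propagate an input uncertainty by sampling the surrogate.
samples = np.random.uniform(-1.0, 1.0, 10000)
surrogate_vals = C.chebval(samples, coeffs)
print("surrogate mean:", surrogate_vals.mean(), "std:", surrogate_vals.std())
```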
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan
2008-02-01
One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. Despite the efforts made during recent years, this problem has not been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by applying the forward model to virtual tumours with known geometry, and thus known fluorophore distribution, embedded in simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning application.
NASA Astrophysics Data System (ADS)
Grib, S. A.; Leora, S. N.
2017-12-01
Macroscopic discontinuous structures observed in the solar wind are considered in the framework of magnetohydrodynamics. The interaction of strong discontinuities is studied based on the solution of the generalized Riemann-Kochin problem. The appearance of discontinuities inside the magnetosheath after the collision of a solar wind shock wave with the bow shock front is taken into account. The propagation of secondary waves appearing in the magnetosheath is considered in the approximation of one-dimensional ideal magnetohydrodynamics. The appearance of a compression wave reflected from the magnetopause is indicated. The wave can break nonlinearly with the formation of a backward shock wave and cause the motion of the bow shock towards the Sun. The interaction between shock waves is considered with the well-known trial calculation method. It is assumed that the velocity of discontinuities in the magnetosheath is, to a first approximation, constant on average. All reasoning and calculations correspond to consideration of a flow region with a velocity less than the magnetosonic speed near the Earth-Sun line. It is indicated that the results agree with data from observations carried out by the WIND and Cluster spacecraft.
A combinatorial approach to protein docking with flexible side chains.
Althaus, Ernst; Kohlbacher, Oliver; Lenhof, Hans-Peter; Müller, Peter
2002-01-01
Rigid-body docking approaches are not sufficient to predict the structure of a protein complex from the unbound (native) structures of the two proteins. Accounting for side chain flexibility is an important step towards fully flexible protein docking. This work describes an approach that allows conformational flexibility for the side chains while keeping the protein backbone rigid. Starting from candidates created by a rigid-docking algorithm, we demangle the side chains of the docking site, thus creating reasonable approximations of the true complex structure. These structures are ranked with respect to the binding free energy. We present two new techniques for side chain demangling. Both approaches are based on a discrete representation of the side chain conformational space by the use of a rotamer library. This leads to a combinatorial optimization problem. For the solution of this problem, we propose a fast heuristic approach and an exact, albeit slower, method that uses branch-and-cut techniques. As a test set, we use the unbound structures of three proteases and the corresponding protein inhibitors. For each of the examples, the highest-ranking conformation produced was a good approximation of the true complex structure.
Electromagnetic launch of lunar material
NASA Technical Reports Server (NTRS)
Snow, William R.; Kolm, Henry H.
1992-01-01
Lunar soil can become a source of relatively inexpensive oxygen propellant for vehicles going from low Earth orbit (LEO) to geosynchronous Earth orbit (GEO) and beyond. This lunar oxygen could replace the oxygen propellant that, in current plans for these missions, is launched from the Earth's surface and amounts to approximately 75 percent of the total mass. The reason for considering the use of oxygen produced on the Moon is that the cost for the energy needed to transport things from the lunar surface to LEO is approximately 5 percent the cost from the surface of the Earth to LEO. Electromagnetic launchers, in particular the superconducting quenchgun, provide a method of getting this lunar oxygen off the lunar surface at minimal cost. This cost savings comes from the fact that the superconducting quenchgun gets its launch energy from locally supplied, solar- or nuclear-generated electrical power. We present a preliminary design to show the main features and components of a lunar-based superconducting quenchgun for use in launching 1-ton containers of liquid oxygen, one every 2 hours. At this rate, nearly 4400 tons of liquid oxygen would be launched into low lunar orbit in a year.
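The annual tonnage quoted above follows directly from the stated launch rate of one 1-ton container every 2 hours:

$$
\frac{8760\ \mathrm{h/yr}}{2\ \mathrm{h/launch}} \times 1\ \mathrm{ton/launch} = 4380\ \mathrm{tons/yr} \approx 4400\ \mathrm{tons/yr}.
$$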
Waves and rays in plano-concave laser cavities: I. Geometric modes in the paraxial approximation
NASA Astrophysics Data System (ADS)
Barré, N.; Romanelli, M.; Lebental, M.; Brunel, M.
2017-05-01
Eigenmodes of laser cavities are studied theoretically and experimentally in two companion papers, with the aim of making connections between undulatory and geometric properties of light. In this first paper, we focus on macroscopic open-cavity lasers with localized gain. The model is based on the wave equation in the paraxial approximation; experiments are conducted with a simple diode-pumped Nd:YAG laser with a variable cavity length. After recalling fundamentals of laser beam optics, we consider plano-concave cavities with on-axis or off-axis pumping, with emphasis put on degenerate cavity lengths, where modes of different order resonate at the same frequency, and combine to form surprising transverse beam profiles. Degeneracy leads to the oscillation of so-called geometric modes whose properties can be understood, to a certain extent, also within a ray optics picture. We first provide a heuristic description of these modes, based on geometric reasoning, and then show more rigorously how to derive them analytically by building wave superpositions, within the framework of paraxial wave optics. The numerical methods, based on the Fox-Li approach, are described in detail. The experimental setup, including the imaging system, is also detailed and relatively simple to reproduce. The aim is to facilitate implementation of both the numerics and of the experiments, and to show that one can have access not only to the common higher-order modes but also to more exotic patterns.
Effective Clipart Image Vectorization through Direct Optimization of Bezigons.
Yang, Ming; Chao, Hongyang; Zhang, Chi; Guo, Jun; Yuan, Lu; Sun, Jian
2016-02-01
Bezigons, i.e., closed paths composed of Bézier curves, have been widely employed to describe shapes in image vectorization results. However, most existing vectorization techniques infer the bezigons by simply approximating an intermediate vector representation (such as polygons). Consequently, the resultant bezigons are sometimes imperfect due to accumulated errors, fitting ambiguities, and a lack of curve priors, especially for low-resolution images. In this paper, we describe a novel method for vectorizing clipart images. In contrast to previous methods, we directly optimize the bezigons rather than using other intermediate representations; therefore, the resultant bezigons are not only of higher fidelity compared with the original raster image but also more reasonable, as if they had been traced by a proficient expert. To enable such optimization, we have overcome several challenges and have devised a differentiable data energy as well as several curve-based prior terms. To improve the efficiency of the optimization, we also take advantage of the local control property of bezigons and adopt an overlapped piecewise optimization strategy. The experimental results show that our method outperforms both the current state-of-the-art method and commonly used commercial software in terms of bezigon quality.
In College and in Recovery: Reasons for Joining a Collegiate Recovery Program
ERIC Educational Resources Information Center
Laudet, Alexandre B.; Harris, Kitty; Kimball, Thomas; Winters, Ken C.; Moberg, D. Paul
2016-01-01
Objective: Collegiate Recovery Programs (CRPs), a campus-based peer support model for students recovering from substance abuse problems, grew exponentially in the past decade, yet remain unexplored. Methods: This mixed-methods study examines students' reasons for CRP enrollment to guide academic institutions and referral sources. Students (N =…
ERIC Educational Resources Information Center
White, Brian
2004-01-01
This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour based on a synthesis that extends on previous work. Beginning with a transcript of subjects' speech and videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…
Temporal and Resource Reasoning for Planning, Scheduling and Execution in Autonomous Agents
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Hunsberger, Luke; Tsamardinos, Ioannis
2005-01-01
This viewgraph slide tutorial reviews methods for planning and scheduling events. The presentation reviews several methods and uses several examples of scheduling events for the successful and timely completion of the overall plan. Using constraint based models the presentation reviews planning with time, time representations in problem solving and resource reasoning.
Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.
Kahl, Lorenz; Hofmann, Ulrich G
2016-11-01
This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was recorded during upper arm contractions at three different load levels from twelve volunteers. Fatigue detection algorithms mean frequency (MNF), spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn) and recurrence quantification analysis (RQA%DET) were calculated. The resulting fatigue signals were compared considering the disturbances incorporated in fatiguing situations as well as the possibility of differentiating the load levels based on the fatigue signals. Furthermore, we investigated the influence of the electrode locations on the fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small number of subjects in this study, significant differences could not be found. In terms of disturbances, the SMR algorithm showed a slight tendency to outperform the others. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
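As an illustration of the simplest of the listed fatigue indices (not the authors' implementation), the mean frequency (MNF) of an sEMG epoch can be computed from its power spectrum; a downward drift of MNF over successive epochs of a sustained contraction is the classical fatigue signature.

```python
import numpy as np

def mean_frequency(emg_epoch, fs):
    """Mean (centroid) frequency of one sEMG epoch sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(emg_epoch - emg_epoch.mean())) ** 2
    freqs = np.fft.rfftfreq(len(emg_epoch), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Hypothetical usage: track MNF across consecutive 1 s epochs of a contraction.
fs = 1000                                     # sampling rate in Hz
signal = np.random.randn(30 * fs)             # placeholder for a real recording
epochs = signal.reshape(-1, fs)
mnf_trace = [mean_frequency(e, fs) for e in epochs]
```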
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation. This method is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
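A minimal numerical sketch of Richardson extrapolation in the grid-convergence style described above, using an assumed refinement ratio and three made-up grid solutions; it is an illustration of the general technique, not of the paper's specific interpolation scheme.

```python
import math

# Solutions of some integral flow quantity on three systematically refined grids
# (coarse -> fine) with a constant refinement ratio r. Values are illustrative.
f_coarse, f_medium, f_fine = 0.9712, 0.9801, 0.9835
r = 2.0

# Observed order of convergence inferred from the three solutions.
p = math.log(abs((f_medium - f_coarse) / (f_fine - f_medium))) / math.log(r)

# Richardson-extrapolated estimate of the grid-independent value and the
# corresponding discretization error estimate on the fine grid.
f_exact_est = f_fine + (f_fine - f_medium) / (r**p - 1.0)
error_fine = abs(f_exact_est - f_fine)
print(p, f_exact_est, error_fine)
```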
Measuring Distance of Fuzzy Numbers by Trapezoidal Fuzzy Numbers
NASA Astrophysics Data System (ADS)
Hajjari, Tayebeh
2010-11-01
Fuzzy numbers and, more generally, linguistic values are approximate assessments, given by experts and accepted by decision-makers when obtaining a more accurate value is impossible or unnecessary. The distance between two fuzzy numbers plays an important role in linguistic decision-making. It is reasonable to define a fuzzy distance between fuzzy objects. To achieve this aim, the researcher presents a new distance measure for fuzzy numbers by means of an improved centroid distance method. The metric properties are also studied. The advantage is that the calculation of the proposed method is far simpler than that of previous approaches.
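A toy sketch (not the author's improved formulation) of the general idea behind a centroid-based distance: each trapezoidal fuzzy number (a, b, c, d) is reduced to the centroid of its membership function, and the distance is taken between centroids.

```python
def trapezoid_centroid_x(a, b, c, d):
    """x-coordinate of the centroid of a trapezoidal membership function
    (rises linearly on [a, b], equals 1 on [b, c], falls linearly on [c, d])."""
    # Decompose the area under the membership function into two triangles and a
    # rectangle, then take the area-weighted mean of their centroids.
    parts = [
        ((b - a) / 2.0, (a + 2.0 * b) / 3.0),   # rising triangle
        ((c - b),       (b + c) / 2.0),          # flat top
        ((d - c) / 2.0, (2.0 * c + d) / 3.0),    # falling triangle
    ]
    area = sum(w for w, _ in parts)
    if area == 0.0:                              # degenerate (crisp) number
        return a
    return sum(w * x for w, x in parts) / area

def centroid_distance(A, B):
    """Illustrative centroid-based distance between two trapezoidal fuzzy numbers."""
    return abs(trapezoid_centroid_x(*A) - trapezoid_centroid_x(*B))

print(centroid_distance((1, 2, 3, 4), (2, 4, 5, 7)))
```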
Rock physics model-based prediction of shear wave velocity in the Barnett Shale formation
NASA Astrophysics Data System (ADS)
Guo, Zhiqi; Li, Xiang-Yang
2015-06-01
Predicting S-wave velocity is important for reservoir characterization and fluid identification in unconventional resources. A rock physics model-based method is developed for estimating pore aspect ratio and predicting shear wave velocity Vs from the information of P-wave velocity, porosity and mineralogy in a borehole. Statistical distribution of pore geometry is considered in the rock physics models. In the application to the Barnett formation, we compare the high frequency self-consistent approximation (SCA) method that corresponds to isolated pore spaces, and the low frequency SCA-Gassmann method that describes well-connected pore spaces. Inversion results indicate that compared to the surroundings, the Barnett Shale shows less fluctuation in the pore aspect ratio in spite of complex constituents in the shale. The high frequency method provides a more robust and accurate prediction of Vs for all the three intervals in the Barnett formation, while the low frequency method collapses for the Barnett Shale interval. Possible causes for this discrepancy can be explained by the fact that poor in situ pore connectivity and low permeability make well-log sonic frequencies act as high frequencies and thus invalidate the low frequency assumption of the Gassmann theory. In comparison, for the overlying Marble Falls and underlying Ellenburger carbonates, both the high and low frequency methods predict Vs with reasonable accuracy, which may reveal that sonic frequencies are within the transition frequencies zone due to higher pore connectivity in the surroundings.
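For context, the low-frequency Gassmann substitution referred to above relates the saturated bulk modulus to the dry-frame, mineral, and fluid moduli and the porosity, with the shear modulus left unchanged; this standard form is quoted here for orientation rather than taken from the paper:

$$
K_{\mathrm{sat}} = K_{\mathrm{dry}} + \frac{\left(1 - K_{\mathrm{dry}}/K_{\mathrm{min}}\right)^{2}}
{\dfrac{\phi}{K_{\mathrm{fl}}} + \dfrac{1-\phi}{K_{\mathrm{min}}} - \dfrac{K_{\mathrm{dry}}}{K_{\mathrm{min}}^{2}}},
\qquad \mu_{\mathrm{sat}} = \mu_{\mathrm{dry}}.
$$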
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if the model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities of the data. Furthermore, the stability and accuracy of the MARS model can be improved by bootstrap aggregating methods, namely, bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is replaced by a surrogate statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their importance for the model outputs spatiotemporally. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. Normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
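A compact sketch of the surrogate-plus-objective idea described above, using a generic bagged regression surrogate from scikit-learn in place of BMARS (the algorithm actually used in the paper) and synthetic data in place of MODFLOW runs; it only illustrates how an NRMSE objective would be evaluated against a cheap surrogate inside an optimizer loop.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic training data: parameter sets run through the "physical" model (X)
# and the resulting simulated heads at 8 observation wells (Y).
X = rng.uniform(0.0, 1.0, size=(200, 5))                  # 5 calibration parameters
Y = X @ rng.uniform(1.0, 3.0, size=(5, 8)) + rng.normal(0.0, 0.05, size=(200, 8))

# One bagged surrogate per observation well (a generic stand-in for BMARS).
surrogates = [BaggingRegressor(DecisionTreeRegressor(), n_estimators=30).fit(X, Y[:, j])
              for j in range(Y.shape[1])]

observed_heads = Y[0] + rng.normal(0.0, 0.02, size=8)     # hypothetical measurements

def nrmse(params):
    """Calibration objective: normalized RMSE between observed and surrogate heads."""
    pred = np.array([s.predict(np.atleast_2d(params))[0] for s in surrogates])
    return np.sqrt(np.mean((pred - observed_heads) ** 2)) / np.ptp(observed_heads)

print(nrmse(np.full(5, 0.5)))   # cheap to evaluate repeatedly during optimization
```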
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr; Lee, Taewon
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite accurate under a variety of conditions. Our GPU-based fast MCS implementation took approximately 3 s to generate each angular projection for a 6 cm thick breast, which is believed to make this process acceptable for clinical applications. In addition, the clinical preferences of three radiologists were evaluated; the preference for the proposed method compared to the preference for the convolution-based method was statistically meaningful (p < 0.05, McNemar test). Conclusions: The proposed fully iterative scatter correction method and the GPU-based fast MCS using tissue-composition ratio estimation successfully improved the image quality within a reasonable computational time, which may potentially increase the clinical utility of DBT.
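A simplified sketch (our own assumptions, not the authors' code) of the tissue-composition-ratio idea: given a reconstructed voxel attenuation coefficient and reference coefficients for pure adipose and pure glandular tissue at some effective energy, the glandular fraction follows from linear interpolation, clipped to [0, 1]. The reference values below are placeholders.

```python
import numpy as np

# Hypothetical reference linear attenuation coefficients (1/cm) at an effective
# energy within the 10-50 keV range used by the Monte Carlo transport.
MU_ADIPOSE = 0.45
MU_GLANDULAR = 0.80

def glandular_fraction(mu_voxel):
    """Estimate the glandular-tissue fraction of a voxel from its reconstructed
    attenuation coefficient, assuming a two-component adipose/glandular mix."""
    frac = (mu_voxel - MU_ADIPOSE) / (MU_GLANDULAR - MU_ADIPOSE)
    return np.clip(frac, 0.0, 1.0)

# Example: a reconstructed DBT sub-volume (values are synthetic); the resulting
# composition map would feed the scatter-estimating Monte Carlo simulation.
recon = np.random.uniform(0.4, 0.85, size=(4, 4, 4))
composition = glandular_fraction(recon)
```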
NASA Technical Reports Server (NTRS)
Lee, S. Daniel
1990-01-01
We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
A comparison of transport algorithms for premixed, laminar steady state flames
NASA Technical Reports Server (NTRS)
Coffee, T. P.; Heimerl, J. M.
1980-01-01
The effects of different methods of approximating multispecies transport phenomena in models of premixed, laminar, steady state flames were studied. Five approximation methods that span a wide range of computational complexity were developed. Identical data for individual species properties were used for each method. Each approximation method is employed in the numerical solution of a set of five H2-O2-N2 flames. For each flame the computed species and temperature profiles, as well as the computed flame speeds, are found to be very nearly independent of the approximation method used. This does not indicate that transport phenomena are unimportant, but rather that the selection of the input values for the individual species transport properties is more important than the selection of the method used to approximate the multispecies transport. Based on these results, a sixth approximation method was developed that is computationally efficient and provides results extremely close to those of the most sophisticated and precise method used.
Improving Perception to Make Distant Connections Closer
Goldstone, Robert L.; Landy, David; Brunel, Lionel C.
2011-01-01
One of the challenges for perceptually grounded accounts of high-level cognition is to explain how people make connections and draw inferences between situations that superficially have little in common. Evidence suggests that people draw these connections even without having explicit, verbalizable knowledge of their bases. Instead, the connections are based on sub-symbolic representations that are grounded in perception, action, and space. One reason why people are able to spontaneously see relations between situations that initially appear to be unrelated is that their eventual perceptions are not restricted to initial appearances. Training and strategic deployment allow our perceptual processes to deliver outputs that would have otherwise required abstract or formal reasoning. Even without people having any privileged access to the internal operations of perceptual modules, these modules can be systematically altered so as to better serve our high-level reasoning needs. Moreover, perceptually based processes can be altered in a number of ways to closely approximate formally sanctioned computations. To be concrete about mechanisms of perceptual change, we present 21 illustrations of ways in which we alter, adjust, and augment our perceptual systems with the intention of having them better satisfy our needs. PMID:22207861
NASA Astrophysics Data System (ADS)
Bervillier, C.; Boisseau, B.; Giacomini, H.
2008-02-01
The relation between the Wilson-Polchinski and the Litim optimized ERGEs in the local potential approximation is studied with high accuracy using two different analytical approaches based on a field expansion: a recently proposed genuine analytical approximation scheme to two-point boundary value problems of ordinary differential equations, and a new one based on approximating the solution by generalized hypergeometric functions. A comparison with the numerical results obtained with the shooting method is made. A similar accuracy is reached in each case. Both methods appear to be more efficient than the usual field expansions frequently used in the current studies of ERGEs (in particular for the Wilson-Polchinski case, in the study of which they fail).
Fletcher, Logan; Carruthers, Peter
2012-01-01
This article considers the cognitive architecture of human meta-reasoning: that is, metacognition concerning one's own reasoning and decision-making. The view we defend is that meta-reasoning is a cobbled-together skill comprising diverse self-management strategies acquired through individual and cultural learning. These approximate the monitoring-and-control functions of a postulated adaptive system for metacognition by recruiting mechanisms that were designed for quite other purposes. PMID:22492753
Lindsay, Kaitlin E; Rühli, Frank J; Deleon, Valerie Burke
2015-06-01
The technique of forensic facial approximation, or reconstruction, is one of many facets of the field of mummy studies. Although far from a rigorous scientific technique, evidence-based visualization of antemortem appearance may supplement radiological, chemical, histological, and epidemiological studies of ancient remains. Published guidelines exist for creating facial approximations, but few approximations are published with documentation of the specific process and references used. Additionally, significant new research has taken place in recent years which helps define best practices in the field. This case study records the facial approximation of a 3,000-year-old ancient Egyptian woman using medical imaging data and the digital sculpting program, ZBrush. It represents a synthesis of current published techniques based on the most solid anatomical and/or statistical evidence. Through this study, it was found that although certain improvements have been made in developing repeatable, evidence-based guidelines for facial approximation, there are many proposed methods still awaiting confirmation from comprehensive studies. This study attempts to assist artists, anthropologists, and forensic investigators working in facial approximation by presenting the recommended methods in a chronological and usable format. © 2015 Wiley Periodicals, Inc.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
38 CFR 3.102 - Reasonable doubt.
Code of Federal Regulations, 2010 CFR
2010-07-01
... degree of disability, or any other point, such doubt will be resolved in favor of the claimant. By reasonable doubt is meant one which exists because of an approximate balance of positive and negative...
DFT calculations of electronic and optical properties of SrS with LDA, GGA and mGGA functionals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Shatendra, E-mail: shatendra@gmai.com; Sharma, Jyotsna; Sharma, Yogita
2016-05-06
Theoretical investigations of the electronic and optical properties of SrS are made using first-principles DFT calculations. The calculations are performed with the local-density approximation (LDA), the generalized gradient approximation (GGA), and an alternative form of GGA, i.e. meta-GGA, for both the rock salt (B1, Fm3m) and cesium chloride (B2, Pm3m) structures. The band structure, density of states and optical spectra are calculated with the various available functionals. The calculations with the LDA and GGA functionals underestimate the band gaps; however, the values obtained with mGGA show reasonably good agreement with experiment and with those calculated using other methods.
NASA Technical Reports Server (NTRS)
Wheatley, John B
1935-01-01
This report presents an extension of the autogiro theory of Glauert and Lock in which the influence of a pitch varying with the blade radius is evaluated and methods of approximating the effect of blade tip losses and the influence of reversed velocities on the retreating blades are developed. A comparison of calculated and experimental results showed that most of the rotor characteristics could be calculated with reasonable accuracy, and that the type of induced flow assumed has a secondary effect upon the net rotor forces, although the flapping motion is influenced appreciably. An approximate evaluation of the effect of parasite drag on the rotor blades established the importance of including this factor in the analysis.
Teaching Scientific Reasoning to Liberal Arts Students
NASA Astrophysics Data System (ADS)
Rubbo, Louis
2014-03-01
University courses in conceptual physics and astronomy typically serve as the terminal science experience for the liberal arts student. Within this population, significant content knowledge gains can be achieved by utilizing research-verified pedagogical methods. However, from the standpoint of the university, students are expected to complete these courses not necessarily for the content knowledge but instead for the development of scientific reasoning skills. Results from physics education studies indicate that unless scientific reasoning instruction is made explicit, students do not progress in their reasoning abilities. How do we complement the successful content-based pedagogical methods with instruction that explicitly focuses on the development of scientific reasoning skills? This talk will explore methodologies that actively engage non-science students with the explicit intent of fostering their scientific reasoning abilities.
A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.
2014-02-01
This article presents a numerical approximation of the initial-boundary nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme are employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces solving the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equation which is far easier to solve. The given examples show, by selecting relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.
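For orientation, one commonly studied form of the nonlinear coupled viscous Burgers' system (quoted here as a generic reference form, not necessarily with the article's coefficients) is:

$$
u_t - u_{xx} - 2\,u\,u_x + (uv)_x = 0, \qquad
v_t - v_{xx} - 2\,v\,v_x + (uv)_x = 0,
$$

together with prescribed initial and boundary data on the spatial interval.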
Sambo, Maganga; Johnson, Paul C. D.; Hotopp, Karen; Changalucha, Joel; Cleaveland, Sarah; Kazwala, Rudovick; Lembo, Tiziana; Lugelo, Ahmed; Lushasi, Kennedy; Maziku, Mathew; Mbunda, Eberhard; Mtema, Zacharia; Sikana, Lwitiko; Townsend, Sunny E.; Hampson, Katie
2017-01-01
Rabies can be eliminated by achieving comprehensive coverage of 70% of domestic dogs during annual mass vaccination campaigns. Estimates of vaccination coverage are, therefore, required to evaluate and manage mass dog vaccination programs; however, there is no specific guidance for the most accurate and efficient methods for estimating coverage in different settings. Here, we compare post-vaccination transects, school-based surveys, and household surveys across 28 districts in southeast Tanzania and Pemba island covering rural, urban, coastal and inland settings, and a range of different livelihoods and religious backgrounds. These approaches were explored in detail in a single district in northwest Tanzania (Serengeti), where their performance was compared with a complete dog population census that also recorded dog vaccination status. Post-vaccination transects involved counting marked (vaccinated) and unmarked (unvaccinated) dogs immediately after campaigns in 2,155 villages (24,721 dogs counted). School-based surveys were administered to 8,587 primary school pupils, each representing a unique household, in 119 randomly selected schools approximately 2 months after campaigns. Household surveys were conducted in 160 randomly selected villages (4,488 households) in July/August 2011. Costs to implement these coverage assessments were $12.01, $66.12, and $155.70 per village for post-vaccination transects, school-based, and household surveys, respectively. Simulations were performed to assess the effect of sampling on the precision of coverage estimation. The sampling effort required to obtain reasonably precise estimates of coverage from household surveys is generally very high and probably prohibitively expensive for routine monitoring across large areas, particularly in communities with high human to dog ratios. School-based surveys partially overcame sampling constraints; however, they were also costly for obtaining reasonably precise estimates of coverage. Post-vaccination transects provided precise and timely estimates of community-level coverage that could be used to troubleshoot the performance of campaigns across large areas. However, transects typically overestimated coverage by around 10%, which therefore needs consideration when evaluating the impacts of campaigns. We discuss the advantages and disadvantages of these different methods and make recommendations for how vaccination campaigns can be better monitored and managed at different stages of rabies control and elimination programs. PMID:28352630
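As a small illustration (not taken from the study) of how post-vaccination transect data yield a coverage estimate: coverage is simply the proportion of marked dogs among those counted, and a binomial confidence interval conveys the precision; a normal-approximation interval is used here for brevity, and the counts are hypothetical.

```python
import math

def transect_coverage(marked, unmarked, z=1.96):
    """Vaccination coverage estimate and approximate 95% CI from one transect count."""
    n = marked + unmarked
    p = marked / n
    half_width = z * math.sqrt(p * (1.0 - p) / n)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# Hypothetical village-level count: 160 marked (vaccinated), 60 unmarked dogs.
coverage, ci = transect_coverage(160, 60)
print(f"coverage = {coverage:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```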
Thermal refraction focusing in planar index-antiguided lasers.
Casperson, Lee W; Dittli, Adam; Her, Tsing-Hua
2013-03-15
Thermal refraction focusing in planar index-antiguided lasers is investigated both theoretically and experimentally. An analytical model based on zero-field approximation is presented for treating the combined effects of index antiguiding and thermal focusing. At very low pumping power, the mode is antiguided by the amplifier boundary, whereas at high pumping power it narrows due to thermal focusing. Theoretical results are in reasonable agreement with experimental data.
ERIC Educational Resources Information Center
Develaki, Maria
2017-01-01
Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and…
Improving Moral Reasoning among College Students: A Game-Based Learning Approach
ERIC Educational Resources Information Center
Huang, Wenyeh; Ho, Jonathan C.
2018-01-01
Considering a company's limited time and resources, an effective training method that improves employees' ability to make ethical decision is needed. Based on social cognitive theory, this study proposes that employing games in an ethics training program can help improve moral reasoning through actively engaging learners. The experimental design…
A concise guide to clinical reasoning.
Daly, Patrick
2018-04-30
What constitutes clinical reasoning is a disputed subject regarding the processes underlying accurate diagnosis, the importance of patient-specific versus population-based data, and the relation between virtue and expertise in clinical practice. In this paper, I present a model of clinical reasoning that identifies and integrates the processes of diagnosis, prognosis, and therapeutic decision making. The model is based on the generalized empirical method of Bernard Lonergan, which approaches inquiry with equal attention to the subject who investigates and the object under investigation. After identifying the structured operations of knowing and doing and relating these to a self-correcting cycle of learning, I correlate levels of inquiry regarding what-is-going-on and what-to-do to the practical and theoretical elements of clinical reasoning. I conclude that this model provides a methodical way to study questions regarding the operations of clinical reasoning as well as what constitute significant clinical data, clinical expertise, and virtuous health care practice. © 2018 John Wiley & Sons, Ltd.
Ontology-Based Method for Fault Diagnosis of Loaders.
Xu, Feixiang; Liu, Xinhui; Chen, Wei; Zhou, Chen; Cao, Bingwei
2018-02-28
This paper proposes an ontology-based fault diagnosis method which overcomes the difficulty of understanding complex fault diagnosis knowledge of loaders and offers a universal approach for fault diagnosis of all loaders. This method contains the following components: (1) An ontology-based fault diagnosis model is proposed to achieve the integrating, sharing and reusing of fault diagnosis knowledge for loaders; (2) combined with ontology, CBR (case-based reasoning) is introduced to realize effective and accurate fault diagnoses following four steps (feature selection, case-retrieval, case-matching and case-updating); and (3) in order to cover the shortages of the CBR method due to the lack of concerned cases, ontology based RBR (rule-based reasoning) is put forward through building SWRL (Semantic Web Rule Language) rules. An application program is also developed to implement the above methods to assist in finding the fault causes, fault locations and maintenance measures of loaders. In addition, the program is validated through analyzing a case study.
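A toy sketch of the case-retrieval and case-matching steps of CBR as described above; the symptom features, weights, and case base are invented placeholders, not the paper's ontology-driven implementation.

```python
# Each stored case: symptom feature vector -> (fault cause, fault location, fix).
CASE_BASE = [
    ({"low_pressure": 1, "oil_leak": 1, "noise": 0},
     ("worn seal", "hydraulic pump", "replace seal")),
    ({"low_pressure": 0, "oil_leak": 0, "noise": 1},
     ("bearing wear", "transmission", "replace bearing")),
]
WEIGHTS = {"low_pressure": 0.5, "oil_leak": 0.3, "noise": 0.2}

def similarity(query, case_features):
    """Weighted proportion of symptom features that match between query and case."""
    return sum(w for f, w in WEIGHTS.items() if query.get(f) == case_features.get(f))

def retrieve(query):
    """Case retrieval + matching: return the diagnosis of the most similar case."""
    best = max(CASE_BASE, key=lambda case: similarity(query, case[0]))
    return best[1]

print(retrieve({"low_pressure": 1, "oil_leak": 1, "noise": 1}))
```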
Ripple, Dean C; Montgomery, Christopher B; Hu, Zhishang
2015-02-01
Accurate counting and sizing of protein particles has been limited by discrepancies of counts obtained by different methods. To understand the bias and repeatability of techniques in common use in the biopharmaceutical community, the National Institute of Standards and Technology has conducted an interlaboratory comparison for sizing and counting subvisible particles from 1 to 25 μm. Twenty-three laboratories from industry, government, and academic institutions participated. The circulated samples consisted of a polydisperse suspension of abraded ethylene tetrafluoroethylene particles, which closely mimic the optical contrast and morphology of protein particles. For restricted data sets, agreement between data sets was reasonably good: relative standard deviations (RSDs) of approximately 25% for light obscuration counts with lower diameter limits from 1 to 5 μm, and approximately 30% for flow imaging with specified manufacturer and instrument setting. RSDs of the reported counts for unrestricted data sets were approximately 50% for both light obscuration and flow imaging. Differences between instrument manufacturers were not statistically significant for light obscuration but were significant for flow imaging. We also report a method for accounting for differences in the reported diameter for flow imaging and electrical sensing zone techniques; the method worked well for diameters greater than 15 μm. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Intelligent design of permanent magnet synchronous motor based on CBR
NASA Astrophysics Data System (ADS)
Li, Cong; Fan, Beibei
2018-05-01
Aiming at the many problems in the design process of permanent magnet synchronous motors (PMSM), such as the complexity of the design process, over-reliance on designers' experience, and the lack of accumulation and inheritance of design knowledge, a CBR-based design method for PMSM is proposed to solve these problems. In this paper, a case-based reasoning (CBR) method of case similarity calculation is proposed for inferring a suitable initial scheme. This method can help designers, by referencing previous design cases, to produce a conceptual PMSM solution quickly. The case-retention process gives the system a self-enriching capability that improves its design ability with continued use.
A dynamic access control method based on QoS requirement
NASA Astrophysics Data System (ADS)
Li, Chunquan; Wang, Yanwei; Yang, Baoye; Hu, Chunyang
2013-03-01
A dynamic access control method is put forward to ensure the security of the sharing service in cloud manufacturing, according to the application characteristics of cloud manufacturing collaborative tasks. In this method, the role-based access control (RBAC) model is extended according to the characteristics of cloud manufacturing. Constraints derived from the QoS requirements of the task context are added to access control, on top of traditional static authorization. Fuzzy policy rules are established over the weighted interval values of permissions. Users' access control authority over executable services is dynamically adjusted through fuzzy reasoning based on the QoS requirements of the task. The main elements of the model are described. The fuzzy reasoning algorithm for weighted interval values based on QoS requirements is studied. An effective method is provided to resolve the access control problem of cloud manufacturing.
An Algorithm Using Twelve Properties of Antibiotics to Find the Recommended Antibiotics, as in CPGs
Tsopra, R.; Venot, A.; Duclos, C.
2014-01-01
Background Clinical Decision Support Systems (CDSS) incorporating justifications, updating and adjustable recommendations can considerably improve the quality of healthcare. We propose a new approach to the design of CDSS for empiric antibiotic prescription, based on implementation of the deeper medical reasoning used by experts in the development of clinical practice guidelines (CPGs), to deduce the recommended antibiotics. Methods We investigated two methods (“exclusion” versus “scoring”) for reproducing this reasoning based on antibiotic properties. Results The “exclusion” method reproduced expert reasoning more accurately, retrieving the full list of recommended antibiotics for almost all clinical situations. Discussion This approach has several advantages: (i) it provides convincing explanations for physicians; (ii) updating could easily be incorporated into the CDSS; (iii) it can provide recommendations for clinical situations missing from CPGs. PMID:25954422
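A schematic sketch of the "exclusion" strategy described above: candidate antibiotics are filtered out whenever one of their properties conflicts with a requirement of the clinical situation. The property names, values, and rules here are invented placeholders, not the authors' twelve properties or CPG data.

```python
# Hypothetical antibiotic property table (placeholders, not guideline data).
ANTIBIOTICS = {
    "amoxicillin":   {"spectrum": "narrow", "oral": True,  "pregnancy_safe": True},
    "ciprofloxacin": {"spectrum": "broad",  "oral": True,  "pregnancy_safe": False},
    "ceftriaxone":   {"spectrum": "broad",  "oral": False, "pregnancy_safe": True},
}

def recommend(situation):
    """Exclusion method: drop any antibiotic violating a constraint of the situation."""
    candidates = dict(ANTIBIOTICS)
    if situation.get("pregnant"):
        candidates = {k: v for k, v in candidates.items() if v["pregnancy_safe"]}
    if situation.get("outpatient"):
        candidates = {k: v for k, v in candidates.items() if v["oral"]}
    return sorted(candidates)

print(recommend({"pregnant": True, "outpatient": True}))
```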
Modeling and analysis of solar distributed generation
NASA Astrophysics Data System (ADS)
Ortiz Rivera, Eduardo Ivan
Recent changes in the global economy are having a large impact on our daily life. The price of oil is increasing and reserves are shrinking every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking to alternative energy sources for producing electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracking (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and prior knowledge of the load or load matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are monitoring climate conditions, eliminating temperature and solar irradiance sensors, reducing the cost of a photovoltaic inverter system, and developing new algorithms to be integrated with maximum power point tracking algorithms. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.
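As one concrete illustration of the load-matching idea behind the third MPPT algorithm (a textbook relation for an ideal buck-boost converter in continuous conduction, stated under our own assumptions rather than taken from this work), the input resistance presented to the PVM and the duty cycle that matches it to the module's optimal operating resistance are:

$$
R_{\mathrm{in}} = R_{\mathrm{load}}\left(\frac{1-D}{D}\right)^{2}
\quad\Longrightarrow\quad
D^{*} = \frac{1}{1 + \sqrt{R_{\mathrm{opt}}/R_{\mathrm{load}}}},
\qquad R_{\mathrm{opt}} = \frac{V_{\mathrm{mpp}}}{I_{\mathrm{mpp}}}.
$$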
Model-Based Reasoning in Humans Becomes Automatic with Training.
Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J
2015-09-01
Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
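To make the algorithmic distinction above concrete, here is a minimal sketch (ours, not the study's task code) contrasting a model-free Q-learning update with model-based planning over a learnt transition and reward model.

```python
import numpy as np

n_states, n_actions, gamma, alpha = 4, 2, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def model_free_update(s, a, r, s_next):
    """Model-free (habitual) learning: cache action values directly from experience."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Model-based (goal-directed) control: plan with a learnt model of the world.
T = np.full((n_states, n_actions, n_states), 1.0 / n_states)  # learnt transitions
R = np.zeros((n_states, n_actions))                           # learnt rewards

def model_based_values(n_iters=50):
    """Value iteration over the learnt model instead of over cached values."""
    V = np.zeros(n_states)
    for _ in range(n_iters):
        V = (R + gamma * (T @ V)).max(axis=1)
    return V
```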
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, a lot of recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0
Laser induced heat source distribution in bio-tissues
NASA Astrophysics Data System (ADS)
Li, Xiaoxia; Fan, Shifu; Zhao, Youquan
2006-09-01
During numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution should be formulated and constitutes the source term in the heat transfer equation. Usually the solution of the light radiative transport equation is given for extreme conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), predominant scattering (diffusion approximation), and so on. But in specific conditions, these solutions induce different errors. The commonly used Monte Carlo simulation (MCS) is more universal and exact but has difficulty dealing with dynamic parameters and fast simulation. Its area partition pattern is also limiting when the finite element method (FEM) is applied to solve the bio-heat transfer partial differential equation. Heat source plots from the above methods differ considerably from MCS results. In order to solve this problem, by analyzing different optical actions such as reflection, scattering and absorption on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model and the diffusion approximation model. First, the scattering coefficient was replaced by the reduced scattering coefficient in the beam-broadening model, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computational results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than past methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.
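For reference, the effective attenuation coefficient invoked above is the standard diffusion-approximation quantity combining absorption and reduced scattering; this is quoted as a standard relation, not from the paper:

$$
\mu_{\mathrm{eff}} = \sqrt{3\,\mu_a\left(\mu_a + \mu_s'\right)}, \qquad \mu_s' = (1 - g)\,\mu_s ,
$$

where $\mu_a$ is the absorption coefficient, $\mu_s$ the scattering coefficient, and $g$ the scattering anisotropy factor.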
Automatic cortical segmentation in the developing brain.
Xue, Hui; Srinivasan, Latha; Jiang, Shuzhou; Rutherford, Mary; Edwards, A David; Rueckert, Daniel; Hajnal, Jo V
2007-01-01
The segmentation of neonatal cortex from magnetic resonance (MR) images is much more challenging than the segmentation of cortex in adults. The main reason is the inverted contrast between grey matter (GM) and white matter (WM) that occurs when myelination is incomplete. This causes mislabeled partial volume voxels, especially at the interface between GM and cerebrospinal fluid (CSF). We propose a fully automatic cortical segmentation algorithm, detecting these mislabeled voxels using a knowledge-based approach and correcting errors by adjusting local priors to favor the correct classification. Our results show that the proposed algorithm corrects errors in the segmentation of both GM and WM compared to the classic EM scheme. The segmentation algorithm has been tested on 25 neonates with the gestational ages ranging from approximately 27 to 45 weeks. Quantitative comparison to the manual segmentation demonstrates good performance of the method (mean Dice similarity: 0.758 +/- 0.037 for GM and 0.794 +/- 0.078 for WM).
A theoretical study of thorium titanium-based alloys
NASA Astrophysics Data System (ADS)
Obodo, K. O.; Chetty, N.
2013-09-01
Using theoretical quantum chemical methods, we investigate the dearth of ordered alloys involving thorium and titanium. Whereas both these elements are known to alloy very readily with various other elements, for example with oxygen, current experimental data suggests that Th and Ti do not alloy very readily with each other. In this work, we consider a variety of ordered alloys at varying stoichiometries involving these elements within the framework of density functional theory using the generalized gradient approximation for the exchange and correlation functional. By probing the energetics, electronic, phonon and elastic properties of these systems, we confirm the scarcity of ordered alloys involving Th and Ti, since for a variety of reasons many of the systems that we considered were found to be unfavorable. However, our investigations resulted in one plausible ordered structure: We propose ThTi3 in the Cr3Si structure as a metastable ordered alloy.
Spacecraft self-contamination due to back-scattering of outgas products
NASA Technical Reports Server (NTRS)
Robertson, S. J.
1976-01-01
The back-scattering of outgas contamination near an orbiting spacecraft due to intermolecular collisions was analyzed. Analytical tools were developed for making reasonably accurate quantitative estimates of the outgas contamination return flux, given a knowledge of the pertinent spacecraft and orbit conditions. Two basic collision mechanisms were considered: (1) collisions involving only outgas molecules (self-scattering) and (2) collisions between outgas molecules and molecules in the ambient atmosphere (ambient-scattering). For simplicity, the geometry was idealized to a uniformly outgassing sphere and to a disk oriented normal to the freestream. The method of solution involved an integration of an approximation of the Boltzmann kinetic equation known as the BGK (or Krook) model equation. Results were obtained in the form of simple equations relating outgas return flux to spacecraft and orbit parameters. Results were compared with previous analyses based on more simplistic models of the collision processes.
Inherent limitations of probabilistic models for protein-DNA binding specificity
Ruan, Shuxiang
2017-01-01
The specificities of transcription factors are most commonly represented with probabilistic models. These models provide a probability for each base occurring at each position within the binding site and the positions are assumed to contribute independently. The model is simple and intuitive and is the basis for many motif discovery algorithms. However, the model also has inherent limitations that prevent it from accurately representing true binding probabilities, especially for the highest affinity sites under conditions of high protein concentration. The limitations are not due to the assumption of independence between positions but rather are caused by the non-linear relationship between binding affinity and binding probability and the fact that independent normalization at each position skews the site probabilities. Generally probabilistic models are reasonably good approximations, but new high-throughput methods allow for biophysical models with increased accuracy that should be used whenever possible. PMID:28686588
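The nonlinearity pointed to in this abstract can be illustrated with a simple two-state occupancy model: binding probability saturates at high protein concentration, so relative affinities are not proportional to relative binding probabilities. The sketch below is a hedged illustration with assumed free energies and concentrations, not the biophysical models referenced in the abstract.

import numpy as np

# Occupancy p = [P] / ([P] + Kd), with Kd = exp(delta_g / kT).
def occupancy(delta_g, protein_conc, kT=0.593):  # kT in kcal/mol near 298 K
    kd = np.exp(delta_g / kT)                    # dissociation constant (arbitrary units)
    return protein_conc / (protein_conc + kd)

# Two sites differing by 1 kcal/mol in binding free energy (assumed numbers):
# the ratio of binding probabilities shrinks toward 1 as concentration grows.
for conc in (0.01, 1.0, 100.0):
    p_strong = occupancy(-3.0, conc)
    p_weak = occupancy(-2.0, conc)
    print(conc, p_strong / p_weak)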
A method for calculating aerodynamic heating on sounding rocket tangent ogive noses.
NASA Technical Reports Server (NTRS)
Wing, L. D.
1973-01-01
A method is presented for calculating the aerodynamic heating and shear stresses at the wall for tangent ogive noses that are slender enough to maintain an attached nose shock through that portion of flight during which heat transfer from the boundary layer to the wall is significant. The lower entropy of the attached nose shock combined with the inclusion of the streamwise pressure gradient yields a reasonable estimate of the actual flow conditions. Both laminar and turbulent boundary layers are examined and an approximation of the effects of (up to) moderate angles-of-attack is included in the analysis. The analytical method has been programmed in FORTRAN IV for an IBM 360/91 computer.
A method for calculating aerodynamic heating on sounding rocket tangent ogive noses
NASA Technical Reports Server (NTRS)
Wing, L. D.
1972-01-01
A method is presented for calculating the aerodynamic heating and shear stresses at the wall for tangent ogive noses that are slender enough to maintain an attached nose shock through that portion of flight during which heat transfer from the boundary layer to the wall is significant. The lower entropy of the attached nose shock combined with the inclusion of the streamwise pressure gradient yields a reasonable estimate of the actual flow conditions. Both laminar and turbulent boundary layers are examined and an approximation of the effects of (up to) moderate angles-of-attack is included in the analysis. The analytical method has been programmed in FORTRAN 4 for an IBM 360/91 computer.
Choi, Ji-Hye; Gwak, Mi-Jin; Chung, Seo-Jin; Kim, Kwang-Ok; O'Mahony, Michael; Ishii, Rie; Bae, Ye-Won
2015-06-01
The present study cross-culturally investigated the drivers of liking for traditional and ethnic chicken marinades using descriptive analysis and consumer taste tests incorporating the check-all-that-apply (CATA) method. Seventy-three Koreans and 86 US consumers participated. The tested sauces comprised three tomato-based sauces, a teriyaki-based sauce and a Korean spicy seasoning-based sauce. Chicken breasts were marinated with each of the five barbecue sauces, grilled and served for evaluation. Descriptive analysis and consumer taste tests were conducted. Consumers rated the acceptance on a hedonic scale and checked the reasons for (dis)liking by the CATA method for each sauce. A general linear model, multiple factor analysis and chi-square analysis were conducted using the data. The results showed that the preference orders of the samples between Koreans and US consumers were strikingly similar to each other. However, the reasons for (dis)liking the samples differed cross-culturally. The drivers of liking of two sauces sharing relatively similar sensory profiles but differing significantly in hedonic ratings were effectively delineated by reasons of (dis)liking CATA results. Reasons for (dis)liking CATA proved to be a powerful supporting method to understand the internal drivers of liking which can be overlooked by generic descriptive analysis. © 2014 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Bataev, Vadim A.; Pupyshev, Vladimir I.; Godunov, Igor A.
2016-05-01
The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for furfural and some other five-membered heterocyclic aromatic aldehydes using the MP2/6-311G** quantum chemical approximation. It is demonstrated that traditional one-dimensional models of internal rotation have only limited applicability for the molecules studied. The reason is the strong kinematic coupling between the rotation of the CHO group and its out-of-plane deformation in these molecules. A computational procedure based on a two-dimensional approximation is considered for the low-lying vibrational states as more adequate to the problem.
Approximate maximum likelihood decoding of block codes
NASA Technical Reports Server (NTRS)
Greenberger, H. J.
1979-01-01
Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
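The candidate-selection idea described above can be sketched in a hedged, Chase-style form: use per-symbol reliabilities to flip test patterns on the least reliable positions of the hard decision, decode each test pattern, and keep the candidate with the best soft metric. The (7,4) Hamming code and the brute-force "hard decoder" below are stand-ins for illustration only, not the specific scheme developed in the report.

import itertools
import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
codebook = np.array([(np.array(msg) @ G) % 2
                     for msg in itertools.product([0, 1], repeat=4)])

def hard_decode(bits):
    """Nearest codeword in Hamming distance (toy stand-in for algebraic decoding)."""
    return codebook[np.argmin(np.sum(codebook != bits, axis=1))]

def chase_like_decode(received_llr, num_unreliable=3):
    hard = (received_llr < 0).astype(int)                       # hard decisions from LLRs
    weak = np.argsort(np.abs(received_llr))[:num_unreliable]    # least reliable symbols
    best, best_metric = None, -np.inf
    for pattern in itertools.product([0, 1], repeat=num_unreliable):
        test = hard.copy()
        test[weak] ^= np.array(pattern)                         # flip a test pattern
        cand = hard_decode(test)                                # candidate codeword
        metric = np.sum((1 - 2 * cand) * received_llr)          # soft correlation metric
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# Example: noisy LLRs for an all-zero codeword (positive LLR favors bit 0)
print(chase_like_decode(np.array([2.1, -0.3, 1.5, 0.2, 1.8, 0.9, -0.1])))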
NASA Astrophysics Data System (ADS)
Zeng, Jing; Huang, Handong; Li, Huijie; Miao, Yuxin; Wen, Junxiang; Zhou, Fei
2017-12-01
The main emphasis of exploration and development is shifting from simple structural reservoirs to complex reservoirs, which are characterized by complex structure, thin reservoir thickness and large burial depth. Faced with these complex geological features, hydrocarbon detection technology provides a direct indication of changes in hydrocarbon reservoirs and a good approach for delimiting the distribution of underground reservoirs. It is common to utilize the time-frequency (TF) features of seismic data in detecting hydrocarbon reservoirs. We therefore study the complex-domain matching pursuit (CDMP) method and propose some improvements. The first is the introduction of a scale parameter, which corrects the defect that atomic waveforms change only with the frequency parameter. Its introduction not only decomposes the seismic signal with high accuracy and efficiency but also reduces iterations. We also integrate a jumping search with an ergodic search to improve computational efficiency while maintaining reasonable accuracy. We then combine the improved CDMP with the Wigner-Ville distribution to obtain a high-resolution TF spectrum. A one-dimensional modeling experiment demonstrates the validity of our method. Based on the low-frequency-domain reflection coefficient in fluid-saturated porous media, we finally obtain an approximation formula for the mobility attributes of the reservoir fluid. This approximation formula is used as a hydrocarbon identification factor to predict the deep-water gas-bearing sands of the M oil field in the South China Sea. The results are consistent with the actual well test results, and our method can help inform future exploration of deep-water gas reservoirs.
True or false: do 5-year-olds understand belief?
Fabricius, William V; Boyer, Ty W; Weimer, Amy A; Carroll, Kathleen
2010-11-01
In 3 studies (N = 188) we tested the hypothesis that children use a perceptual access approach to reason about mental states before they understand beliefs. The perceptual access hypothesis predicts a U-shaped developmental pattern of performance in true belief tasks, in which 3-year-olds who reason about reality should succeed, 4- to 5-year-olds who use perceptual access reasoning should fail, and older children who use belief reasoning should succeed. The results of Study 1 revealed the predicted pattern in 2 different true belief tasks. The results of Study 2 disconfirmed several alternate explanations based on possible pragmatic and inhibitory demands of the true belief tasks. In Study 3, we compared 2 methods of classifying individuals according to which 1 of the 3 reasoning strategies (reality reasoning, perceptual access reasoning, belief reasoning) they used. The 2 methods gave converging results. Both methods indicated that the majority of children used the same approach across tasks and that it was not until after 6 years of age that most children reasoned about beliefs. We conclude that because most prior studies have failed to detect young children's use of perceptual access reasoning, they have overestimated their understanding of false beliefs. We outline several theoretical implications that follow from the perceptual access hypothesis.
Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis
2017-01-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley W.
2009-01-01
Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization (MDAO) tool. This tool will leverage existing tools and practices and allow the easy integration and adoption of new state-of-the-art software. Modern aircraft design at transonic speeds is a challenging task because of the computation time required for unsteady aeroelastic analysis using a computational fluid dynamics (CFD) code. Design approaches in this speed regime are based mainly on manual trial and error, and the time required for time-domain unsteady CFD computations considerably slows the whole design process, since these analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and an unsteady aerodynamic approximation. The method requires the unsteady transonic aerodynamics to be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating aerodynamic influence coefficient (AIC) matrices in the transonic speed regime. Unsteady CFD computations are needed only for the important columns of an AIC matrix, which correspond to the primary flutter modes. Order-reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem, and transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, and root-locus. Such a methodology could be incorporated into an MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2, which was designed, built, and tested at NASA Dryden Flight Research Center. The results from the full-order model and the approximate reduced-order model are analyzed and compared.
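Of the order-reduction techniques named above, Guyan (static) condensation is easy to make concrete. The sketch below is a minimal illustration on an invented 4-degree-of-freedom spring-mass chain; the matrices and the master/slave split are assumptions for the example, not data from the study.

import numpy as np

def guyan_reduce(K, M, master):
    """Condense stiffness K and mass M onto the 'master' degrees of freedom."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Ksm = K[np.ix_(slave, master)]
    Kss = K[np.ix_(slave, slave)]
    # Transformation: slave DOFs follow the masters statically, u_s = -Kss^-1 Ksm u_m
    T = np.vstack([np.eye(len(master)), -np.linalg.solve(Kss, Ksm)])
    P = np.vstack([np.eye(n)[master], np.eye(n)[slave]])  # reorder to (master, slave)
    K_red = T.T @ (P @ K @ P.T) @ T
    M_red = T.T @ (P @ M @ P.T) @ T
    return K_red, M_red

# Toy 4-DOF spring-mass chain (assumed values), keeping DOFs 0 and 3 as masters
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
M = np.eye(4)
K_red, M_red = guyan_reduce(K, M, master=[0, 3])
print(K_red, M_red, sep="\n")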
Fault Diagnosis Method for a Mine Hoist in the Internet of Things Environment.
Li, Juanli; Xie, Jiacheng; Yang, Zhaojian; Li, Junjie
2018-06-13
To reduce the difficulty of acquiring and transmitting data in mine hoist fault diagnosis systems, and to address their low efficiency and poorly grounded reasoning processes, a fault diagnosis method for mine hoisting equipment based on the Internet of Things (IoT) is proposed in this study. The IoT requires three basic architectural layers: a perception layer, a network layer, and an application layer. In the perception layer, we designed a collaborative acquisition system based on ZigBee short-distance wireless communication technology for key components of the mine hoisting equipment, achieving real-time data acquisition. The network layer was created using long-distance wireless General Packet Radio Service (GPRS) transmission, and the transmission and reception platforms for remote data transfer were able to transmit data in real time. A fault diagnosis reasoning method based on an improved Dezert-Smarandache Theory (DSmT) of evidence is proposed and used to perform fault diagnosis reasoning. Based on interactive technology, a user-friendly, visualized fault diagnosis platform is created in the application layer. The method is then verified: a fault diagnosis test of the mine hoisting mechanism shows that the proposed method obtains complete diagnostic data, and the diagnosis results have high accuracy and reliability.
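The abstract's reasoning layer uses an improved DSmT combination rule, which is not reproduced here; as a hedged illustration of the fusion step, the sketch below shows the classical Dempster rule for combining two sensors' basic probability assignments (BPAs) over fault hypotheses. The fault labels and BPA values are invented for the example.

from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPAs given as {frozenset of hypotheses: mass} (classical Dempster rule)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

bearing, gear, normal = frozenset({"bearing"}), frozenset({"gear"}), frozenset({"normal"})
theta = bearing | gear | normal                # full frame of discernment (ignorance)

m_vibration = {bearing: 0.6, gear: 0.2, theta: 0.2}      # sensor 1 BPA (assumed)
m_temperature = {bearing: 0.5, normal: 0.3, theta: 0.2}  # sensor 2 BPA (assumed)
print(dempster_combine(m_vibration, m_temperature))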
NASA Astrophysics Data System (ADS)
Grüning, M.; Gritsenko, O. V.; Baerends, E. J.
2002-04-01
An approximate Kohn-Sham (KS) exchange potential vxσCEDA is developed, based on the common energy denominator approximation (CEDA) for the static orbital Green's function, which preserves the essential structure of the density response function. vxσCEDA is an explicit functional of the occupied KS orbitals, which has the Slater vSσ and response vrespσCEDA potentials as its components. The latter exhibits the characteristic step structure with "diagonal" contributions from the orbital densities |ψiσ|2, as well as "off-diagonal" ones from the occupied-occupied orbital products ψiσψj(≠i)σ*. Comparison of the results of atomic and molecular ground-state CEDA calculations with those of the Krieger-Li-Iafrate (KLI), exact exchange (EXX), and Hartree-Fock (HF) methods shows that both the KLI and CEDA potentials can be considered very good analytical "closure approximations" to the exact KS exchange potential. The total CEDA and KLI energies nearly coincide with the EXX ones, and the corresponding orbital energies ɛiσ are rather close to each other for the light atoms and small molecules considered. The CEDA, KLI, and EXX ɛiσ values provide the qualitatively correct order of ionizations and give an estimate of VIPs comparable to that of the HF Koopmans' theorem. However, the additional off-diagonal orbital structure of vxσCEDA appears to be essential for the calculated response properties of molecular chains. KLI already considerably improves the calculated (hyper)polarizabilities of the prototype hydrogen chains Hn over the local density approximation (LDA) and standard generalized gradient approximations (GGAs), while the CEDA results are definitely an improvement over the KLI ones. The reasons for this success are the specific orbital structures of the CEDA and KLI response potentials, which produce in an external field an ultranonlocal field-counteracting exchange potential.
Analyzing the errors of DFT approximations for compressed water systems
NASA Astrophysics Data System (ADS)
Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.
2014-07-01
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
Analyzing the errors of DFT approximations for compressed water systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfè, D.; London Centre for Nanotechnology, UCL, London WC1H 0AH; Thomas Young Centre, UCL, London WC1H 0AH
We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
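In the spirit of the piecewise-constant scheme described above, a hedged sketch of how an infinite-dimensional hereditary (delay) state can be replaced by a finite-dimensional difference equation is shown below. The scalar equation x'(t) = a·x(t) + b·x(t − r), the coefficients, and the step size are assumptions for illustration, not the control problems treated in the report.

import numpy as np

a, b, r = -1.0, -0.5, 1.0
N = 20                        # number of history samples over one delay interval
dt = r / N

def step(state):
    """state = [x(t), x(t-dt), ..., x(t-r)]; advance one explicit-Euler time step."""
    x_now, x_delayed = state[0], state[-1]
    x_next = x_now + dt * (a * x_now + b * x_delayed)
    return np.concatenate(([x_next], state[:-1]))    # shift the history window

state = np.ones(N + 1)        # constant initial history x(s) = 1 for s in [-r, 0]
for _ in range(200):
    state = step(state)
print(state[0])               # approximate x(t) at t = 200*dt = 10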
Heuristic errors in clinical reasoning.
Rylander, Melanie; Guerrasio, Jeannette
2016-08-01
Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.
Survey of HEPA filter experience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, E.H.
1982-07-01
A survey of high efficiency particulate air (HEPA) filter applications and experience at Department of Energy (DOE) sites was conducted to provide an overview of the reasons and magnitude of HEPA filter changeouts and failures. Results indicated that approximately 58% of the filters surveyed were changed out in the three year study period, and some 18% of all filters were changed out more than once. Most changeouts (63%) were due to the existence of a high pressure drop across the filter, indicative of filter plugging. Other reasons for changeout included leak-test failure (15%), preventive maintenance service life limit (13%), suspected damage (5%) and radiation buildup (4%). Filter failures occurred with approximately 12% of all installed filters. Of these failures, most (64%) occurred for unknown or unreported reasons. Handling or installation damage accounted for an additional 19% of reported failures. Media ruptures, filter-frame failures and seal failures each accounted for approximately 5 to 6% of the reported failures.
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
Mones, Letif; Jones, Andrew; Götz, Andreas W.; ...
2015-02-03
We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
Mones, Letif; Jones, Andrew; Götz, Andreas W; Laino, Teodoro; Walker, Ross C; Leimkuhler, Ben; Csányi, Gábor; Bernstein, Noam
2015-01-01
The implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER are presented. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:25649827
The adaptive buffered force QM/MM method in the CP2K and AMBER software packages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mones, Letif; Jones, Andrew; Götz, Andreas W.
We present the implementation and validation of the adaptive buffered force (AdBF) quantum-mechanics/molecular-mechanics (QM/MM) method in two popular packages, CP2K and AMBER. The implementations build on the existing QM/MM functionality in each code, extending it to allow for redefinition of the QM and MM regions during the simulation and reducing QM-MM interface errors by discarding forces near the boundary according to the buffered force-mixing approach. New adaptive thermostats, needed by force-mixing methods, are also implemented. Different variants of the method are benchmarked by simulating the structure of bulk water, water autoprotolysis in the presence of zinc and dimethyl-phosphate hydrolysis using various semiempirical Hamiltonians and density functional theory as the QM model. It is shown that with suitable parameters, based on force convergence tests, the AdBF QM/MM scheme can provide an accurate approximation of the structure in the dynamical QM region matching the corresponding fully QM simulations, as well as reproducing the correct energetics in all cases. Adaptive unbuffered force-mixing and adaptive conventional QM/MM methods also provide reasonable results for some systems, but are more likely to suffer from instabilities and inaccuracies.
Worked Examples Leads to Better Performance in Analyzing and Solving Real-Life Decision Cases
ERIC Educational Resources Information Center
Cevik, Yasemin Demiraslan; Andre, Thomas
2012-01-01
This study compared the impact of three types of case-based methods (worked example, faded worked example, and case-based reasoning) on preservice teachers' (n=71) decision making and reasoning related to realistic classroom management situations. Participants in this study received a short-term implementation of one of these three major…
Dunford, Benjamin B; Perrigino, Matthew; Tucker, Sharon J; Gaston, Cynthia L; Young, Jim; Vermace, Beverly J; Walroth, Todd A; Buening, Natalie R; Skillman, Katherine L; Berndt, Dawn
2017-09-01
We investigated nurse perceptions of smart infusion medication pumps to provide evidence-based insights on how to help reduce workarounds and improve compliance with patient safety policies. Specifically, we investigated the following 3 research questions: (1) What are nurses' current attitudes about smart infusion pumps? (2) What do nurses think are the causes of smart infusion pump workarounds? and (3) To whom do nurses turn for smart infusion pump training and troubleshooting? We surveyed a large number of nurses (N = 818) in 3 U.S.-based health care systems to address the research questions above. We assessed nurses' opinions about smart infusion pumps, organizational perceptions, and the reasons for workarounds using a voluntary and anonymous Web-based survey. Using qualitative research methods, we coded open-ended responses to questions about the reasons for workarounds to organize responses into useful categories. The nurses reported widespread satisfaction with smart infusion pumps. However, they reported numerous organizational, cultural, and psychological causes of smart pump workarounds. Of 1029 open-ended responses to the question "why do smart pump workarounds occur?" approximately 44% of the causes were technology related, 47% were organization related, and 9% were related to individual factors. Finally, an overwhelming majority of nurses reported seeking solutions to smart pump problems from coworkers and being trained primarily on the job. Hospitals may significantly improve adherence to smart pump safety features by addressing the nontechnical causes of workarounds and by providing more leadership and formalized training for resolving smart pump-related problems.
Bifurcations in models of a society of reasonable contrarians and conformists
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Rechtman, Raúl
2015-10-01
We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.
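A hedged, schematic sketch of the kind of mean-field map studied in this abstract is given below. The sigmoid transition rules are invented stand-ins, not the authors' exact update rules; they only illustrate how a conformist-dominated mixture settles to a polarized fixed point while a contrarian-dominated mixture produces oscillations.

import numpy as np

def next_magnetization(m, frac_conformists, beta=4.0, threshold=0.8):
    # Conformists follow the average opinion; "reasonable contrarians" oppose it
    # unless the majority is overwhelming (|m| above the assumed threshold).
    conform = np.tanh(beta * m)
    contrarian = np.where(abs(m) < threshold, -np.tanh(beta * m), np.tanh(beta * m))
    return frac_conformists * conform + (1.0 - frac_conformists) * contrarian

for p in (0.9, 0.1):                     # conformist-majority vs contrarian-majority society
    m = 0.2
    trajectory = []
    for _ in range(60):
        m = next_magnetization(m, p)
        trajectory.append(round(float(m), 3))
    print(p, trajectory[-6:])            # settles to a polarized state vs oscillates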
Bifurcations in models of a society of reasonable contrarians and conformists.
Bagnoli, Franco; Rechtman, Raúl
2015-10-01
We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.
A geometric modeler based on a dual-geometry representation polyhedra and rational b-splines
NASA Technical Reports Server (NTRS)
Klosterman, A. L.
1984-01-01
For speed and data base reasons, solid geometric modeling of large complex practical systems is usually approximated by a polyhedra representation. Precise parametric surface and implicit algebraic modelers are available but it is not yet practical to model the same level of system complexity with these precise modelers. In response to this contrast the GEOMOD geometric modeling system was built so that a polyhedra abstraction of the geometry would be available for interactive modeling without losing the precise definition of the geometry. Part of the reason that polyhedra modelers are effective is that all bounded surfaces can be represented in a single canonical format (i.e., sets of planar polygons). This permits a very simple and compact data structure. Nonuniform rational B-splines are currently the best representation to describe a very large class of geometry precisely with one canonical format. The specific capabilities of the modeler are described.
Prediction of destabilizing blade tip forces for shrouded and unshrouded turbines
NASA Technical Reports Server (NTRS)
Qiu, Y. J.; Martinezsanchez, M.
1985-01-01
The effect of a nonuniform flow field on the Alford force calculation is investigated. The ideas used here are based on those developed by Horlock and Greitzer. It is shown that the nonuniformity of the flow field does contribute to the Alford force calculation. An attempt is also made to include the effect of whirl speed. The values predicted by the model are compared with those obtained experimentally by Urlicks and Wohlrab. The possibility of using existing turbine tip loss correlations to predict beta is also explored. The nonuniform flow field induced by the tip clearance variation tends to increase the resultant destabilizing force over and above what would be predicted on the basis of the local variation of efficiency. On the other hand, the pressure force due to the nonuniform inlet and exit pressure also plays a part even for unshrouded blades, and this counteracts the flow field effects, so that the simple Alford prediction remains a reasonable approximation. Once the efficiency variation with clearance is known, the presented model gives a slightly overpredicted, but reasonably accurate, destabilizing force. In the absence of efficiency vs. clearance data, an empirical tip loss coefficient can be used to give a reasonable prediction of the destabilizing force. To a first approximation, the whirl does have a damping effect, but only of small magnitude, and thus it can be ignored for some purposes.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-16
... under Export Control Classification Number (``ECCN'') 0A982, controlled for Crime Control reasons, and..., classified under ECCN 0A982, controlled for Crime Control reasons, and valued at approximately $112, from the... kit, items classified under ECCN 0A982, controlled for Crime Control reasons, and valued at...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacvarov, D.C.
1981-01-01
A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The utilized approach of applying the finite element method for probabilistic risk assessment is demonstrated to be very powerful. The reasons for this are two. First, the finite element method is inherently suitable for analysis of three dimensional spaces where the parameters, such as three variate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three dimensional probability spaces thus yielding high accuracy in critical regions, such as the area of the low probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem to a manageable low level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electro-magnetic transients program. Compared to other available methods the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude computationally more efficient. The method is especially suited for accurate assessment of rare, very low probability events.
ERIC Educational Resources Information Center
Hedeker, Donald; And Others
1996-01-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example, M. Fishbein and I. Ajzen's theory of reasoned action is examined. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate individual influences…
Proportional reasoning as a heuristic-based process: time constraint and dual task considerations.
Gillard, Ellen; Van Dooren, Wim; Schaeken, Walter; Verschaffel, Lieven
2009-01-01
The present study interprets the overuse of proportional solution methods from a dual process framework. Dual process theories claim that analytic operations involve time-consuming executive processing, whereas heuristic operations are fast and automatic. In two experiments to test whether proportional reasoning is heuristic-based, the participants solved "proportional" problems, for which proportional solution methods provide correct answers, and "nonproportional" problems known to elicit incorrect answers based on the assumption of proportionality. In Experiment 1, the available solution time was restricted. In Experiment 2, the executive resources were burdened with a secondary task. Both manipulations induced an increase in proportional answers and a decrease in correct answers to nonproportional problems. These results support the hypothesis that the choice for proportional methods is heuristic-based.
Notter, Dominic A
2015-09-01
Particulate matter (PM) causes severe damage to human health globally. Airborne PM is a mixture of solid and liquid droplets suspended in air. It consists of organic and inorganic components, and the particles of concern range in size from a few nanometers to approximately 10μm. The complexity of PM is considered to be the reason for the poor understanding of PM and may also be the reason why PM in environmental impact assessment is poorly defined. Currently, life cycle impact assessment is unable to differentiate highly toxic soot particles from relatively harmless sea salt. The aim of this article is to present a new impact assessment for PM where the impact of PM is modeled based on particle physico-chemical properties. With the new method, 2781 characterization factors that account for particle mass, particle number concentration, particle size, chemical composition and solubility were calculated. Because particle sizes vary over four orders of magnitudes, a sound assessment of PM requires that the exposure model includes deposition of particles in the lungs and that the fate model includes coagulation as a removal mechanism for ultrafine particles. The effects model combines effects from particle size, solubility and chemical composition. The first results from case studies suggest that PM that stems from emissions generally assumed to be highly toxic (e.g. biomass combustion and fossil fuel combustion) might lead to results that are similar compared with an assessment of PM using established methods. However, if harmless PM emissions are emitted, established methods enormously overestimate the damage. The new impact assessment allows a high resolution of the damage allocatable to different size fractions or chemical components. This feature supports a more efficient optimization of processes and products when combating air pollution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Silvestrelli, Pier Luigi; Ambrosetti, Alberto
2014-03-28
The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X=Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation, which is based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
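A minimal sketch of the grid-refinement Richardson extrapolation the abstract builds on is shown below: three solutions on systematically refined grids give an observed order of accuracy and an extrapolated, nearly grid-independent estimate. The sample solution values and refinement ratio are assumptions for illustration.

import math

def richardson(f_fine, f_medium, f_coarse, r):
    """f_* are the same quantity on fine/medium/coarse grids; r is the grid refinement ratio."""
    # Observed order of accuracy from the three solutions
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    # Richardson-extrapolated estimate of the grid-converged value
    f_extrapolated = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    relative_error = abs((f_fine - f_medium) / f_fine)
    return p, f_extrapolated, relative_error

print(richardson(f_fine=1.021, f_medium=1.084, f_coarse=1.210, r=2.0))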
Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.
Fang, Hongyan; Zhang, Hong; Yang, Yaning
2016-07-01
Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power to detect rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way of identifying susceptible genes. Rare variants typically have low minor allele frequencies, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than existing methods. © 2016 John Wiley & Sons Ltd/University College London.
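As a hedged, simplified stand-in for the idea (not the exact PAST statistic): pool the minor-allele counts of rare variants within a gene for cases and controls, treat the pooled totals as approximately Poisson, and test equality of the per-subject rates with a score (conditional binomial) test. The genotype data below are simulated.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_cases, n_controls, n_variants = 500, 500, 20
cases = rng.binomial(2, 0.01, size=(n_cases, n_variants))       # rare alleles enriched in cases
controls = rng.binomial(2, 0.005, size=(n_controls, n_variants))

k_cases = cases.sum()                      # pooled minor-allele count in cases
k_controls = controls.sum()
total = k_cases + k_controls

# Under H0 (equal per-subject rates), k_cases | total ~ Binomial(total, n_cases/(n_cases+n_controls))
p0 = n_cases / (n_cases + n_controls)
z = (k_cases - total * p0) / np.sqrt(total * p0 * (1 - p0))      # score statistic
p_value = 2 * stats.norm.sf(abs(z))
print(k_cases, k_controls, z, p_value)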
Robust kernel collaborative representation for face recognition
NASA Astrophysics Data System (ADS)
Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong
2015-05-01
One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show the varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, recognition performance is sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We consider that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new objective function to more closely match the representation coefficients generated from the original and virtual training sets. To further improve robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples; the noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work provides a simple and feasible way to obtain virtual face samples: Gaussian noise (or other types of noise) is imposed on the original training samples to produce possible variations of them. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, such as CRC and kernel CRC.
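A hedged, simplified sketch of the two ingredients above is given below: virtual training samples obtained by adding Gaussian noise to the originals, and a collaborative-representation classifier. For brevity this version works in the input space (linear CRC) rather than in a kernel feature space, and the random vectors stand in for face images.

import numpy as np

rng = np.random.default_rng(1)

def crc_classify(X, labels, y, lam=0.01):
    """Collaborative representation: code y over all training columns, assign by class-wise residual."""
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

# Toy data: 3 subjects, 2 original "images" each, 64-dimensional features (assumed)
X = rng.normal(size=(64, 6))
labels = np.array([0, 0, 1, 1, 2, 2])

# Virtual samples: noisy copies of each original training image
X_virtual = X + rng.normal(scale=0.1, size=X.shape)
X_aug = np.hstack([X, X_virtual])
labels_aug = np.concatenate([labels, labels])

y = X[:, 2] + rng.normal(scale=0.2, size=64)   # a noisy probe of subject 1
print(crc_classify(X_aug, labels_aug, y))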
Williams, Rebecca J; Tse, Tony; DiPiazza, Katelyn; Zarin, Deborah A
2015-01-01
Clinical trials that end prematurely (or "terminate") raise financial, ethical, and scientific concerns. The extent to which the results of such trials are disseminated and the reasons for termination have not been well characterized. A cross-sectional, descriptive study of terminated clinical trials posted on the ClinicalTrials.gov results database as of February 2013 was conducted. The main outcomes were to characterize the availability of primary outcome data on ClinicalTrials.gov and in the published literature and to identify the reasons for trial termination. Approximately 12% of trials with results posted on the ClinicalTrials.gov results database (905/7,646) were terminated. Most trials were terminated for reasons other than accumulated data from the trial (68%; 619/905), with an insufficient rate of accrual being the lead reason for termination among these trials (57%; 350/619). Of the remaining trials, 21% (193/905) were terminated based on data from the trial (findings of efficacy or toxicity) and 10% (93/905) did not specify a reason. Overall, data for a primary outcome measure were available on ClinicalTrials.gov and in the published literature for 72% (648/905) and 22% (198/905) of trials, respectively. Primary outcome data were reported on the ClinicalTrials.gov results database and in the published literature more frequently (91% and 46%, respectively) when the decision to terminate was based on data from the trial. Trials terminate for a variety of reasons, not all of which reflect failures in the process or an inability to achieve the intended goals. Primary outcome data were reported most often when termination was based on data from the trial. Further research is needed to identify best practices for disseminating the experience and data resulting from terminated trials in order to help ensure maximal societal benefit from the investments of trial participants and others involved with the study.
NASA Astrophysics Data System (ADS)
Rao, T. R. Ramesh
2018-04-01
In this paper, we study an analytical method based on the reduced differential transform method coupled with the Sumudu transform through Padé approximants. The proposed method may be considered an alternative approach for finding the exact solution of the gas dynamics equation in an effective manner. This method does not require any discretization, linearization or perturbation.
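The Padé-approximant step mentioned above can be sketched in a hedged form: build an [L/M] rational approximant from the coefficients of a truncated series. Here the Taylor series of exp(x) stands in for the series solution produced by the transform method; it is an illustration only.

import numpy as np
from math import factorial

def pade(coeffs, L, M):
    """Return numerator a and denominator b (ascending powers) of the [L/M] Padé approximant."""
    c = np.asarray(coeffs, dtype=float)
    # Denominator: sum_{j=1..M} b_j * c_{L+k-j} = -c_{L+k}, for k = 1..M, with b_0 = 1
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = -c[L + 1:L + M + 1]
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator: a_i = sum_{j=0..min(i,M)} b_j * c_{i-j}
    a = np.array([sum(b[j] * c[i - j] for j in range(0, min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

series = [1.0 / factorial(k) for k in range(5)]    # Taylor coefficients of exp(x)
a, b = pade(series, L=2, M=2)
x = 0.5
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx, np.exp(x))                           # [2/2] Padé of exp agrees well near x = 0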
Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates
NASA Astrophysics Data System (ADS)
Rodionov, Andrey
An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.
Formal Consistency Verification of Deliberative Agents with Respect to Communication Protocols
NASA Technical Reports Server (NTRS)
Ramirez, Jaime; deAntonio, Angelica
2004-01-01
The aim of this paper is to show a method that is able to detect inconsistencies in the reasoning carried out by a deliberative agent. The agent is supposed to be provided with a hybrid knowledge base expressed in a language called CCR-2, based on production rules and hierarchies of frames, which permits the representation of non-monotonic reasoning, uncertain reasoning and arithmetic constraints in the rules. The method can give a specification of the scenarios in which the agent would deduce an inconsistency. We define a scenario to be a description of the initial agent's state (in the agent life cycle), a deductive tree of rule firings, and a partially ordered set of messages and/or stimuli that the agent must receive from other agents and/or the environment. Moreover, the method will make sure that the scenarios are valid with respect to the communication protocols in which the agent is involved.
Towards a multiconfigurational method of increments
NASA Astrophysics Data System (ADS)
Fertitta, E.; Koch, D.; Paulus, B.; Barcza, G.; Legeza, Ö.
2018-06-01
The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localised and used for the many-body expansion. In situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot be guaranteed to yield proper results either. Herein, we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation in which a coupling derived from the reference itself is employed. In principle, such an approach allows one to evaluate approximate values for the ground-state as well as excited-state energies. While the latter are accurate only close to the avoided crossing, the ground-state results are very promising for the whole dissociation curve, as shown by the comparison with density matrix renormalisation group benchmarks. We tested this two-state constant-coupling MoI on beryllium rings of different sizes and studied the error introduced by the constant coupling.
Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR
Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng
2010-01-01
In intelligent gearbox diagnosis it is often difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of several complementary methods for gearbox fault diagnosis: wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, an SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in the SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault, thereby effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built from the field rules summarized by experts to identify the detailed fault type. Results have shown that the SVM is a powerful tool for gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis.
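A minimal sketch of the initial-diagnosis stage described above: wavelet energy coefficients serve as feature vectors for an SVM. PyWavelets' wavedec is used here as a simple stand-in for the wavelet packet decomposition used in the paper, and the vibration signals and fault labels are synthetic placeholders.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(signal, wavelet="db4", level=3):
    """Energy of each decomposition band, normalised to unit sum."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Synthetic stand-in data: "normal" = low-frequency tone, "faulty" = added impulses
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
signals, labels = [], []
for i in range(40):
    x = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.normal(size=t.size)
    if i % 2:                       # inject periodic impulses to mimic a gear fault
        x[::64] += 3.0
    signals.append(wavelet_energy_features(x))
    labels.append(i % 2)

clf = SVC(kernel="rbf").fit(signals[:30], labels[:30])
print("held-out accuracy:", clf.score(signals[30:], labels[30:]))
```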
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
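A minimal sketch of the two ingredients named above, integral-image box features and a decision-tree classifier. The window sizes, feature layout, and toy "smooth" versus "noisy" texture classes are illustrative assumptions, not the flight implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def texture_features(img, r, c, w=8):
    """Cheap per-pixel features: mean intensity of nested boxes around (r, c)."""
    ii = integral_image(img)
    feats = []
    for s in (w // 2, w, 2 * w):
        r0, c0 = max(r - s, 0), max(c - s, 0)
        r1, c1 = min(r + s, img.shape[0]), min(c + s, img.shape[1])
        feats.append(box_sum(ii, r0, c0, r1, c1) / ((r1 - r0) * (c1 - c0)))
    return feats

# Toy training set: smooth vs. noisy patches stand in for two texture classes
rng = np.random.default_rng(2)
X, y = [], []
for label, maker in enumerate([lambda: np.ones((32, 32)),
                               lambda: rng.random((32, 32))]):
    for _ in range(20):
        img = maker() + 0.05 * rng.normal(size=(32, 32))
        X.append(texture_features(img, 16, 16))
        y.append(label)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("training accuracy:", tree.score(X, y))
```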
Application of Seismic Array Processing to Tsunami Early Warning
NASA Astrophysics Data System (ADS)
An, C.; Meng, L.
2015-12-01
Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implemented this method in a simulated real-time environment and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of the rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 minutes, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, the Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese Hi-net (>800 instruments) and the Earthscope USArray Transportable Array (~400 instruments), are established.
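A minimal numeric sketch of the scaling step described above: given a back-projection-derived elliptical rupture area, a magnitude is assumed from an area-magnitude relation and converted to average slip via M0 = mu * A * D. The scaling constant in the magnitude relation and the rigidity are illustrative assumptions, not the values used by the authors; only the moment-magnitude definition M0 = 10^(1.5 Mw + 9.1) N·m is standard.

```python
import numpy as np

def slip_from_rupture_ellipse(a_km, b_km, mu=3.0e10):
    """Average slip from an elliptical rupture area via M0 = mu * A * D.
    The area->magnitude scaling below is an illustrative placeholder."""
    area_km2 = np.pi * a_km * b_km          # ellipse area from semi-axes
    mw = 4.0 + np.log10(area_km2)           # assumed empirical scaling (illustrative)
    m0 = 10 ** (1.5 * mw + 9.1)             # seismic moment in N*m (standard definition)
    area_m2 = area_km2 * 1e6
    slip_m = m0 / (mu * area_m2)            # average slip
    return mw, m0, slip_m

# Rupture ellipse roughly the size inferred for a great megathrust event
mw, m0, slip = slip_from_rupture_ellipse(a_km=220, b_km=90)
print(f"Mw ~ {mw:.1f}, M0 ~ {m0:.2e} N*m, average slip ~ {slip:.1f} m")
```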
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints.
Tang, Nian-Sheng; Yu, Bin; Tang, Man-Lai
2014-12-18
Because of ethical concerns, a two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation of two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop reliable small-sample methods to test three-arm non-inferiority. Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the pre-specified nominal level than those of the other test procedures. Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
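A minimal sketch of a three-arm retention-of-effect analysis with binary outcomes, assuming the textbook large-sample Wald form and a simple percentile bootstrap; these are generic stand-ins, not the saddlepoint, exact unconditional, or score-based procedures developed in the paper, and the counts are invented.

```python
import numpy as np
from scipy.stats import norm

def wald_retention_test(x_e, n_e, x_r, n_r, x_p, n_p, theta=0.8):
    """Wald-type test of H0: p_E - p_P <= theta*(p_R - p_P), i.e. the experimental
    arm retains less than a fraction theta of the reference effect over placebo."""
    pe, pr, pp = x_e / n_e, x_r / n_r, x_p / n_p
    contrast = pe - theta * pr - (1 - theta) * pp
    var = (pe * (1 - pe) / n_e + theta**2 * pr * (1 - pr) / n_r
           + (1 - theta)**2 * pp * (1 - pp) / n_p)
    z = contrast / np.sqrt(var)
    return z, 1 - norm.cdf(z)          # one-sided p-value

def bootstrap_lower_bound(x_e, n_e, x_r, n_r, x_p, n_p, theta=0.8,
                          alpha=0.05, n_boot=10000, seed=0):
    """Percentile-bootstrap lower confidence bound for the same contrast;
    non-inferiority is supported when the bound exceeds zero."""
    rng = np.random.default_rng(seed)
    pe, pr, pp = x_e / n_e, x_r / n_r, x_p / n_p
    be = rng.binomial(n_e, pe, n_boot) / n_e
    br = rng.binomial(n_r, pr, n_boot) / n_r
    bp = rng.binomial(n_p, pp, n_boot) / n_p
    return np.quantile(be - theta * br - (1 - theta) * bp, alpha)

z, p = wald_retention_test(46, 60, 48, 60, 25, 60)
print(f"Wald z = {z:.2f}, one-sided p = {p:.4f}")
print("bootstrap 95% lower bound:", round(bootstrap_lower_bound(46, 60, 48, 60, 25, 60), 3))
```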
Gilgien, Matthias; Spörri, Jörg; Chardonnens, Julien; Kröll, Josef; Limpach, Philippe; Müller, Erich
2015-01-01
In the sport of alpine skiing, knowledge about the centre of mass (CoM) kinematics (i.e. position, velocity and acceleration) is essential to better understand both performance and injury. This study proposes a global navigation satellite system (GNSS)-based method to measure CoM kinematics without restriction of capture volume and with reasonable set-up and processing requirements. It combines the GNSS antenna position, terrain data and the accelerations acting on the skier in order to approximate the CoM location, velocity and acceleration. The validity of the method was assessed against a reference system (video-based 3D kinematics) over 12 turn cycles on a giant slalom skiing course. The mean (± s) position, velocity and acceleration differences between the CoM obtained from the GNSS and the reference system were 9 ± 12 cm, 0.08 ± 0.19 m · s⁻¹ and 0.22 ± 1.28 m · s⁻², respectively. The velocity and acceleration differences obtained were smaller than typical differences between the measures of several skiers on the same course observed in the literature, while the position differences were slightly larger than the discriminative meaningful change. The proposed method can therefore be interpreted to be technically valid and adequate for a variety of biomechanical research questions in the field of alpine skiing, with certain limitations regarding position.
Reasoning and Data Representation in a Health and Lifestyle Support System.
Hanke, Sten; Kreiner, Karl; Kropf, Johannes; Scase, Marc; Gossy, Christian
2017-01-01
Case-based reasoning and data interpretation is an artificial intelligence approach that capitalizes on past experience to solve current problems, and this can be used as a method for practical intelligent systems. Case-based data reasoning is able to provide decision support for experts and clinicians in health systems as well as lifestyle systems. In this project we focused on developing a solution for healthy ageing considering daily activities, nutrition and cognitive activities. The data analysis of the reasoner followed state-of-the-art guidelines from clinical practice. Guidelines provide a general framework to guide clinicians and require consequent background knowledge to become operational, which is precisely the kind of information recorded in practice cases; cases complement guidelines very well and help to interpret them. It is expected that the interest in case-based reasoning systems in the health domain will continue to grow.
Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji
2016-12-01
Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only works well with binary, or close-to-binary, state input, where the number of active states is fewer than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions.
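A minimal sketch contrasting the two value-function approximations discussed above for an RBM with binary hidden units: the negative free energy (FERL) and the negative expected energy (EERL). The network size and weights are random placeholders; in the actual method the state and action units form the visible layer and the weights are learned by temporal-difference updates.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_free_energy(v, W, b, c):
    """FERL value estimate: -F(v) = b.v + sum_j log(1 + exp(c_j + v.W_j))."""
    pre = c + v @ W
    return b @ v + np.sum(np.logaddexp(0.0, pre))

def negative_expected_energy(v, W, b, c):
    """EERL value estimate: -<E(v,h)> = b.v + sum_j sigmoid(c_j + v.W_j) * (c_j + v.W_j)."""
    pre = c + v @ W
    return b @ v + np.sum(sigmoid(pre) * pre)

# Random placeholder RBM: 12 visible (state+action) units, 6 hidden units
rng = np.random.default_rng(3)
W = 0.1 * rng.normal(size=(12, 6))
b = 0.1 * rng.normal(size=12)    # visible biases
c = 0.1 * rng.normal(size=6)     # hidden biases
v = rng.integers(0, 2, size=12).astype(float)   # a binary state-action vector

print("FERL value:", negative_free_energy(v, W, b, c))
print("EERL value:", negative_expected_energy(v, W, b, c))
```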
ERIC Educational Resources Information Center
Jaeger, Martin; Adair, Desmond
2015-01-01
The purpose of this study is to analyse the feasibility of an evidential reasoning (ER) method for portfolio assessment and to compare the results with those based on a traditional holistic judgement. An ER approach has been incorporated into portfolio assessment of an undergraduate engineering design course delivered as a project-based…
An Analysis of Categorical and Quantitative Methods for Planning Under Uncertainty
Langlotz, Curtis P.; Shortliffe, Edward H.
1988-01-01
Decision theory and logical reasoning are both methods for representing and solving medical decision problems. We analyze the usefulness of these two approaches to medical therapy planning by establishing a simple correspondence between decision theory and non-monotonic logic, a formalization of categorical logical reasoning. The analysis indicates that categorical approaches to planning can be viewed as comprising two decision-theoretic concepts: probabilities (degrees of belief in planning hypotheses) and utilities (degrees of desirability of planning outcomes). We present and discuss examples of the following lessons from this decision-theoretic view of categorical (nonmonotonic) reasoning: (1) Decision theory and artificial intelligence techniques are intended to solve different components of the planning problem. (2) When considered in the context of planning under uncertainty, nonmonotonic logics do not retain the domain-independent characteristics of classical logical reasoning for planning under certainty. (3) Because certain nonmonotonic programming paradigms (e.g., frame-based inheritance, rule-based planning, protocol-based reminders) are inherently problem-specific, they may be inappropriate to employ in the solution of certain types of planning problems. We discuss how these conclusions affect several current medical informatics research issues, including the construction of “very large” medical knowledge bases.
MULTIPROCESSOR AND DISTRIBUTED PROCESSING BIBLIOGRAPHIC DATA BASE SOFTWARE SYSTEM
NASA Technical Reports Server (NTRS)
Miya, E. N.
1994-01-01
Multiprocessors and distributed processing are undergoing increased scientific scrutiny for many reasons. It is more and more difficult to keep track of the existing research in these fields. This package consists of a large machine-readable bibliographic data base which, in addition to the usual keyword searches, can be used for producing citations, indexes, and cross-references. The data base is compiled from smaller existing multiprocessing bibliographies, and tables of contents from journals and significant conferences. There are approximately 4,000 entries covering topics such as parallel and vector processing, networks, supercomputers, fault-tolerant computers, and cellular automata. Each entry is represented by 21 fields including keywords, author, referencing book or journal title, volume and page number, and date and city of publication. The data base contains UNIX 'refer' formatted ASCII data and can be implemented on any computer running under the UNIX operating system. The data base requires approximately one megabyte of secondary storage. The documentation for this program is included with the distribution tape, although it can be purchased for the price below. This bibliography was compiled in 1985 and updated in 1988.
Nwe, Kido; Xu, Heng; Regino, Celeste Aida S.; Bernardo, Marcelino; Ileva, Lilia; Riffle, Lisa; Wong, Karen J.; Brechbiel, Martin W.
2009-01-01
In this paper we report a new method to prepare and characterize a contrast agent based on a fourth-generation (G4) polyamidoamine (PAMAM) dendrimer conjugated to the gadolinium complex of the bifunctional diethylenetriamine pentaacetic acid derivative (1B4M-DTPA). The method involves pre-forming the metal-ligand chelate in alcohol prior to conjugation to the dendrimer. The dendrimer-based agent was purified by a Sephadex® G-25 column and characterized by elemental analysis. The analysis and SE-HPLC data gave a chelate to dendrimer ratio of 30:1, suggesting conjugation at approximately every other amine terminal on the dendrimer. Molar relaxivity of the agent measured at pH 7.4 displayed a higher value than that of the analogous G4 dendrimer-based agent prepared by the post-metal incorporation method (r1 = 26.9 vs. 13.9 mM⁻¹ s⁻¹ at 3T and 22°C). This is hypothesized to be due to the higher hydrophobicity of this conjugate and the lack of available charged carboxylate groups from non-complexed free ligands that might coordinate to the metal and thus also reduce water exchange sites. Additionally, the distribution of compound populations that results from the post-metal incorporation route is eliminated from the current product, simplifying characterization and the quality control issues pertaining to the production of such agents for clinical use as MR contrast agents. In vivo imaging in mice showed a reasonably fast clearance (t1/2 = 24 min), suggesting a viable agent for use in clinical application.
A Fokker-Planck based kinetic model for diatomic rarefied gas flows
NASA Astrophysics Data System (ADS)
Gorji, M. Hossein; Jenny, Patrick
2013-06-01
A Fokker-Planck based kinetic model is presented here, which also accounts for the internal energy modes characteristic of diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, and phenomenological principles were employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved because the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Moreover, because of the devised energy-conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations with respect to other particle methods, especially the conventional direct simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with that of DSMC, where a significant speed-up could be obtained for small Knudsen numbers. Second, the structure of a high-Mach-number shock (in nitrogen) was studied, and the good performance of the model for such out-of-equilibrium conditions could be demonstrated. Finally, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with DSMC (with a level-to-level transition model) for vibrational and translational temperatures is shown.
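A minimal sketch of the kind of continuous-in-time stochastic particle update that replaces binary collisions in Fokker-Planck-based methods: each particle's velocity relaxes toward the local mean under Langevin (Ornstein-Uhlenbeck) dynamics. The relaxation time, temperature, and time step are illustrative assumptions, and the internal-energy modes of the diatomic model are omitted.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def langevin_step(v, u_mean, tau, T, m, dt, rng):
    """One Euler-Maruyama step of dv = -(v - u)/tau dt + sqrt(2 k T / (m tau)) dW.
    Stands in for a Fokker-Planck collision operator; constants are illustrative."""
    drift = -(v - u_mean) / tau * dt
    diff = np.sqrt(2.0 * K_B * T / (m * tau) * dt) * rng.normal(size=v.shape)
    return v + drift + diff

# Relax 10^4 nitrogen-like particles toward equilibrium at 300 K
rng = np.random.default_rng(4)
m = 4.65e-26                                   # N2 molecular mass, kg
v = rng.normal(0.0, 100.0, size=(10000, 3))    # deliberately "cold" initial velocities
for _ in range(2000):
    v = langevin_step(v, np.zeros(3), tau=1e-6, T=300.0, m=m, dt=1e-8, rng=rng)

print("sampled temperature ~", m * np.mean(v**2) / K_B, "K (target 300 K)")
```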
Wu, Qiong; van Velthoven, Michelle H.M.M.T.; Chen, Li; Car, Josip; Rudan, Diana; Saftić, Vanja; Zhang, Yanfeng; Li, Ye; Scherpbier, Robert W.
2013-01-01
Aim: To develop affordable, appropriate, and nutritious recipes based on local food resources and dietary practices that have the potential to improve infant feeding practices. Methods: We carried out a mixed methods study following the World Health Organization's evaluation guidelines on the promotion of child feeding. We recruited caregivers with children aged 6-23 months in Wuyi County, Hebei Province, China. The study included a 24-hour dietary recall survey, local food market survey, and development of a key local food list, food combinations, and recipes. Mothers tested selected recipes at their homes for two weeks. We interviewed mothers to obtain their perceptions on the recipes. Results: The 24-hour dietary recall survey included 110 mothers. Dietary diversity was poor; approximately 10% of children consumed meat and only 2% consumed vitamin A-rich vegetables. The main reason for not giving meat was the mothers' belief that their children could not chew and digest meat. With the help of mothers, we developed six improved nutritious recipes with locally available and affordable foods. Overall, mothers liked the recipes and were willing to continue using them. Conclusions: This is the first study using a systematic evidence-based method to develop infant complementary recipes that can address complementary feeding problems in China. We developed recipes based on local foods and preparation practices and identified the barriers that mothers faced toward feeding their children with nutritious food. To improve nutrition practices, it is important to both give mothers correct feeding knowledge and assist them in cooking nutritious foods for their children based on locally available products. Further research is needed to assess long-term effects of those recipes on the nutritional status of children.
Higher-level fusion for military operations based on abductive inference: proof of principle
NASA Astrophysics Data System (ADS)
Pantaleev, Aleksandar V.; Josephson, John
2006-04-01
The ability of contemporary military commanders to estimate and understand complicated situations already suffers from information overload, and the situation can only grow worse. We describe a prototype application that uses abductive inferencing to fuse information from multiple sensors to evaluate the evidence for higher-level hypotheses that are close to the levels of abstraction needed for decision making (approximately JDL levels 2 and 3). Abductive inference (abduction, inference to the best explanation) is a pattern of reasoning that occurs naturally in diverse settings such as medical diagnosis, criminal investigations, scientific theory formation, and military intelligence analysis. Because abduction is part of common-sense reasoning, implementations of it can produce reasoning traces that are very human understandable. Automated abductive inferencing can be deployed to augment human reasoning, taking advantage of computation to process large amounts of information, and to bypass limits to human attention and short-term memory. We illustrate the workings of the prototype system by describing an example of its use for small-unit military operations in an urban setting. Knowledge was encoded as it might be captured prior to engagement from a standard military decision making process (MDMP) and analysis of commander's priority intelligence requirements (PIR). The system is able to reasonably estimate the evidence for higher-level hypotheses based on information from multiple sensors. Its inference processes can be examined closely to verify correctness. Decision makers can override conclusions at any level and changes will propagate appropriately.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real time, then system operators would have the situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power-system-specific methods that run in nearly linear time. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low-stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
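A minimal sketch of the general pattern the chain method belongs to, preconditioned conjugate gradients on a symmetric diagonally dominant system. A simple Jacobi (diagonal) preconditioner stands in for the spanning-tree-based chain of preconditioners, and a 2-D grid Laplacian stands in for a power-flow system matrix; both are assumptions for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def grid_laplacian(n):
    """Symmetric diagonally dominant test matrix: 2-D grid Laplacian plus a small shift."""
    ones = np.ones(n)
    t = sp.diags([2 * ones, -ones[:-1], -ones[:-1]], [0, -1, 1])
    eye = sp.identity(n)
    return (sp.kron(t, eye) + sp.kron(eye, t) + 1e-6 * sp.identity(n * n)).tocsr()

n = 50
A = grid_laplacian(n)
b = np.ones(A.shape[0])

# Jacobi (diagonal) preconditioner as a cheap stand-in for the spanning-tree chain
d_inv = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda x: d_inv * x, dtype=float)

x_plain, info_plain = cg(A, b, maxiter=5000)
x_prec, info_prec = cg(A, b, M=M, maxiter=5000)
print("convergence flags (0 = converged):", info_plain, info_prec)
print("residual norms:", np.linalg.norm(A @ x_plain - b), np.linalg.norm(A @ x_prec - b))
```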
NASA Astrophysics Data System (ADS)
Vinding, Mads S.; Maximov, Ivan I.; Tošner, Zdeněk; Nielsen, Niels Chr.
2012-08-01
The use of increasingly strong magnetic fields in magnetic resonance imaging (MRI) improves sensitivity, susceptibility contrast, and spatial or spectral resolution for functional and localized spectroscopic imaging applications. However, along with these benefits come the challenges of increasing static field (B0) and rf field (B1) inhomogeneities induced by radial field susceptibility differences and poorer dielectric properties of objects in the scanner. Increasing fields also impose the need for rf irradiation at higher frequencies which may lead to elevated patient energy absorption, eventually posing a safety risk. These reasons have motivated the use of multidimensional rf pulses and parallel rf transmission, and their combination with tailoring of rf pulses for fast and low-power rf performance. For the latter application, analytical and approximate solutions are well-established in linear regimes, however, with increasing nonlinearities and constraints on the rf pulses, numerical iterative methods become attractive. Among such procedures, optimal control methods have recently demonstrated great potential. Here, we present a Krotov-based optimal control approach which as compared to earlier approaches provides very fast, monotonic convergence even without educated initial guesses. This is essential for in vivo MRI applications. The method is compared to a second-order gradient ascent method relying on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, and a hybrid scheme Krotov-BFGS is also introduced in this study. These optimal control approaches are demonstrated by the design of a 2D spatial selective rf pulse exciting the letters "JCP" in a water phantom.
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure error degrades the performance of an optical system. When predicting the performance and performing system assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. This method is based on precise ray tracing to calculate the wavefront error, rather than approximate calculation, for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of each optical element. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
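A minimal sketch of simulated annealing over per-element clocking angles, as described above. The toy cost function standing in for the ray-traced composite wavefront RMS, the cooling schedule, and the target angles are illustrative assumptions, not the paper's model.

```python
import numpy as np

def anneal_clocking(cost, n_elements, steps=20000, t0=1.0, t_end=1e-3, seed=0):
    """Simulated annealing over per-element clocking angles (radians).
    `cost` maps an angle vector to a composite wavefront-error RMS."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0, 2 * np.pi, n_elements)
    best = current = cost(angles)
    best_angles = angles.copy()
    for k in range(steps):
        temp = t0 * (t_end / t0) ** (k / steps)          # geometric cooling schedule
        trial = angles.copy()
        i = rng.integers(n_elements)
        trial[i] = (trial[i] + rng.normal(0, 0.3)) % (2 * np.pi)   # perturb one element
        c = cost(trial)
        if c < current or rng.random() < np.exp(-(c - current) / temp):
            angles, current = trial, c
            if c < best:
                best, best_angles = c, trial.copy()
    return best_angles, best

# Hypothetical stand-in for the ray-traced composite RMS: penalise misalignment from targets
targets = np.array([0.3, 2.1, 4.0, 5.5])
def toy_rms(angles):
    return np.sqrt(np.mean((np.cos(angles - targets) - 1.0) ** 2))

angles, rms = anneal_clocking(toy_rms, n_elements=4)
print("optimal clocking (rad):", np.round(angles, 2), " RMS:", round(rms, 4))
```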
NASA Technical Reports Server (NTRS)
Petty, Grant W.
1990-01-01
A reasonably rigorous basis for understanding and extracting the physical information content of Special Sensor Microwave/Imager (SSM/I) satellite images of the marine environment is provided. To this end, a comprehensive algebraic parameterization is developed for the response of the SSM/I to a set of nine atmospheric and ocean surface parameters. The brightness temperature model includes a closed-form approximation to microwave radiative transfer in a non-scattering atmosphere and fitted models for surface emission and scattering based on geometric optics calculations for the roughened sea surface. The combined model is empirically tuned using suitable sets of SSM/I data and coincident surface observations. The brightness temperature model is then used to examine the sensitivity of the SSM/I to realistic variations in the scene being observed and to evaluate the theoretical maximum precision of global SSM/I retrievals of integrated water vapor, integrated cloud liquid water, and surface wind speed. A general minimum-variance method for optimally retrieving geophysical parameters from multichannel brightness temperature measurements is outlined, and several global statistical constraints of the type required by this method are computed. Finally, a unified set of efficient statistical and semi-physical algorithms is presented for obtaining fields of surface wind speed, integrated water vapor, cloud liquid water, and precipitation from SSM/I brightness temperature data. Features include: a semi-physical method for retrieving integrated cloud liquid water at 15 km resolution and with rms errors as small as approximately 0.02 kg/sq m; a 3-channel statistical algorithm for integrated water vapor which was constructed so as to have improved linear response to water vapor and reduced sensitivity to precipitation; and two complementary indices of precipitation activity (based on 37 GHz attenuation and 85 GHz scattering, respectively), each of which are relatively insensitive to variations in other environmental parameters.
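A minimal sketch of the linear minimum-variance (optimal estimation) retrieval pattern referred to above: a prior state and its covariance are combined with brightness-temperature measurements through a linearized forward model. The channel count, Jacobian, and covariances below are illustrative placeholders, not the SSM/I values.

```python
import numpy as np

def minimum_variance_retrieval(x_b, B, H, R, y):
    """Linear minimum-variance update:
    x_hat = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)."""
    S = H @ B @ H.T + R                      # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)           # gain matrix
    x_hat = x_b + K @ (y - H @ x_b)
    A_post = B - K @ H @ B                   # posterior covariance
    return x_hat, A_post

# Illustrative 3-parameter state (vapor, cloud liquid, wind) and 4 "channels"
x_b = np.array([30.0, 0.1, 7.0])             # background guess
B = np.diag([100.0, 0.01, 9.0])              # background error covariance
H = np.array([[1.5, 20.0, 0.2],              # hypothetical linearized brightness-
              [0.8, 35.0, 0.5],              # temperature sensitivities (K per unit)
              [0.2,  5.0, 1.2],
              [1.0, 10.0, 0.8]])
R = np.diag([1.0, 1.0, 1.0, 1.0])            # radiometric noise covariance
x_true = np.array([38.0, 0.15, 9.0])
y = H @ x_true + np.array([0.3, -0.2, 0.1, 0.0])   # simulated measurements

x_hat, A_post = minimum_variance_retrieval(x_b, B, H, R, y)
print("retrieved state:", np.round(x_hat, 2))
print("posterior std devs:", np.round(np.sqrt(np.diag(A_post)), 3))
```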
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous non-unit density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both these constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small-cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2% with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
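A minimal sketch of the first-order ray-tracing idea for the primary dose: attenuation is accumulated along a ray through a CT-like density grid using the density-weighted (radiological) path length. The attenuation coefficient, geometry, and phantom are illustrative assumptions, not the paper's beam model.

```python
import numpy as np

def primary_dose_profile(density, mu_water=0.005, pixel_mm=1.0):
    """First-order ray trace of primary photon dose along one axis of a CT-like
    density grid: dose ~ exp(-mu * radiological depth), where radiological depth
    is the density-weighted path length."""
    # cumulative radiological path length (mm of water-equivalent) down each column
    rad_depth = np.cumsum(density, axis=0) * pixel_mm
    return np.exp(-mu_water * rad_depth)

# Water phantom with a low-density (lung-like) slab between depths 40 and 80 mm
density = np.ones((150, 50))
density[40:80, :] = 0.3
dose = primary_dose_profile(density)
print("primary dose at 30 mm :", round(dose[30, 25], 3))
print("primary dose at 100 mm (beyond the slab, less attenuation than pure water):",
      round(dose[100, 25], 3))
```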
NASA Technical Reports Server (NTRS)
Bennett, Floyd V.; Yntema, Robert T.
1959-01-01
Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.
Jiang, Wen; Cao, Ying; Yang, Lin; He, Zichang
2017-08-28
Specific emitter identification plays an important role in contemporary military affairs. However, most existing specific emitter identification methods have not taken the processing of uncertain information into account. Therefore, this paper proposes a time-space domain information fusion method based on Dempster-Shafer evidence theory, which is able to deal with uncertain information in the process of specific emitter identification. In this approach, each radar generates a body of evidence based on the information it obtains, and the main task is to fuse the multiple bodies of evidence to obtain a reasonable result. Within the framework of a recursive centralized fusion model, the proposed method incorporates a correlation coefficient, which measures the relevance between pieces of evidence, and a quantum mechanical approach based on the parameters of the radar itself. The simulation results of an illustrative example demonstrate that the proposed method can effectively deal with uncertain information and produce a reasonable recognition result.
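A minimal sketch of the basic Dempster-Shafer combination step that such a fusion scheme builds on: two mass functions over candidate emitter types are combined with Dempster's rule. The emitter hypotheses and mass values are invented for illustration, and the paper's correlation-coefficient weighting and quantum mechanical modifications are not reproduced here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal elements
    are frozensets; returns the normalized combined mass function."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                 # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two radars reporting on emitter types {A, B, C}; numbers are illustrative
A, B, C = "A", "B", "C"
radar1 = {frozenset({A}): 0.6, frozenset({A, B}): 0.3, frozenset({A, B, C}): 0.1}
radar2 = {frozenset({A}): 0.5, frozenset({B}): 0.3, frozenset({A, B, C}): 0.2}

fused = dempster_combine(radar1, radar2)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```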
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
Approximation of reliability of direct genomic breeding values
USDA-ARS?s Scientific Manuscript database
Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationshi...
Methods to approximate reliabilities in single-step genomic evaluation
USDA-ARS?s Scientific Manuscript database
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
A consensus algorithm for approximate string matching and its application to QRS complex detection
NASA Astrophysics Data System (ADS)
Alba, Alfonso; Mendez, Martin O.; Rubio-Rincon, Miguel E.; Arce-Santana, Edgar R.
2016-08-01
In this paper, a novel algorithm for approximate string matching (ASM) is proposed. The novelty resides in the fact that, unlike most other methods, the proposed algorithm is not based on the Hamming or Levenshtein distances, but instead computes a score for each symbol in the search text based on a consensus measure. Those symbols with sufficiently high scores will likely correspond to approximate instances of the pattern string. To demonstrate the usefulness of the proposed method, it has been applied to the detection of QRS complexes in electrocardiographic signals with competitive results when compared against the classic Pan-Tompkins (PT) algorithm. The proposed method outperformed PT in 72% of the test cases, with no extra computational cost.
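A minimal sketch of position-wise scoring for approximate string matching: each candidate start position in the text is scored by how many pattern symbols agree with it, and positions above a threshold are reported as approximate matches. This simple voting score is only a stand-in for the consensus measure defined in the paper.

```python
import numpy as np

def match_scores(text, pattern):
    """Score each start position of `text` by the number of symbols agreeing
    with `pattern` (a simple voting score, standing in for the paper's
    consensus measure)."""
    n, m = len(text), len(pattern)
    scores = np.zeros(max(n - m + 1, 0), dtype=int)
    for i in range(len(scores)):
        scores[i] = sum(1 for a, b in zip(text[i:i + m], pattern) if a == b)
    return scores

text = "xxabcdyyabzdxxabcd"
pattern = "abcd"
scores = match_scores(text, pattern)
threshold = len(pattern) - 1                     # allow one mismatch
hits = [i for i, s in enumerate(scores) if s >= threshold]
print("approximate matches start at:", hits)     # expect positions 2, 8, 14
```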
Lee, Jung Ah; Lee, Sungkyu; Cho, Hong-Jun
2017-01-01
Introduction: The prevalence of adolescent electronic cigarette (e-cigarette) use has increased in most countries. This study aims to determine the relation between the frequency of e-cigarette use and the frequency and intensity of cigarette smoking. Additionally, the study evaluates the association between the reasons for e-cigarette use and the frequency of its use. Materials and Methods: Using the 2015 Korean Youth Risk Behavior Web-Based Survey, we included 6655 adolescents with an experience of e-cigarette use who were middle and high school students aged 13–18 years. We compared smoking experience, the frequency and intensity of cigarette smoking, and the relation between the reasons for e-cigarette use and the frequency of e-cigarette use. Results: The prevalence of e-cigarette ever and current (past 30 days) users was 10.1% and 3.9%, respectively. Of the ever users, approximately 60% had not used e-cigarettes within the past month. On the other hand, 8.1% used e-cigarettes daily. Frequent and intensive cigarette smoking was associated with frequent e-cigarette use. The percentage of frequent e-cigarette users (≥10 days/month) was 3.5% among adolescents who did not smoke within a month, but 28.7% among daily smokers. Additionally, it was 9.1% among smokers who smoked less than 1 cigarette/month, but 55.1% among smokers who smoked ≥20 cigarettes/day. The most common reason for e-cigarette use was curiosity (22.9%), followed by the belief that they are less harmful than conventional cigarettes (18.9%), the desire to quit smoking (13.1%), and the capacity for indoor use (10.7%). Curiosity was the most common reason among less frequent e-cigarette users; however, the desire to quit smoking and the capacity for indoor use were the most common reasons among more frequent users. Conclusions: Results showed a positive relation between the frequency or intensity of conventional cigarette smoking and the frequency of e-cigarette use among Korean adolescents, and the frequency of e-cigarette use differed according to the reason for the use of e-cigarettes.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2016-10-01
Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
An Algorithm Using Twelve Properties of Antibiotics to Find the Recommended Antibiotics, as in CPGs.
Tsopra, R; Venot, A; Duclos, C
2014-01-01
Clinical Decision Support Systems (CDSS) incorporating justifications, updating and adjustable recommendations can considerably improve the quality of healthcare. We propose a new approach to the design of CDSS for empiric antibiotic prescription, based on implementation of the deeper medical reasoning used by experts in the development of clinical practice guidelines (CPGs), to deduce the recommended antibiotics. We investigated two methods ("exclusion" versus "scoring") for reproducing this reasoning based on antibiotic properties. The "exclusion" method reproduced expert reasoning more accurately, retrieving the full list of recommended antibiotics for almost all clinical situations. This approach has several advantages: (i) it provides convincing explanations for physicians; (ii) updating could easily be incorporated into the CDSS; (iii) it can provide recommendations for clinical situations missing from CPGs.
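A minimal sketch of the "exclusion" style of reasoning described above: candidate antibiotics are filtered out when any of their properties conflicts with the requirements of the clinical situation, and each exclusion carries a justification. The property names, antibiotic list, and rules are hypothetical illustrations, not the content of any CPG or of the CDSS in the paper.

```python
# Hypothetical antibiotic property table; fields and values are illustrative only.
ANTIBIOTICS = {
    "amoxicillin":   {"spectrum": {"strep", "enterococcus"},  "oral": True,  "pregnancy_safe": True},
    "ciprofloxacin": {"spectrum": {"gram_negative"},          "oral": True,  "pregnancy_safe": False},
    "ceftriaxone":   {"spectrum": {"strep", "gram_negative"}, "oral": False, "pregnancy_safe": True},
}

def exclusion_method(situation, antibiotics=ANTIBIOTICS):
    """Keep only antibiotics whose properties violate none of the situation's
    requirements; record a human-readable justification for each exclusion."""
    recommended, excluded = [], {}
    for name, props in antibiotics.items():
        reasons = []
        if situation["pathogen"] not in props["spectrum"]:
            reasons.append("pathogen not covered")
        if situation.get("oral_required") and not props["oral"]:
            reasons.append("no oral form")
        if situation.get("pregnant") and not props["pregnancy_safe"]:
            reasons.append("contraindicated in pregnancy")
        if reasons:
            excluded[name] = reasons
        else:
            recommended.append(name)
    return recommended, excluded

situation = {"pathogen": "strep", "oral_required": True, "pregnant": True}
ok, not_ok = exclusion_method(situation)
print("recommended:", ok)
print("excluded with justifications:", not_ok)
```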
Hoffman, Caroline S; Messer, Lynne C; Mendola, Pauline; Savitz, David A; Herring, Amy H; Hartmann, Katherine E
2008-11-01
Reported last menstrual period (LMP) is commonly used to estimate gestational age (GA) but may be unreliable. Ultrasound in the first trimester is generally considered a highly accurate method of pregnancy dating. The authors compared first trimester report of LMP and first trimester ultrasound for estimating GA at birth and examined whether disagreement between estimates varied by maternal and infant characteristics. Analyses included 1867 singleton livebirths to women enrolled in a prospective pregnancy cohort. The authors computed the difference between LMP and ultrasound GA estimates (GA difference) and examined the proportion of births within categories of GA difference stratified by maternal and infant characteristics. The proportion of births classified as preterm, term and post-term by pregnancy dating methods was also examined. LMP-based estimates were 0.8 days (standard deviation = 8.0, median = 0) longer on average than ultrasound estimates. LMP classified more births as post-term than ultrasound (4.0% vs. 0.7%). GA difference was greater among young women, non-Hispanic Black and Hispanic women, women of non-optimal body weight and mothers of low-birthweight infants. Results indicate first trimester report of LMP reasonably approximates gestational age obtained from first trimester ultrasound, but the degree of discrepancy between estimates varies by important maternal characteristics.
Bataev, Vadim A; Pupyshev, Vladimir I; Godunov, Igor A
2016-05-15
The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for the molecules of furfural and some other five-membered heterocyclic aromatic aldehydes using the MP2/6-311G** quantum chemical approximation. It is demonstrated that the traditional one-dimensional models of internal rotation have only limited applicability for the molecules studied. The reason is the strong kinematic interaction between the rotation of the CHO group and the out-of-plane CHO deformation that occurs in the molecules under consideration. A computational procedure based on a two-dimensional approximation is considered for low-lying vibrational states as more adequate to the problem.
Viscous Rayleigh-Taylor instability in spherical geometry
NASA Astrophysics Data System (ADS)
Mikaelian, Karnig O.
2016-02-01
We consider viscous fluids in spherical geometry, a lighter fluid supporting a heavier one. Chandrasekhar [Q. J. Mech. Appl. Math. 8, 1 (1955), 10.1093/qjmam/8.1.1] analyzed this unstable configuration providing the equations needed to find, numerically, the exact growth rates for the ensuing Rayleigh-Taylor instability. He also derived an analytic but approximate solution. We point out a weakness in his approximate dispersion relation (DR) and offer a somewhat improved one. A third DR, based on transforming a planar DR into a spherical one, suffers no unphysical predictions and compares reasonably well with the exact work of Chandrasekhar and a more recent numerical analysis of the problem [Terrones and Carrara, Phys. Fluids 27, 054105 (2015), 10.1063/1.4921648].
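A small numeric sketch of the planar ingredient behind the third dispersion relation mentioned above: the widely used approximate viscous Rayleigh-Taylor growth rate sigma = sqrt(A g k + nu^2 k^4) - nu k^2, with nu a density-weighted kinematic viscosity, evaluated for spherical mode numbers mapped heuristically to wavenumbers via k ~ l / R. The mapping, the acceleration, and the viscosity value are illustrative assumptions, not Chandrasekhar's exact spherical result or the relations compared in the paper.

```python
import numpy as np

def viscous_rt_growth_rate(k, atwood, g, nu):
    """Approximate planar viscous Rayleigh-Taylor growth rate:
    sigma = sqrt(A g k + nu^2 k^4) - nu k^2 (a standard approximation)."""
    return np.sqrt(atwood * g * k + nu**2 * k**4) - nu * k**2

# Spherical mode number l mapped heuristically to a wavenumber k ~ l / R
R = 1.0e-3            # interface radius, m
g = 1.0e10            # effective acceleration, m/s^2 (illustrative)
atwood, nu = 0.5, 1.0e-4
for l in (4, 16, 64, 256):
    k = l / R
    print(f"l = {l:4d}  growth rate ~ {viscous_rt_growth_rate(k, atwood, g, nu):.3e} 1/s")
```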
NASA Astrophysics Data System (ADS)
Li, Q.; Wang, Y. L.; Li, H. C.; Zhang, M.; Li, C. Z.; Chen, X.
2017-12-01
Rainfall thresholds play an important role in flash flood warning. A simple method, using the Rational Equation to calculate the rainfall threshold, is proposed in this study. The critical rainfall equation was deduced from the Rational Equation. On the basis of the Manning equation and the results of the Chinese Flash Flood Survey and Evaluation (CFFSE) Project, the critical flow was obtained and the net rainfall was calculated. Three components of the rainfall losses were considered: depression storage, vegetation interception, and soil infiltration. The critical rainfall is the sum of the net rainfall and the rainfall losses. The rainfall threshold was estimated from the critical rainfall after accounting for the watershed soil moisture. To demonstrate this method, the Zuojiao watershed in Yunnan Province was chosen as the study area. The results showed that the rainfall thresholds calculated by the Rational Equation method were close to the rainfall thresholds obtained from CFFSE and were in accordance with the observed rainfall during flash flood events. Thus the calculated results are reasonable and the method is effective. This study provides a quick and convenient way to calculate rainfall thresholds for flash flood warning for grass-roots staff and offers technical support for estimating rainfall thresholds.
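A minimal numeric sketch of inverting the Rational Equation Q = C i A / 3.6 (Q in m³/s, i in mm/h, A in km²) for the critical rainfall intensity given a critical discharge, then adding back loss terms to obtain a threshold. The watershed size, runoff coefficient, and loss values are placeholders, not the study's calibrated parameters.

```python
def critical_rainfall_threshold(q_critical, runoff_coeff, area_km2,
                                duration_h=1.0,
                                interception_mm=2.0, depression_mm=3.0,
                                infiltration_mm_per_h=5.0):
    """Invert the Rational Equation Q = C * i * A / 3.6 for the critical intensity,
    then add illustrative rainfall losses to turn net rainfall into a threshold."""
    i_critical = 3.6 * q_critical / (runoff_coeff * area_km2)   # mm/h
    net_rainfall = i_critical * duration_h                      # mm over the duration
    losses = interception_mm + depression_mm + infiltration_mm_per_h * duration_h
    return net_rainfall + losses

# Hypothetical small watershed: 35 km^2, critical discharge 60 m^3/s at the village
threshold_mm = critical_rainfall_threshold(q_critical=60.0, runoff_coeff=0.6,
                                           area_km2=35.0, duration_h=1.0)
print(f"1-hour rainfall threshold ~ {threshold_mm:.1f} mm")
```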
Attitudes toward integration of complementary and alternative medicine with hospital-based care.
Lewis, D; Paterson, M; Beckerman, S; Sandilands, C
2001-12-01
To characterize those who have used, expect to use, or are opposed to the use of holistic therapies, especially in a conventional medical (hospital) setting. SAMPLE DESCRIPTION AND METHODS: Cross-sectional survey of a random sample of Hamilton-Wentworth residents between March and June 1998 (n = 416; response rate, 63%); analysis used logistic regression. Thirty-seven percent (37%) used at least one holistic therapy in the previous year: the three most common were chiropractic, massage, and herbal/phytology. The three most common reasons for use were general health, fatigue, and arthritis. Thirty-three percent (33%) would use holistic therapy in the future. Barriers to use were lack of information, perceived ineffectiveness, and cost; approximately 40% agreed they would only use holistic therapies with medical advice. Approximately 13% were opposed to holistic therapy and objected to its use in hospitals. Younger age, preference for holistic therapy over conventional medicine, and prior use of holism independently predicted high likelihood for future use. Lower income and high self-perceived health were associated with negative attitude toward use of holistic therapies in hospital. Most respondents would accept integration of holistic techniques into a hospital; therapies would be more acceptable if there were clear evidence of their efficacy. A few might find their opinion of a sponsoring hospital lowered by such integration.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Abd-Elhameed, W. M.
2005-09-01
We present double ultraspherical spectral methods that allow the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra and are solved by the step-by-step method. Numerical examples of how to use these methods are described. Numerical results obtained compare favorably with those of the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than others based on ultraspherical polynomials.
Piecewise-homotopy analysis method (P-HAM) for first order nonlinear ODE
NASA Astrophysics Data System (ADS)
Chin, F. Y.; Lem, K. H.; Chong, F. S.
2013-09-01
In the homotopy analysis method (HAM), the value of the auxiliary parameter h is determined from the valid region of the h-curve, in which the horizontal segment of the h-curve delimits the valid h-region. Any h-value taken from the valid region, provided that the order of deformation is large enough, will in principle yield an approximation series that converges to the exact solution. However, it is found that an h-value chosen within this valid region does not always promise a good approximation at finite order. This paper suggests an improved method called the Piecewise-HAM (P-HAM). Instead of a single h-value, this method uses many h-values. Each h-value comes from an individual h-curve, while each h-curve is plotted by fixing the time t at a different value. Each h-value is claimed to produce a good approximation only in a neighborhood centered at the corresponding t on which the h-curve is based. These segments of good approximations are then joined to form the approximation curve. In this way, the convergence region is further enlarged. The P-HAM is illustrated and supported by examples.
A public health decision support system model using reasoning methods.
Mera, Maritza; González, Carolina; Blobel, Bernd
2015-01-01
Public health programs must be based on the real health needs of the population. However, the design of efficient and effective public health programs depends on the availability of information that allows users to identify, at the right time, the health issues that require special attention. The objective of this paper is to propose a case-based reasoning model for the support of decision-making in public health. The model integrates a decision-making process and case-based reasoning, reusing past experiences for promptly identifying new population health priorities. A prototype implementation of the model was performed, deploying the case-based reasoning framework jColibri. The proposed model contributes to solving problems found today when designing public health programs in Colombia. Current programs are developed under uncertain conditions, as the underlying analyses are carried out on the basis of outdated and unreliable data.
NASA Technical Reports Server (NTRS)
Tsai, C.; Szabo, B. A.
1973-01-01
An approach to the finite element method which utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications of plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some consideration is given to implementation of the constraint method in general-purpose computer programs such as NASTRAN.
NASA Astrophysics Data System (ADS)
Sandhu, Rajinder; Kaur, Jaspreet; Thapar, Vivek
2018-02-01
Dengue, also known as break-bone fever, is a tropical disease transmitted by mosquitoes. If the similarity between dengue-infected users can be identified, it can help government health agencies manage an outbreak more effectively. To find similarity between cases of dengue, users' personal and health information are the two fundamental requirements. Identification of similar symptoms, causes, effects, predictions and treatment procedures is important. In this paper, an effective framework is proposed which finds similar patients suffering from dengue using a keyword-aware domain thesaurus and a case-based reasoning method. The paper focuses on the use of an ontology-dependent domain thesaurus technique to extract relevant keywords and then build cases with the help of case-based reasoning. Similar cases can be shared with users, nearby hospitals and health organizations to manage the problem more adequately. Two million case bases were generated to test the proposed similarity method. Experimental evaluation of the proposed framework showed high accuracy and a low error rate for finding similar cases of dengue as compared to the UPCC and IPCC algorithms. The framework developed in this paper targets dengue but can easily be extended to other domains.
Rough case-based reasoning system for continuous casting
NASA Astrophysics Data System (ADS)
Su, Wenbin; Lei, Zhufeng
2018-04-01
Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billet, to improve the efficiency and accuracy of determining the processing parameters. The object-oriented method was applied to express the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and a retrieval mechanism for the continuous casting cases was designed. Some cases were adopted to test the retrieval mechanism; by analyzing the results, the influence of the retrieval attributes on determining the processing parameters was revealed. A comprehensive evaluation model was established by using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. By using the system, the knowledge is not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of the continuous casting and improve the intelligence level of the continuous casting process.
Reasons for discontinuation of reversible contraceptive methods by women with epilepsy.
Mandle, Hannah B; Cahill, Kaitlyn E; Fowler, Kristen M; Hauser, W Allen; Davis, Anne R; Herzog, Andrew G
2017-05-01
To report the reasons for discontinuation of contraceptive methods by women with epilepsy (WWE). These retrospective data come from a web-based survey regarding the contraceptive practices of 1,144 WWE in the community, ages 18-47 years. We determined the frequencies of contraceptive discontinuations and the reasons for discontinuation. We compared risk ratios for rates of discontinuation among contraceptive methods and categories. We used chi-square analysis to test the independence of discontinuation reasons among the various contraceptive methods and categories and when stratified by antiepileptic drug (AED) categories. Nine hundred fifty-nine of 2,393 (40.6%) individual, reversible contraceptive methods were discontinued. One-half (51.8%) of the WWE who discontinued a method discontinued at least two methods. Hormonal contraception was discontinued most often (553/1,091, 50.7%) with a risk ratio of 1.94 (1.54-2.45, p < 0.0001) compared to intrauterine devices (IUDs), the category that was discontinued the least (57/227, 25.1%). Among all individual methods, the contraceptive patch was stopped most often (79.7%) and the progestin-IUD was stopped the least (20.1%). The top three reasons for discontinuation among all methods were reliability concerns (13.9%), menstrual problems (13.5%), and increased seizures (8.6%). There were significant differences among discontinuation rates and reasons when stratified by AED category for hormonal contraception but not for any other contraceptive category. Contraception counseling for WWE should consider the special experience profiles that are unique to this special population on systemic hormonal contraception. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Fast, large-scale hologram calculation in wavelet domain
NASA Astrophysics Data System (ADS)
Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi
2018-04-01
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
BRYNTRN: A baryon transport model
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.
1989-01-01
The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.
Fuzzy Behavior-Based Navigation for Planetary
NASA Technical Reports Server (NTRS)
Tunstel, Edward; Danny, Harrison; Lippincott, Tanya; Jamshidi, Mo
1997-01-01
Adaptive behavioral capabilities are necessary for robust rover navigation in unstructured and partially mapped environments. A control approach is described which exploits the approximate reasoning capability of fuzzy logic to produce adaptive motion behavior. In particular, a behavior-based architecture for hierarchical fuzzy control of microrovers is presented. Its structure is described, as well as the mechanisms of control decision-making which give rise to adaptive behavior. Control decisions for local navigation result from a consensus of recommendations offered only by behaviors that are applicable to current situations. Simulation results predict the navigation performance of a microrover in simplified Mars-analog terrain.
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation for using FAS on unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. Previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful and successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-10
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
Fully decoupled monolithic projection method for natural convection problems
NASA Astrophysics Data System (ADS)
Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il
2017-04-01
To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term with incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally employed to a global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time step limitation, reduces the computational cost because only one Poisson equation is required to be solved, and preserves the second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts a three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.
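As a point of reference for the time discretization, the sketch below applies the Crank-Nicolson scheme named above to a one-dimensional heat equation. It is only a minimal illustration of that scheme, not the paper's fully decoupled monolithic projection method; the grid sizes and initial condition are assumptions chosen for the example.

```python
# A minimal sketch, not the paper's solver: the Crank-Nicolson time discretization
# with second-order central differences, applied to the 1D heat equation
# u_t = u_xx with homogeneous Dirichlet boundaries.
import numpy as np

nx, nt, L, T = 64, 200, 1.0, 0.1
dx, dt = L / (nx + 1), T / nt
x = np.linspace(dx, L - dx, nx)                 # interior points only
u = np.sin(np.pi * x)                           # initial condition

# Second-order central-difference Laplacian on the interior points.
A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
     + np.diag(np.ones(nx - 1), -1)) / dx**2

I = np.eye(nx)
lhs = I - 0.5 * dt * A                          # Crank-Nicolson: implicit half step
rhs = I + 0.5 * dt * A                          #                 explicit half step
for _ in range(nt):
    u = np.linalg.solve(lhs, rhs @ u)

exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
print("max error at t = T:", np.max(np.abs(u - exact)))
```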
Approximated maximum likelihood estimation in multifractal random walks
NASA Astrophysics Data System (ADS)
Løvsletten, O.; Rypdal, M.
2012-04-01
We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.
Hoang, Tuan; Tran, Dat; Huang, Xu
2013-01-01
Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for two-class BCI classification problems. Current extensions of this method to multiple classes based on subspace union and covariance matrix similarity do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a new subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a from BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with four classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
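For context, the sketch below shows the standard two-class CSP baseline mentioned above, in which the spatial filters are generalized eigenvectors of the class-averaged covariance matrices. It is not the proposed ACPC method, and the array shapes and random data are assumptions made for the example.

```python
# A minimal sketch of the standard two-class CSP baseline, not the proposed ACPC method.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [np.cov(t) for t in trials]            # channel-by-channel covariances
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w; eigenvalues sorted ascending.
    vals, vecs = eigh(Ca, Ca + Cb)
    # Keep filters from both ends of the spectrum (most discriminative directions).
    idx = np.r_[np.argsort(vals)[:n_filters // 2], np.argsort(vals)[-n_filters // 2:]]
    return vecs[:, idx]

rng = np.random.default_rng(0)
trials_a = rng.standard_normal((20, 8, 250))          # illustrative random "EEG" trials
trials_b = rng.standard_normal((20, 8, 250))
W = csp_filters(trials_a, trials_b)
features = np.log(np.var(W.T @ trials_a[0], axis=1))  # log-variance features, one trial
print(features)
```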
Deconstructing climate misinformation to identify reasoning errors
NASA Astrophysics Data System (ADS)
Cook, John; Ellerton, Peter; Kinkead, David
2018-02-01
Misinformation can have significant societal consequences. For example, misinformation about climate change has confused the public and stalled support for mitigation policies. When people lack the expertise and skill to evaluate the science behind a claim, they typically rely on heuristics such as substituting judgment about something complex (i.e. climate science) with judgment about something simple (i.e. the character of people who speak about climate science) and are therefore vulnerable to misleading information. Inoculation theory offers one approach to effectively neutralize the influence of misinformation. Typically, inoculations convey resistance by providing people with information that counters misinformation. In contrast, we propose inoculating against misinformation by explaining the fallacious reasoning within misleading denialist claims. We offer a strategy based on critical thinking methods to analyse and detect poor reasoning within denialist claims. This strategy includes detailing argument structure, determining the truth of the premises, and checking for validity, hidden premises, or ambiguous language. Focusing on argument structure also facilitates the identification of reasoning fallacies by locating them in the reasoning process. Because this reason-based form of inoculation is based on general critical thinking methods, it offers the distinct advantage of being accessible to those who lack expertise in climate science. We applied this approach to 42 common denialist claims and find that they all demonstrate fallacious reasoning and fail to refute the scientific consensus regarding anthropogenic global warming. This comprehensive deconstruction and refutation of the most common denialist claims about climate change is designed to act as a resource for communicators and educators who teach climate science and/or critical thinking.
Short-term solar flare prediction using image-case-based reasoning
NASA Astrophysics Data System (ADS)
Liu, Jin-Fu; Li, Fei; Zhang, Huai-Peng; Yu, Da-Ren
2017-10-01
Solar flares strongly influence space weather and human activities, and their prediction is highly complex. Existing solutions such as data-based approaches and model-based approaches share a common shortcoming, which is the lack of human engagement in the forecasting process. An image-case-based reasoning method is introduced to achieve this goal. The image case library is composed of SOHO/MDI longitudinal magnetograms, from which the maximum horizontal gradient, the length of the neutral line and the number of singular points are extracted for retrieving similar image cases. Genetic optimization algorithms are employed for optimizing the weight assignment for image features and the number of similar image cases retrieved. The similar image cases and the prediction results derived by majority voting over these cases are output and shown to the forecaster, so that his/her experience can be integrated with the final prediction results. Experimental results demonstrate that the case-based reasoning approach performs slightly better than other methods and is more efficient, with forecasts improved by humans.
Applying temporal abstraction and case-based reasoning to predict approaching influenza waves.
Schmidt, Rainer; Gierl, Lothar
2002-01-01
The goal of the TeCoMed project is to send early warnings against forthcoming waves or even epidemics of infectious diseases, especially of influenza, to interested practitioners, pharmacists etc. in the German federal state Mecklenburg-Western Pomerania. The forecast of these waves is based on written confirmations of unfitness for work of the main German health insurance company. Since influenza waves are difficult to predict because of their cyclic but not regular behaviour, statistical methods based on the computation of mean values are not helpful. Instead, we have developed a prognostic model that makes use of similar former courses. Our method combines Case-based Reasoning with Temporal Abstraction to decide whether early warning is appropriate.
Adaptive photoacoustic imaging quality optimization with EMD and reconstruction
NASA Astrophysics Data System (ADS)
Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.
2016-10-01
The biomedical photoacoustic (PA) signal is characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot give useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed among different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters by means of brute-force searching algorithms costs too much time, which prevents this method from practical use. To find parameters within a reasonable time, heuristic algorithms, which are designed for finding good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for optimal parameters of the IMFs in this paper. The effectiveness of our proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which might bear potential for future clinical PA imaging de-noising.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case with four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
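A minimal sketch of the thermodynamic-integration idea is given below. It is not the authors' groundwater application: a conjugate Gaussian toy model is assumed so that the power posterior at each heating coefficient can be sampled directly (standing in for full MCMC runs), and the resulting estimate is checked against the analytic marginal likelihood.

```python
# A minimal sketch of thermodynamic integration: the log marginal likelihood is the
# integral over beta in [0, 1] of the power-posterior expectation of the log-likelihood.
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)
sigma, tau, n = 1.0, 2.0, 30                 # likelihood sd, prior sd, sample size
y = rng.normal(0.7, sigma, size=n)           # synthetic data

def log_like(theta):
    return norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)

betas = np.linspace(0.0, 1.0, 21)
expected_loglike = []
for b in betas:
    # Power posterior  prior(theta) * likelihood(theta)^b  is Gaussian for this model.
    prec = 1.0 / tau**2 + b * n / sigma**2
    mean = (b * y.sum() / sigma**2) / prec
    theta_samples = rng.normal(mean, 1.0 / np.sqrt(prec), size=5000)
    expected_loglike.append(log_like(theta_samples).mean())

log_Z_ti = np.trapz(expected_loglike, betas)           # thermodynamic integration
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))  # analytic marginal for checking
log_Z_exact = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)
print("thermodynamic estimate:", log_Z_ti, " analytic value:", log_Z_exact)
```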
Aspects Topologiques de la Theorie des Champs et leurs Applications
NASA Astrophysics Data System (ADS)
Caenepeel, Didier
This thesis is dedicated to the study of various topological aspects of field theory and is divided into three parts. In two space dimensions the possibility of fractional statistics can be implemented by adding an appropriate "fictitious" electric charge and magnetic flux to each particle (after which they are known as anyons). Since the statistical interaction is rather difficult to handle, a mean-field approximation is used in order to describe a gas of anyons. We derive a criterion for the validity of this approximation using the inherent feature of parity violation in the scattering of anyons. We use this new method in various examples of anyons and show both analytically and numerically that the approximation is justified if the statistical interaction is weak, and that it must be weaker for boson-based than for fermion-based anyons. Chern-Simons theories give an elegant implementation of anyonic properties in field theories, which permits the emergence of new mechanisms for anyon superconductivity. Since it is reasonable to think that superconductivity is a low-energy phenomenon, we have been interested in non-relativistic C-S systems. We present the scalar field effective potential for non-relativistic matter coupled to both Abelian and non-Abelian C-S gauge fields. We perform the calculations using functional methods in background fields. Finally, we compute the scalar effective potential in various gauges and treat divergences with various regularization schemes. In three space dimensions, a generalization of Chern-Simons theory may be achieved by introducing an antisymmetric tensor gauge field. We use these theories, called B wedge F theories, to present an alternative to the Higgs mechanism to generate masses for non-Abelian gauge fields. The initial Lagrangian is composed of a fermion with current-current and dipole-dipole type self-interactions minimally coupled to non-Abelian gauge fields. The mass generation occurs upon the fermionic functional integration. We show that by suitably adjusting the coupling constants the effective theory contains massive non-Abelian gauge fields without any residual scalars or other degrees of freedom.
The Effect of Problem-Solving Video Games on the Science Reasoning Skills of College Students
NASA Astrophysics Data System (ADS)
Fanetti, Tina M.
As the world continues to rapidly change, students are faced with the need to develop flexible skills, such as science reasoning, that will help them thrive in the new knowledge economy. Prensky (2001), Gee (2003), and Van Eck (2007) have all suggested that the way to engage learners and teach them the necessary skills is through digital games, but empirical studies focusing on popular games are scant. One way digital games, especially video games, could be useful is as a flexible and inexpensive method a student could use at their convenience to improve selected science reasoning skills. Problem-solving video games, which require the use of reasoning and problem solving to answer a variety of cognitive challenges, could be a promising method to improve selected science reasoning skills. Using think-aloud protocols and interviews, a qualitative study was carried out with a small sample of college students to examine what impact two popular video games, Professor Layton and the Curious Village and Professor Layton and the Diabolical Box, had on specific science reasoning skills. The subject classified as an expert in both gaming and reasoning tended to use more higher-order thinking and reasoning skills than the novice reasoners. Based on the assessments, the science reasoning of the college students did not improve during the course of game play. Similar to earlier studies, students tended to use trial and error as their primary method of solving the various puzzles in the game and, additionally, did not recognize when to use the appropriate reasoning skill to solve a puzzle, such as proportional reasoning.
NASA Astrophysics Data System (ADS)
Kacprzak, T.; Herbel, J.; Amara, A.; Réfrégier, A.
2018-02-01
Approximate Bayesian Computation (ABC) is a method to obtain a posterior distribution without a likelihood function, using simulations and a set of distance metrics. For that reason, it has recently been gaining popularity as an analysis tool in cosmology and astrophysics. Its drawback, however, is a slow convergence rate. We propose a novel method, which we call qABC, to accelerate ABC with Quantile Regression. In this method, we create a model of the quantiles of the distance measure as a function of the input parameters. This model is trained on a small number of simulations and estimates which regions of the prior space are likely to be accepted into the posterior. Other regions are then immediately rejected. This procedure is repeated as more simulations become available. We apply it to the practical problem of estimating the redshift distribution of cosmological samples, using forward modelling developed in previous work. The qABC method converges to nearly the same posterior as the basic ABC. It uses, however, only 20% of the number of simulations compared to basic ABC, achieving a fivefold gain in execution time for our problem. For other problems the acceleration rate may vary; it depends on how close the prior is to the final posterior. We discuss possible improvements and extensions to this method.
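The sketch below shows only the basic rejection-ABC loop that qABC accelerates, not the quantile-regression variant itself; the toy simulator, the uniform prior, the summary statistic, and the tolerance are all assumptions made for the example.

```python
# A minimal sketch of rejection ABC: draw parameters from the prior, simulate data,
# and keep the draws whose summary statistic lies within a tolerance of the observed one.
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, n=100):
    return rng.normal(theta, 1.0, size=n)

observed = simulator(1.3)
obs_stat = observed.mean()                      # summary statistic

def abc_rejection(n_draws=20000, tol=0.05):
    thetas = rng.uniform(-5.0, 5.0, size=n_draws)              # prior draws
    accepted = [t for t in thetas
                if abs(simulator(t).mean() - obs_stat) < tol]   # distance metric
    return np.array(accepted)

posterior = abc_rejection()
print("accepted draws:", posterior.size, " posterior mean ~", posterior.mean())
```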
Sparse approximation problem: how rapid simulated annealing succeeds and fails
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the G-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
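The sketch below is a minimal simulated-annealing search over support sets for a planted sparse approximation problem; it is not the authors' implementation or cooling schedule, and the problem sizes, noise level, and schedule are assumptions made for the example.

```python
# A minimal sketch of simulated annealing for sparse approximation: the state is a
# candidate support of fixed size, a move swaps one index, and the energy is the
# least-squares residual on that support.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 40, 120, 5                         # measurements, dictionary size, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

def energy(support):
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return np.sum((A[:, support] @ coef - y) ** 2)

support = list(rng.choice(m, k, replace=False))
E = energy(support)
for step in range(4000):
    T = 0.1 * 0.999 ** step                  # geometric cooling schedule
    cand = support.copy()
    cand[rng.integers(k)] = rng.integers(m)  # swap one dictionary index
    if len(set(cand)) < k:
        continue                             # reject duplicated indices
    E_cand = energy(cand)
    if E_cand < E or rng.random() < np.exp((E - E_cand) / T):
        support, E = cand, E_cand            # Metropolis acceptance

print("recovered support:", sorted(support), " true:", sorted(np.flatnonzero(x_true)))
```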
An approximation function for frequency constrained structural optimization
NASA Technical Reports Server (NTRS)
Canfield, R. A.
1989-01-01
The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations of frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency and perhaps subject to other constraints such as stress, displacement, and gage size as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy requirements of an aircraft or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations or others may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
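The generic form of the Rayleigh quotient on which such an approximation is built is written out below; the notation (stiffness and mass matrices K(x) and M(x), mode shapes held fixed at the current design point) is an assumption for illustration and is not taken verbatim from the report.

```latex
% Generic Rayleigh-quotient form underlying a frequency-constraint approximation
% (notation assumed here): K(x) and M(x) are the stiffness and mass matrices as
% functions of the design variables x, and \phi_i is the i-th mode shape held fixed
% at the current design point x_0.
\[
  \omega_i^2(x) \;\approx\; R_i(x)
  \;=\; \frac{\phi_i^{\mathsf T} K(x)\, \phi_i}{\phi_i^{\mathsf T} M(x)\, \phi_i},
  \qquad \phi_i = \phi_i(x_0),
\]
% so a lower bound \omega_i \ge \bar{\omega} can be imposed as the approximate
% constraint R_i(x) - \bar{\omega}^2 \ge 0 during each design cycle.
```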
Magnetic probing of the solar interior
NASA Technical Reports Server (NTRS)
Benton, E. R.; Estes, R. H.
1985-01-01
The magnetic field patterns in the region beneath the solar photosphere are determined. An approximate method for downward extrapolation of line-of-sight magnetic field measurements taken at the solar photosphere was developed. It utilizes the mean field theory of electromagnetism in a form thought to be appropriate for the solar convection zone, and a way to test that theory is proposed. The straightforward application of the lowest-order theory with the complete model fit to these data does not indicate the existence of any reasonable depth at which flux conservation is achieved.
Scattering by ensembles of small particles experiment, theory and application
NASA Technical Reports Server (NTRS)
Gustafson, B. A. S.
1980-01-01
A hypothetical self-consistent picture of the evolution of prestellar interstellar dust through a comet phase leads to predictions about the composition of the circumsolar dust cloud. Scattering properties of the resulting conglomerates, which have a bird's-nest type of structure, are investigated using a microwave analogue technique. Approximate theoretical methods of general interest are developed which compare favorably with the experimental results. The principal features of the scattering of visible radiation by zodiacal light particles are reasonably reproduced. A component which is suggestive of α-meteoroids is also predicted.
NASA Technical Reports Server (NTRS)
Loane, J. T.; Bowhill, S. A.; Mayes, P. E.
1982-01-01
The effects of atmospheric turbulence and the basis for the coherent scatter radar techniques are discussed. The reasons are given for upgrading the radar system to a larger steerable array. Phased-array theory pertinent to the system design is reviewed, along with approximations for maximum directive gain and blind angles due to mutual coupling. The methods and construction techniques employed in the UHF model study are explained. The antenna range is described, with a block diagram for the mode of operation used.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
Feng, Hao; Ashkar, Rana; Steinke, Nina; ...
2018-02-01
A method dubbed grating-based holography was recently used to determine the structure of colloidal fluids in the rectangular grooves of a diffraction grating from X-ray scattering measurements. Similar grating-based measurements have also been recently made with neutrons using a technique called spin-echo small-angle neutron scattering. The analysis of the X-ray diffraction data was done using an approximation that treats the X-ray phase change caused by the colloidal structure as a small perturbation to the overall phase pattern generated by the grating. In this paper, the adequacy of this weak phase approximation is explored for both X-ray and neutron grating holography. Additionally, it is found that there are several approximations hidden within the weak phase approximation that can lead to incorrect conclusions from experiments. In particular, the phase contrast for the empty grating is a critical parameter. Finally, while the approximation is found to be perfectly adequate for X-ray grating holography experiments performed to date, it cannot be applied to similar neutron experiments because the latter technique requires much deeper grating channels.
NASA Technical Reports Server (NTRS)
Hartung, Lin C.
1991-01-01
A method for predicting radiation absorption and emission coefficients in thermochemical nonequilibrium flows is developed. The method is called the Langley optimized radiative nonequilibrium code (LORAN). It applies the smeared band approximation for molecular radiation to produce moderately detailed results and is intended to fill the gap between detailed but costly prediction methods and very fast but highly approximate methods. The optimization of the method to provide efficient solutions allowing coupling to flowfield solvers is discussed. Representative results are obtained and compared to previous nonequilibrium radiation methods, as well as to ground- and flight-measured data. Reasonable agreement is found in all cases. A multidimensional radiative transport method is also developed for axisymmetric flows. Its predictions for wall radiative flux are 20 to 25 percent lower than those of the tangent slab transport method, as expected, though additional investigation of the symmetry and outflow boundary conditions is indicated. The method was applied to the peak heating condition of the aeroassist flight experiment (AFE) trajectory, with results comparable to predictions from other methods. The LORAN method was also applied in conjunction with the computational fluid dynamics (CFD) code LAURA to study the sensitivity of the radiative heating prediction to various models used in nonequilibrium CFD. This study suggests that radiation measurements can provide diagnostic information about the detailed processes occurring in a nonequilibrium flowfield because radiation phenomena are very sensitive to these processes.
NASA Astrophysics Data System (ADS)
Stoitsov, M. V.; Schunck, N.; Kortelainen, M.; Michel, N.; Nam, H.; Olsen, E.; Sarich, J.; Wild, S.
2013-06-01
We describe the new version 2.00d of the code HFBTHO that solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogoliubov (HFB) problem by using the cylindrical transformed deformed harmonic oscillator basis. In the new version, we have implemented the following features: (i) the modified Broyden method for non-linear problems, (ii) optional breaking of reflection symmetry, (iii) calculation of axial multipole moments, (iv) finite temperature formalism for the HFB method, (v) linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations, (vi) blocking of quasi-particles in the Equal Filling Approximation (EFA), (vii) framework for generalized energy density with arbitrary density-dependences, and (viii) shared memory parallelism via OpenMP pragmas. Program summaryProgram title: HFBTHO v2.00d Catalog identifier: ADUI_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUI_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 167228 No. of bytes in distributed program, including test data, etc.: 2672156 Distribution format: tar.gz Programming language: FORTRAN-95. Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT5, Cray XE6. Operating system: UNIX, LINUX, WindowsXP. RAM: 200 Mwords Word size: 8 bits Classification: 17.22. Does the new version supercede the previous version?: Yes Catalog identifier of previous version: ADUI_v1_0 Journal reference of previous version: Comput. Phys. Comm. 167 (2005) 43 Nature of problem: The solution of self-consistent mean-field equations for weakly-bound paired nuclei requires a correct description of the asymptotic properties of nuclear quasi-particle wave functions. In the present implementation, this is achieved by using the single-particle wave functions of the transformed harmonic oscillator, which allows for an accurate description of deformation effects and pairing correlations in nuclei arbitrarily close to the particle drip lines. Solution method: The program uses the axial Transformed Harmonic Oscillator (THO) single- particle basis to expand quasi-particle wave functions. It iteratively diagonalizes the Hartree-Fock-Bogoliubov Hamiltonian based on generalized Skyrme-like energy densities and zero-range pairing interactions until a self-consistent solution is found. A previous version of the program was presented in: M.V. Stoitsov, J. Dobaczewski, W. Nazarewicz, P. Ring, Comput. Phys. Commun. 167 (2005) 43-63. Reasons for new version: Version 2.00d of HFBTHO provides a number of new options such as the optional breaking of reflection symmetry, the calculation of axial multipole moments, the finite temperature formalism for the HFB method, optimized multi-constraint calculations, the treatment of odd-even and odd-odd nuclei in the blocking approximation, and the framework for generalized energy density with arbitrary density-dependences. It is also the first version of HFBTHO to contain threading capabilities. 
Summary of revisions: The modified Broyden method has been implemented, Optional breaking of reflection symmetry has been implemented, The calculation of all axial multipole moments up to λ=8 has been implemented, The finite temperature formalism for the HFB method has been implemented, The linear constraint method based on the approximation of the Random Phase Approximation (RPA) matrix for multi-constraint calculations has been implemented, The blocking of quasi-particles in the Equal Filling Approximation (EFA) has been implemented, The framework for generalized energy density functionals with arbitrary density-dependence has been implemented, Shared memory parallelism via OpenMP pragmas has been implemented. Restrictions: Axial- and time-reversal symmetries are assumed. Unusual features: The user must have access to the LAPACK subroutines DSYEVD, DSYTRF and DSYTRI, and their dependences, which compute eigenvalues and eigenfunctions of real symmetric matrices, the LAPACK subroutines DGETRI and DGETRF, which invert arbitrary real matrices, and the BLAS routines DCOPY, DSCAL, DGEMM and DGEMV for double-precision linear algebra (or provide another set of subroutines that can perform such tasks). The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/. Running time: Highly variable, as it depends on the nucleus, size of the basis, requested accuracy, requested configuration, compiler and libraries, and hardware architecture. An order of magnitude would be a few seconds for ground-state configurations in small bases N≈8-12, to a few minutes in very deformed configuration of a heavy nucleus with a large basis N>20.
40 CFR 1502.22 - Incomplete or unavailable information.
Code of Federal Regulations, 2013 CFR
2013-07-01
... approaches or research methods generally accepted in the scientific community. For the purposes of this... credible scientific evidence which is relevant to evaluating the reasonably foreseeable significant adverse... scientific evidence, is not based on pure conjecture, and is within the rule of reason. (c) The amended...
40 CFR 1502.22 - Incomplete or unavailable information.
Code of Federal Regulations, 2014 CFR
2014-07-01
... approaches or research methods generally accepted in the scientific community. For the purposes of this... credible scientific evidence which is relevant to evaluating the reasonably foreseeable significant adverse... scientific evidence, is not based on pure conjecture, and is within the rule of reason. (c) The amended...
40 CFR 1502.22 - Incomplete or unavailable information.
Code of Federal Regulations, 2012 CFR
2012-07-01
... approaches or research methods generally accepted in the scientific community. For the purposes of this... credible scientific evidence which is relevant to evaluating the reasonably foreseeable significant adverse... scientific evidence, is not based on pure conjecture, and is within the rule of reason. (c) The amended...
40 CFR 1502.22 - Incomplete or unavailable information.
Code of Federal Regulations, 2010 CFR
2010-07-01
... approaches or research methods generally accepted in the scientific community. For the purposes of this... credible scientific evidence which is relevant to evaluating the reasonably foreseeable significant adverse... scientific evidence, is not based on pure conjecture, and is within the rule of reason. (c) The amended...
40 CFR 1502.22 - Incomplete or unavailable information.
Code of Federal Regulations, 2011 CFR
2011-07-01
... approaches or research methods generally accepted in the scientific community. For the purposes of this... credible scientific evidence which is relevant to evaluating the reasonably foreseeable significant adverse... scientific evidence, is not based on pure conjecture, and is within the rule of reason. (c) The amended...
NASA Technical Reports Server (NTRS)
Shambayati, Shervin
2001-01-01
In order to evaluate the performance of strong channel codes in the presence of imperfect carrier phase tracking for residual carrier BPSK modulation, an approximate 'brick wall' model is developed in this paper which is independent of the channel code type at high data rates. It is shown that this approximation is reasonably accurate (less than 0.7 dB at low FERs for the (1784, 1/6) code and less than 0.35 dB at low FERs for the (5920, 1/6) code). Based on the approximation's accuracy, it is concluded that the effects of imperfect carrier tracking are more or less independent of the channel code type for strong channel codes. Therefore, the advantage that one strong channel code has over another with perfect carrier tracking translates to nearly the same advantage under imperfect carrier tracking conditions. This allows link designers to incorporate the projected performance of strong channel codes into their design tables without worrying about their behavior in the face of imperfect carrier phase tracking.
Investigating Geosparql Requirements for Participatory Urban Planning
NASA Astrophysics Data System (ADS)
Mohammadi, E.; Hunter, A. J. S.
2015-06-01
We propose that participatory GIS (PGIS) activities, including participatory urban planning, can be made more efficient and effective if spatial reasoning rules are integrated with PGIS tools to simplify engagement for public contributors. Spatial reasoning is used to describe relationships between spatial entities. These relationships can be evaluated quantitatively or qualitatively using geometrical algorithms, ontological relations, and topological methods. Semantic web services utilize tools and methods that can facilitate spatial reasoning. GeoSPARQL, introduced by OGC, is a spatial reasoning standard used to make declarations about entities (graphical contributions) that take the form of a subject-predicate-object triple or statement. GeoSPARQL uses three basic methods to infer topological relationships between spatial entities: OGC's simple feature topology, RCC8, and the DE-9IM model. While these methods are comprehensive in their ability to define topological relationships between spatial entities, they are often inadequate for defining the complex relationships that exist in the spatial realm, particularly relationships between urban entities, such as those between a bus route, the collection of associated bus stops and their overall surroundings as an urban planning pattern. In this paper we investigate common qualitative spatial reasoning methods as a preliminary step toward enhancing the capabilities of GeoSPARQL in an online participatory GIS framework in which reasoning is used to validate plans against standard patterns found in an efficient and effective urban environment.
Loomis, John H.; Richardson, Leslie; Kroeger, Timm; Casey, Frank
2014-01-01
Ecosystem goods and services are now widely recognized as the benefits that humans derive from the natural environment around them including abiotic (e.g. atmosphere) and biotic components. The work by Costanza et al. (1997) to value the world’s ecosystem services brought the concept of ecosystem service valuation to the attention of the world press and environmental economists working in the area of non-market valuation. The article’s US$33 trillion estimate of these services, despite world GDP being only US$18 trillion, was definitely headline grabbing. This ambitious effort was undertaken with reliance on transferring existing values per unit from other (often site specific) valuation studies. Benefit transfer (see Boyle and Bergstrom, 1992; Rosenberger and Loomis, 2000, 2001) involves transfers of values per unit from an area that has been valued using primary valuation methods such as contingent valuation, travel cost or hedonic property methods (Champ et al., 2003) to areas for which values are needed. Benefit transfer often provides a reasonable approximation of the benefit of unstudied ecosystem services based on transfer of benefits estimates per unit (per visitor day, per acre) from existing studies. An appropriate benefit transfer should be performed on the same spatial scale of analysis (e.g. reservoir to reservoir, city to city) as the original study. However, the reasonableness of benefit transfer may be strained when applying locally derived per acre values from studies of several thousand acres of a resource such as wetlands to hundreds of millions of acres of wetlands.
More on the alleged 1970 geomagnetic jerk
Alldredge, L.R.
1985-01-01
French and United Kingdom workers have published reports describing a sudden change in the secular acceleration, called an impulse or a jerk, which took place around 1970. They claim that this change took place over a period of a year or two and that the sources of the alleged jerk are internal. An earlier paper by this author questioned their method of analysis, pointing out that their piecemeal fitting of parabolas to the data will always create a discontinuity in the secular acceleration where the parabolas join, and that the place where the parabolas join is an a priori assumption and not a result of the analysis. This paper gives a very brief summary of that first paper and then adds additional reasons for questioning the allegation that there was a worldwide sudden jerk in the magnetic field of internal origin around 1970. These new reasons are based largely on new field models which give cubic approximations of the field right through the 1970 timeframe and therefore have no discontinuities in the second derivative (jerk) around 1970. Some recent Japanese work shows several sudden changes in the secular variation pattern which cover limited areas and do not seem to be closely related to each other or to the irregularity noted in the European area near 1970. The secular variation picture which seems to be emerging is one with many local or limited-regional secular variation changes which appear to be almost unrelated to each other in time or space. A worldwide spherical harmonic model including coefficients up to degree 13 could never properly depict such a situation. © 1985.
NASA Technical Reports Server (NTRS)
Ito, K.
1983-01-01
Approximation schemes based on Legendre-tau approximation are developed for application to parameter identification problems for delay and partial differential equations. The tau method is based on representing the approximate solution as a truncated series of orthonormal functions. The characteristic feature of the Legendre-tau approach is that when the solution to a problem is infinitely differentiable, the rate of convergence is faster than any finite power of 1/N; higher accuracy is thus achieved, making the approach suitable for small N.
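The rapid decay of the truncation error with N can be seen in the small sketch below, which fits a smooth function with a truncated Legendre expansion. This is only an illustration of Legendre-series convergence, not the report's parameter-identification scheme, and the test function is an assumption chosen for the example.

```python
# A minimal sketch of spectral (Legendre-series) convergence for a smooth function.
import numpy as np
from numpy.polynomial import legendre

f = lambda t: np.exp(t) * np.cos(3 * t)        # smooth function on [-1, 1]
t = np.linspace(-1.0, 1.0, 500)

for N in (4, 8, 12, 16):
    coeffs = legendre.legfit(t, f(t), N)       # least-squares Legendre coefficients
    err = np.max(np.abs(legendre.legval(t, coeffs) - f(t)))
    print(f"N = {N:2d}   max error = {err:.2e}")
```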
Melse-Boonstra, A; Rexwinkel, H; Bulux, J; Solomons, N W; West, C E
1999-04-01
To compare methods for estimating discretionary salt intake, that is, salt added during food preparation and consumption in the home. The study was carried out in a rural Guatemalan village. Subjects were selected non-randomly, based on their willingness to cooperate. Nine mother-son dyads participated; the sons were aged 6-9 y. Three approaches for estimating the discretionary salt consumption were used: 24 h recall; collection of duplicate portions of salt; and urinary excretion of lithium during consumption of lithium-labelled household salt. Total salt intake was assessed from the excretion of chloride over 24 h. The mean discretionary salt consumption based on lithium excretion for mothers was 3.9+/-2.0 g/d (mean +/- s.d.) and for children 1.3+/-0.6 g/d. Estimates from the 24 h recalls and from the duplicate portion method were approximately twice and three times those measured with the lithium-marker technique respectively. The salt intake estimated from the recall method was associated with the lithium-marker technique for both mothers and children (Spearman correlation coefficient, 0.76 and 0.70 respectively). The mean daily coefficient of variation in consumption of discretionary salt measured by the three methods, for mothers and boys respectively, were: lithium marker, 51.7 and 43.7%; 24 h recall, 65.8 and 50.7%; and duplicate portion, 51.0 and 62.6%. We conclude that an interview method for estimating discretionary salt intake may be a reasonable approach for determining the relative rank-order in a population, especially among female food preparers themselves, but may grossly overestimate the actual intake of salt added during food preparation and consumption.
SCF-Xα-SW electron densities with the overlapping sphere approximation
NASA Astrophysics Data System (ADS)
McMaster, Blair N.; Smith, Vedene H., Jr.; Salahub, Dennis R.
Self-consistent-field Xα scattered-wave (SCF-Xα-SW) calculations have been performed for a series of eight first- and second-row homonuclear diatomic molecules using both the touching-sphere (TS) and 25 per cent overlapping-sphere (OS) versions. The OS deformation density maps exhibit much better quantitative agreement with those from other Xα methods, which do not employ the spherical muffin-tin (MT) potential approximation, than do the TS maps. The OS version thus compensates very effectively for the errors involved in the MT approximation in computing electron densities. A detailed comparison between the TS- and OS-Xα-SW orbitals reveals that the reasons for this improvement are surprisingly specific. The dominant effect of the OS approximation is to increase substantially the electron density near the midpoint of bonding σ orbitals, with a consequent reduction of the density behind the atoms. A similar effect occurs for the bonding π orbitals but is less pronounced. These effects are due to a change in hybridization of the orbitals, with the OS approximation increasing the proportion of the subdominant partial waves and hence changing the shapes of the orbitals. It is this increased orbital polarization which so effectively compensates for the lack of (non-spherically symmetric) polarization components in the MT potential when overlapping spheres are used.
Holland, James V; Hardie, Kate; de Dassel, Jessica; Ralph, Anna P
2018-01-01
Abstract Background Prevention of rheumatic heart disease (RHD) remains challenging in high-burden settings globally. After acute rheumatic fever (ARF), secondary antibiotic prophylaxis is required to prevent RHD. International guidelines on recommended durations of secondary prophylaxis differ, with scope for clinician discretion. Because ARF risk decreases with age, ongoing prophylaxis is generally considered unnecessary beyond approximately the third decade. Concordance with guidelines on timely cessation of prophylaxis is unknown. Methods We undertook a register-based audit to determine the appropriateness of antibiotic prophylaxis among clients aged ≥35 years in Australia’s Northern Territory. Data on demographics, ARF episode(s), RHD severity, prophylaxis type, and relevant clinical notes were extracted. The determination of guideline concordance was based on whether (1) national guidelines were followed; (2) a reason for departure from guidelines was documented; (3) lifelong continuation was considered appropriate in all cases of severe RHD. Results We identified 343 clients aged ≥35 years prescribed secondary prophylaxis. Guideline concordance was 39% according to national guidelines, 68% when documented reasons for departures from guidelines were included and 82% if patients with severe RHD were deemed to need lifelong prophylaxis. Shorter times since last echocardiogram or cardiologist review were associated with greater likelihood of guideline concordance (P < .001). The median time since last ARF was 5.9 years in the guideline-concordant group and 24.0 years in the nonconcordant group (P < .001). Thirty-two people had an ARF episode after age 40 years. Conclusions In this setting, appropriate discontinuation of RHD prophylaxis could be improved through timely specialist review to reduce unnecessary burden on clients and health systems.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives allowing for an efficient use of gradient-based optimization methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
NASA Astrophysics Data System (ADS)
Wang, Jing; Yang, Tianyu; Staskevich, Gennady; Abbe, Brian
2017-04-01
This paper studies the cooperative control problem for a class of multiagent dynamical systems with partially unknown nonlinear system dynamics. In particular, the control objective is to solve the state consensus problem for multiagent systems based on the minimisation of certain cost functions for the individual agents. Under the assumption that admissible cooperative controls exist for this class of multiagent systems, the formulated problem is solved by finding the optimal cooperative control using an approximate dynamic programming and reinforcement learning approach. With the aid of neural network parameterisation and online adaptive learning, our method yields a practically implementable, approximate adaptive neural cooperative control for multiagent systems. Specifically, based on Bellman's principle of optimality, the Hamilton-Jacobi-Bellman (HJB) equation for multiagent systems is first derived. We then propose an approximate adaptive policy iteration algorithm for multiagent cooperative control based on neural network approximation of the value functions. The convergence of the proposed algorithm is rigorously proved using the contraction mapping method. Simulation results are included to validate the effectiveness of the proposed algorithm.
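As a much-simplified point of reference for the policy-iteration idea invoked above, the sketch below runs tabular policy iteration on a small randomly generated MDP. The paper itself works with neural approximation of HJB value functions in continuous state spaces, so this finite example and its parameters are assumptions made purely for illustration.

```python
# A minimal, tabular analogue of policy iteration (evaluation + greedy improvement).
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions, gamma = 6, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.standard_normal((n_states, n_actions))                    # rewards

policy = np.zeros(n_states, dtype=int)
for _ in range(50):
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi.
    P_pi = P[np.arange(n_states), policy]
    R_pi = R[np.arange(n_states), policy]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # Policy improvement: greedy one-step lookahead (Bellman's principle of optimality).
    Q = R + gamma * np.einsum('sat,t->sa', P, V)
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal policy:", policy, " values:", np.round(V, 3))
```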
NASA Astrophysics Data System (ADS)
Chui, Siu Lit; Lu, Ya Yan
2004-03-01
Wide-angle full-vector beam propagation methods (BPMs) for three-dimensional wave-guiding structures can be derived on the basis of rational approximants of a square root operator or its exponential (i.e., the one-way propagator). While the less accurate BPM based on the slowly varying envelope approximation can be efficiently solved by the alternating direction implicit (ADI) method, the wide-angle variants involve linear systems that are more difficult to handle. We present an efficient solver for these linear systems that is based on a Krylov subspace method with an ADI preconditioner. The resulting wide-angle full-vector BPM is used to simulate the propagation of wave fields in a Y branch and a taper.
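The following minimal sketch (generic SciPy sparse code, with an arbitrary stand-in matrix rather than a BPM operator, and an ILU factorization standing in for the ADI preconditioner) illustrates the general pattern of a preconditioned Krylov solve of the kind described above.

```python
# Generic sketch: solving a difficult linear system with a Krylov method
# plus a cheap preconditioner. The matrix is an arbitrary sparse stand-in.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                 # incomplete LU factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)  # preconditioner M ~ A^{-1}

x, info = spla.gmres(A, b, M=M)                    # preconditioned Krylov solve
print(info, np.linalg.norm(A @ x - b))             # info == 0 means convergence
```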
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
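A minimal sketch of the underlying method-of-moments idea is given below: a polynomial PDF approximation on an interval is obtained by solving a linear system that matches its first N+1 moments to empirical moments. The data, interval, and degree are illustrative assumptions, not the authors' setup.

```python
# Fit p(x) = sum_i c_i x^i on [a, b] so that integral_a^b x^k p(x) dx = m_k.
import numpy as np

def polynomial_pdf_from_moments(moments, a, b):
    """Coefficients c_0..c_N solving the moment-matching linear system."""
    n = len(moments)
    A = np.empty((n, n))
    for k in range(n):
        for i in range(n):
            p = k + i + 1
            A[k, i] = (b**p - a**p) / p      # integral of x^(k+i) over [a, b]
    return np.linalg.solve(A, np.asarray(moments))

rng = np.random.default_rng(0)
data = rng.normal(0.5, 0.15, size=10_000)    # synthetic data on roughly [0, 1]
a, b, N = 0.0, 1.0, 4
moments = [np.mean(np.clip(data, a, b) ** k) for k in range(N + 1)]  # m_0 = 1
coeffs = polynomial_pdf_from_moments(moments, a, b)

x = np.linspace(a, b, 5)
print(np.polyval(coeffs[::-1], x))           # polynomial PDF approximation at a few points
```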
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered to be unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following a Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
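As a simplified stand-in for the Variational Bayes estimator described above, the sketch below shows only the basic structure of the linear inverse problem y = Mx with a positivity constraint, using regularized non-negative least squares on synthetic M and y.

```python
# Simplified sketch: y = M x with x >= 0, solved by Tikhonov-regularized NNLS.
# M and y below are synthetic; the abstract's actual method is Variational Bayes.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_obs, n_src = 60, 12
M = rng.random((n_obs, n_src))                  # source-receptor-sensitivity matrix
x_true = np.zeros(n_src)
x_true[3], x_true[7] = 5.0, 2.0
y = M @ x_true + 0.05 * rng.standard_normal(n_obs)

lam = 0.1                                       # regularization for the ill-conditioned problem
M_aug = np.vstack([M, np.sqrt(lam) * np.eye(n_src)])
y_aug = np.concatenate([y, np.zeros(n_src)])

x_hat, _ = nnls(M_aug, y_aug)                   # positivity enforced by NNLS
print(np.round(x_hat, 2))
```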
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain the numerical solution with the desired accuracy conveniently. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H^3-regular problem and an anisotropic problem, are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
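The inner solver described above can be sketched as follows: conjugate gradients with a Jacobi (diagonal) preconditioner and a relative residual stopping tolerance. The matrix is a 1D Poisson stand-in, not the 3D finite element systems of the paper.

```python
# Jacobi-preconditioned CG with a relative residual tolerance (SciPy sketch).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1_000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x0 = np.zeros(n)                                # would be the extrapolated initial guess

d_inv = 1.0 / A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda r: d_inv * r)   # Jacobi preconditioner

x, info = spla.cg(A, b, x0=x0, M=M, rtol=1e-8)  # stop on relative residual
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```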
A new approach to estimate parameters of speciation models with application to apes.
Becquet, Celine; Przeworski, Molly
2007-10-01
How populations diverge and give rise to distinct species remains a fundamental question in evolutionary biology, with important implications for a wide range of fields, from conservation genetics to human evolution. A promising approach is to estimate parameters of simple speciation models using polymorphism data from multiple loci. Existing methods, however, make a number of assumptions that severely limit their applicability, notably, no gene flow after the populations split and no intralocus recombination. To overcome these limitations, we developed a new Markov chain Monte Carlo method to estimate parameters of an isolation-migration model. The approach uses summaries of polymorphism data at multiple loci surveyed in a pair of diverging populations or closely related species and, importantly, allows for intralocus recombination. To illustrate its potential, we applied it to extensive polymorphism data from populations and species of apes, whose demographic histories are largely unknown. The isolation-migration model appears to provide a reasonable fit to the data. It suggests that the two chimpanzee species became reproductively isolated in allopatry approximately 850 Kya, while Western and Central chimpanzee populations split approximately 440 Kya but continued to exchange migrants. Similarly, Eastern and Western gorillas and Sumatran and Bornean orangutans appear to have experienced gene flow since their splits approximately 90 and over 250 Kya, respectively.
Sternal approximation for bilateral anterolateral transsternal thoracotomy for lung transplantation.
McGiffin, David C; Alonso, Jorge E; Zorn, George L; Kirklin, James K; Young, K Randall; Wille, Keith M; Leon, Kevin; Hart, Katherine
2005-02-01
The traditional incision for bilateral sequential lung transplantation is the bilateral anterolateral transsternal thoracotomy with approximation of the sternal fragments with interrupted stainless steel wire loops; this technique may be associated with an unacceptable incidence of postoperative sternal disruption causing chronic pain and deformity. Approximation of the sternal ends was achieved with peristernal cables that passed behind the sternum two intercostal spaces above and below the sternal division, which were then passed through metal sleeves in front of the sternum, the cables tensioned, and the sleeves then crimped. Forty-seven patients underwent sternal closure with this method, and satisfactory bone union occurred in all patients. Six patients underwent removal of the peristernal cables: 1 for infection (with satisfactory bone union after the removal of the cables), 3 for cosmetic reasons, 1 during the performance of a median sternotomy for an aortic valve replacement, and 1 in a patient who requested removal before commencing participation in football. This technique of peristernal cable approximation of sternal ends has successfully eliminated the problem of sternal disruption associated with this incision and is a useful alternative for preventing this complication after bilateral lung transplantation.
Attitude of the Saudi community towards heart donation, transplantation, and artificial hearts.
AlHabeeb, Waleed; AlAyoubi, Fakhr; Tash, Adel; AlAhmari, Leenah; AlHabib, Khalid F
2017-07-01
To understand the attitudes of the Saudi population towards heart donation and transplantation. Methods: A survey using a questionnaire addressing attitudes towards organ transplantation and donation was conducted across 18 cities in Saudi Arabia between September 2015 and March 2016. Results: A total of 1250 respondents participated in the survey. Of these, approximately 91% agree with the concept of organ transplantation but approximately 17% do not agree with the concept of heart transplantation, 42.4% of whom reject heart transplants for religious reasons. Only 43.6% of respondents expressed a willingness to donate their heart and approximately 58% would consent to the donation of a relative's organ after death. A total of 59.7% of respondents believe that organ donation is regulated and 31.8% fear that the doctors will not try hard enough to save their lives if they consent to organ donation. Approximately 77% believe the heart is removed while the donor is alive, although the same proportion of respondents thought they knew what brain death meant. Conclusion: In general, the Saudi population seems to accept the concept of transplantation and is willing to donate, but still holds some reservations towards heart donation.
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
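For orientation, the sketch below shows a generic regularized frequency-domain deconvolution of a synthetic tissue curve by an arterial input function; it is not the AFF or ASSF method proposed in the abstract, and all curves and parameters are illustrative assumptions.

```python
# Generic frequency-domain deconvolution of a tissue concentration-time curve
# by an arterial input function (AIF) with a simple Tikhonov-style filter.
import numpy as np

dt = 1.0                                            # sampling interval in seconds
t = np.arange(0, 60, dt)
aif = (t / 5.0) * np.exp(-t / 5.0)                  # synthetic gamma-variate AIF
residue = np.exp(-t / 8.0)                          # true residue function (MTT ~ 8 s)
ctc = dt * np.convolve(aif, residue)[: t.size]      # tissue curve = AIF convolved with residue

n = 2 * t.size                                      # zero-padding against wrap-around
A, C = np.fft.rfft(aif, n), np.fft.rfft(ctc, n)
lam = 0.1 * np.abs(A).max()                         # regularization strength
irf_hat = np.fft.irfft(np.conj(A) * C / (np.abs(A) ** 2 + lam**2), n)[: t.size] / dt

print(round(irf_hat.max(), 3))                      # peak of recovered IRF (~1 here; CBF-scaled in real data)
```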
Constraint reasoning in deep biomedical models.
Cruz, Jorge; Barahona, Pedro
2005-05-01
Deep biomedical models are often expressed by means of differential equations. Despite their expressive power, they are difficult to reason about and to base decisions on, given their non-linearity and the important effects that uncertainty in the data may cause. The objective of this work is to propose a constraint reasoning framework to support safe decisions based on deep biomedical models. The methods used in our approach include the generic constraint propagation techniques for reducing the bounds of uncertainty of the numerical variables, complemented with new constraint reasoning techniques that we developed to handle differential equations. The results of our approach are illustrated in biomedical models for the diagnosis of diabetes, tuning of drug design and epidemiology, where it proved a valuable decision-support tool notwithstanding the uncertainty in the data. The main conclusion that follows from the results is that, in biomedical decision support, constraint reasoning may be a worthwhile alternative to traditional simulation methods, especially when safe decisions are required.
The scenario-based generalization of radiation therapy margins.
Fredriksson, Albin; Bokrantz, Rasmus
2016-03-07
We give a scenario-based treatment plan optimization formulation that is equivalent to planning with geometric margins if the scenario doses are calculated using the static dose cloud approximation. If the scenario doses are instead calculated more accurately, then our formulation provides a novel robust planning method that overcomes many of the difficulties associated with previous scenario-based robust planning methods. In particular, our method protects only against uncertainties that can occur in practice, it gives a sharp dose fall-off outside high dose regions, and it avoids underdosage of the target in 'easy' scenarios. The method shares the benefits of the previous scenario-based robust planning methods over geometric margins for applications where the static dose cloud approximation is inaccurate, such as irradiation with few fields and irradiation with ion beams. These properties are demonstrated on a suite of phantom cases planned for treatment with scanned proton beams subject to systematic setup uncertainty.
Educational Applications of the Dialectic: Theory and Research.
ERIC Educational Resources Information Center
Slife, Brent D.
The field of education has largely ignored the concept of the dialectic, except in the Socratic teaching method, and even there bipolar meaning or reasoning has not been recognized. Mainstream educational psychology bases its assumptions about human reasoning and learning on current demonstrative concepts of information processing and levels of…
A Study of Moral Reasoning Development of Teacher Education Students in Northern Louisiana
ERIC Educational Resources Information Center
Wade, April Mitchell
2015-01-01
This quantitative descriptive study identified the differences in the moral reasoning development levels between undergraduate teacher education students enrolled in methods courses and graduate teacher education students enrolled in an alternative certification education program using the Defining Issues Test-2 instrument. Based on Kohlberg's…
Selten, Ellen M H; Geenen, Rinie; van der Laan, Willemijn H; van der Meulen-Dilling, Roelien G; Schers, Henk J; Nijhof, Marc W; van den Ende, Cornelia H M; Vriezekolk, Johanna E
2017-02-01
To improve patients' use of conservative treatment options of hip and knee OA, in-depth understanding of reasons underlying patients' treatment choices is required. The current study adopted a concept mapping method to thematically structure and prioritize reasons for treatment choice in knee and hip OA from the patients' perspective. Multiple reasons for treatment choices were previously identified using in-depth interviews. In consensus meetings, experts derived 51 representative reasons from the interviews. Thirty-six patients individually sorted the 51 reasons in two card-sorting tasks: one based on content similarity, and one based on importance of reasons. The individual sortings of the first card-sorting task provided input for a hierarchical cluster analysis (squared Euclidian distances, Ward's method). The importance of the reasons and clusters was examined using descriptive statistics. The hierarchical structure of reasons for treatment choices showed a core distinction between two categories of clusters: barriers [subdivided into context (e.g. the healthcare system) and disadvantages] and outcome (subdivided into treatment and personal life). At the lowest level, 15 clusters were identified, of which the clusters Physical functioning, Risks and Prosthesis were considered most important when making a treatment decision for hip or knee OA. Patients' treatment choices in knee and hip OA are guided by contextual barriers, disadvantages of the treatment, outcomes of the treatment and consequences for personal life. The structured overview of reasons can be used to support shared decision-making.
Casuistry as bioethical method: an empirical perspective.
Braunack-Mayer, A
2001-07-01
This paper examines the role that casuistry, a model of bioethical reasoning revived by Jonsen and Toulmin, plays in ordinary moral reasoning. I address the question: 'What is the evidence for contemporary casuistry's claim that everyday moral reasoning is casuistic in nature?' The paper begins with a description of the casuistic method, and then reviews the empirical arguments Jonsen and Toulmin offer to show that every-day moral decision-making is casuistic. Finally, I present the results of qualitative research conducted with 15 general practitioners (GPs) in South Australia, focusing on the ways in which these GP participants used stories and anecdotes in their own moral reasoning. This research found that the GPs interviewed did use a form of casuistry when talking about ethical dilemmas. However, the GPs' homespun casuistry often lacked one central element of casuistic reasoning--clear paradigm cases on which to base comparisons. I conclude that casuistic reasoning does appear to play a role in every-day moral decision-making, but that it is a more subdued role than perhaps casuists would like.
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
Reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design. The allocation process often begins at the conceptual stage. As the system design develops, more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers. Applying reliability allocation techniques without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors, optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
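One common weighting-factor allocation scheme can be sketched as follows: for a series system, each component receives R_i = R_sys^(w_i) with normalized weights w_i, so the allocated reliabilities multiply back to the system target. The weights below are made-up illustrative scores, not values from the report.

```python
# Weighting-factor reliability allocation for a series system (illustrative).
import numpy as np

def allocate_reliability(r_system, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize so the allocations multiply to r_system
    return r_system ** w                  # larger weight -> lower allocated reliability requirement

r_sys = 0.95
weights = [3.0, 1.0, 1.0, 5.0]            # hypothetical relative complexity/criticality scores
r_alloc = allocate_reliability(r_sys, weights)
print(np.round(r_alloc, 4), np.prod(r_alloc))   # the product recovers 0.95
```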
ERIC Educational Resources Information Center
Daghan, Gökhan; Akkoyunlu, Buket
2014-01-01
In this study, Information Technologies teachers' views on and use of performance-based assessment methods (PBAMs) are examined. The aim is to find out which of the PBAMs are used frequently or not used at all, the reasons for preferring these methods, and opinions about their applicability. The study is designed with the phenomenological design, which…
NASA Astrophysics Data System (ADS)
Kruis, Nathanael J. F.
Heat transfer from building foundations varies significantly in all three spatial dimensions and has important dynamic effects at all timescales, from one hour to several years. With the additional consideration of moisture transport, ground freezing, evapotranspiration, and other physical phenomena, the estimation of foundation heat transfer becomes increasingly sophisticated and computationally intensive to the point where accuracy must be compromised for reasonable computation time. The tools currently available to calculate foundation heat transfer are often either too limited in their capabilities to draw meaningful conclusions or too sophisticated to use in common practices. This work presents Kiva, a new foundation heat transfer computational framework. Kiva provides a flexible environment for testing different numerical schemes, initialization methods, spatial and temporal discretizations, and geometric approximations. Comparisons within this framework provide insight into the balance of computation speed and accuracy relative to highly detailed reference solutions. The accuracy and computational performance of six finite difference numerical schemes are verified against established IEA BESTEST test cases for slab-on-grade heat conduction. Of the schemes tested, the Alternating Direction Implicit (ADI) scheme demonstrates the best balance between accuracy, performance, and numerical stability. Kiva features four approaches of initializing soil temperatures for an annual simulation. A new accelerated initialization approach is shown to significantly reduce the required years of presimulation. Methods of approximating three-dimensional heat transfer within a representative two-dimensional context further improve computational performance. A new approximation called the boundary layer adjustment method is shown to improve accuracy over other established methods with a negligible increase in computation time. This method accounts for the reduced heat transfer from concave foundation shapes, which has not been adequately addressed to date. Within the Kiva framework, three-dimensional heat transfer that can require several days to simulate is approximated in two-dimensions in a matter of seconds while maintaining a mean absolute deviation within 3%.
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
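A hedged sketch of the weighted-capacity-factor approximation mentioned above: the capacity value is taken as the plant's output during the highest-load hours, weighted by load and normalized by nameplate capacity. The hourly load and PV profiles below are synthetic stand-ins.

```python
# Weighted-capacity-factor approximation of capacity value (illustrative data).
import numpy as np

rng = np.random.default_rng(2)
hours = 8760
load = 1000 + 300 * rng.random(hours)                          # system load (MW), synthetic
pv_output = 80 * np.clip(rng.normal(0.5, 0.3, hours), 0, 1)    # PV generation (MW), synthetic
pv_capacity = 80.0

top = np.argsort(load)[-100:]                  # 100 highest-load hours
weights = load[top] / load[top].sum()          # weight by load level
capacity_value = np.sum(weights * pv_output[top]) / pv_capacity
print(round(capacity_value, 3))                # fraction of nameplate capacity
```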
Time-Dependent Hartree-Fock Approach to Nuclear Pasta at Finite Temperature
NASA Astrophysics Data System (ADS)
Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.
2013-03-01
We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Elman, Howard; Shuttleworth, Robert R.
2007-04-01
In recent years, considerable effort has been placed on developing efficient and robust solution algorithms for the incompressible Navier-Stokes equations based on preconditioned Krylov methods. These include physics-based methods, such as SIMPLE, and purely algebraic preconditioners based on the approximation of the Schur complement. All these techniques can be represented as approximate block factorization (ABF) type preconditioners. The goal is to decompose the application of the preconditioner into simplified sub-systems in which scalable multi-level type solvers can be applied. In this paper we develop a taxonomy of these ideas based on an adaptation of a generalized approximate factorization of the Navier-Stokes system first presented in [25]. This taxonomy illuminates the similarities and differences among these preconditioners and the central role played by efficient approximation of certain Schur complement operators. We then present a parallel computational study that examines the performance of these methods and compares them to an additive Schwarz domain decomposition (DD) algorithm. Results are presented for two and three-dimensional steady state problems for enclosed domains and inflow/outflow systems on both structured and unstructured meshes. The numerical experiments are performed using MPSalsa, a stabilized finite element code.
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
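The partial-fraction idea can be illustrated with the lowest-order Pade approximant of the exponential, exp(z) ≈ (2 + z)/(2 - z) = -1 + 4/(2 - z), which turns exp(A)v into a single shifted solve; higher-order Pade or Chebyshev approximants decompose into several such independent solves that can run in parallel. The test matrix below is an arbitrary small-norm stand-in.

```python
# Partial-fraction evaluation of a rational approximation to the matrix exponential.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 50
A = -0.05 * (np.eye(n) + 0.1 * rng.standard_normal((n, n)))   # small-norm test matrix
v = rng.standard_normal(n)

# (1,1) Pade: exp(A) v ~ -v + 4 (2I - A)^{-1} v; higher orders give several
# independent shifted solves of this form.
approx = -v + 4.0 * np.linalg.solve(2.0 * np.eye(n) - A, v)

print(np.linalg.norm(approx - expm(A) @ v) / np.linalg.norm(v))  # small for small ||A||
```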
Nano-material and method of fabrication
Menchhofer, Paul A; Seals, Roland D; Howe, Jane Y; Wang, Wei
2015-02-03
A fluffy nano-material and method of manufacture are described. At 2000× magnification the fluffy nanomaterial has the appearance of raw, uncarded wool, with individual fiber lengths ranging from approximately four microns to twenty microns. Powder-based nanocatalysts are dispersed in the fluffy nanomaterial. The production of fluffy nanomaterial typically involves flowing about 125 cc/min of organic vapor at a pressure of about 400 torr over powder-based nano-catalysts for a period of time that may range from approximately thirty minutes to twenty-four hours.
Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni
2011-08-01
The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.
Missing value imputation strategies for metabolomics data.
Armitage, Emily Grace; Godzien, Joanna; Alonso-Herranz, Vanesa; López-Gonzálvez, Ángeles; Barbas, Coral
2015-12-01
Missing values can arise for different reasons, and depending on their origin they should be considered differently and dealt with in different ways. In this research, four methods of imputation have been compared with respect to revealing their effects on the normality and variance of data, on statistical significance and on the approximation of a suitable threshold to accept missing data as truly missing. Additionally, the effects of different strategies for controlling familywise error rate or false discovery, and how they work with the different strategies for missing value imputation, have been evaluated. Missing values were found to affect the normality and variance of data, and k-means nearest neighbour imputation was the best method tested for restoring these. Bonferroni correction was the best method for maximizing true positives and minimizing false positives, and it was observed that as low as 40% missing data could be truly missing. The range between 40 and 70% missing values was defined as a "gray area" and therefore a strategy has been proposed that provides a balance between the optimal imputation strategy, which was k-means nearest neighbour, and the best approximation of positioning real zeros. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
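A minimal sketch of nearest-neighbour style imputation of the kind compared above, using scikit-learn's KNNImputer on a synthetic feature table with values removed at random (the data and missingness rate are illustrative assumptions).

```python
# Nearest-neighbour imputation sketch on a synthetic metabolomics-like table.
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(4)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(40, 10))   # samples x metabolites (synthetic)
mask = rng.random(X.shape) < 0.2                         # ~20% missing at random
X_missing = X.copy()
X_missing[mask] = np.nan

X_imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)
print(np.mean(np.abs(X_imputed[mask] - X[mask]) / X[mask]))  # mean relative imputation error
```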
Convergence analysis of surrogate-based methods for Bayesian inverse problems
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Yuan-Xiang
2017-12-01
The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples focusing on the use of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate the Bayesian inference approach. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.
Case-based medical informatics
Pantazi, Stefan V; Arocha, José F; Moehr, Jochen R
2004-01-01
Background The "applied" nature distinguishes applied sciences from theoretical sciences. To emphasize this distinction, we begin with a general, meta-level overview of the scientific endeavor. We introduce the knowledge spectrum and four interconnected modalities of knowledge. In addition to the traditional differentiation between implicit and explicit knowledge we outline the concepts of general and individual knowledge. We connect general knowledge with the "frame problem," a fundamental issue of artificial intelligence, and individual knowledge with another important paradigm of artificial intelligence, case-based reasoning, a method of individual knowledge processing that aims at solving new problems based on the solutions to similar past problems. We outline the fundamental differences between Medical Informatics and theoretical sciences and propose that Medical Informatics research should advance individual knowledge processing (case-based reasoning) and that natural language processing research is an important step towards this goal that may have ethical implications for patient-centered health medicine. Discussion We focus on fundamental aspects of decision-making, which connect human expertise with individual knowledge processing. We continue with a knowledge spectrum perspective on biomedical knowledge and conclude that case-based reasoning is the paradigm that can advance towards personalized healthcare and that can enable the education of patients and providers. We center the discussion on formal methods of knowledge representation around the frame problem. We propose a context-dependent view on the notion of "meaning" and advocate the need for case-based reasoning research and natural language processing. In the context of memory based knowledge processing, pattern recognition, comparison and analogy-making, we conclude that while humans seem to naturally support the case-based reasoning paradigm (memory of past experiences of problem-solving and powerful case matching mechanisms), technical solutions are challenging. Finally, we discuss the major challenges for a technical solution: case record comprehensiveness, organization of information on similarity principles, development of pattern recognition and solving ethical issues. Summary Medical Informatics is an applied science that should be committed to advancing patient-centered medicine through individual knowledge processing. Case-based reasoning is the technical solution that enables a continuous individual knowledge processing and could be applied providing that challenges and ethical issues arising are addressed appropriately. PMID:15533257
Magnesium stearine production via direct reaction of palm stearine and magnesium hydroxide
NASA Astrophysics Data System (ADS)
Pratiwi, M.; Ylitervo, P.; Pettersson, A.; Prakoso, T.; Soerawidjaja, T. H.
2017-06-01
Fossil oil production cannot keep pace with the increase in consumption; for this reason, renewable alternative energy sources are needed. One method of producing hydrocarbons is decarboxylation of fatty acids. Vegetable oils and fats are the greatest source of fatty acids, so they can be used as raw material for biohydrocarbon production. Previous research has shown that, on heating, base soaps of divalent metals decarboxylate and produce hydrocarbons. This study investigates the process and characterization of magnesium soaps prepared from palm stearine by the Blachford method. The metal soaps are synthesized by direct reaction of palm stearine and magnesium hydroxide to produce magnesium stearine and magnesium stearine base soaps at 140-180°C and 6-10 bar for 3-6 hours. The operating conditions that successfully yielded metal soaps were 180°C and 10 bar for 3-6 hours. These metal soaps were then compared with commercial magnesium stearate. Based on Thermogravimetry Analysis (TGA) results, the decomposition temperature of all the metal soaps was 250°C. Scanning Electron Microscope with Energy Dispersive X-ray (SEM-EDX) analysis showed traces of sodium sulphate in the commercial magnesium stearate and of magnesium hydroxide in both types of magnesium stearine soaps. Analysis by Microwave Plasma-Atomic Emission Spectrometry (MP-AES) showed that the magnesium content of the magnesium stearine is close to that of commercial magnesium stearate and lower than that of the magnesium stearine base soaps. These experiments suggest that the presented saponification process could produce metal soaps comparable with commercial metal soaps.
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Scientific reasoning profile of junior secondary school students on the concept of static fluid
NASA Astrophysics Data System (ADS)
Mariana, N.; Siahaan, P.; Utari, S.
2018-05-01
Scientific reasoning is one of the most important abilities. This study aims to determine the profile of scientific reasoning of junior high school students on the concept of static fluid. The research uses a descriptive method with a quantitative approach to characterize the scientific reasoning of One Roof (Satu Atap) Junior Secondary School students in Kotabaru Reteh, Riau. Data were collected with a test of scientific reasoning. Scientific reasoning capability refers to Furtak's EBR (Evidence Based Reasoning) indicator, which contains the components of claims, data, evidence, and rules. The results obtained for each element of scientific reasoning are 35% for claims, 23% for data, 21% for evidence and 17% for rules. The conclusion of this research is that the scientific reasoning of Satu Atap Junior Secondary School students in Kotabaru Reteh, Riau Province is still in the low category.
Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.
DuVall, Scott L; Kerber, Richard A; Thomas, Alun
2010-02-01
Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
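The core idea can be sketched as follows: each field's log-likelihood-ratio weight is interpolated between its full-agreement and full-disagreement values according to an approximate string similarity, instead of an all-or-nothing comparison. The m/u probabilities, records, and similarity function below are illustrative assumptions, not values from the paper.

```python
# Fellegi-Sunter style field weights scaled by an approximate comparator.
from difflib import SequenceMatcher
from math import log2

def field_weight(a, b, m_prob, u_prob):
    sim = SequenceMatcher(None, a.lower(), b.lower()).ratio()   # 0..1 approximate similarity
    agree_w = log2(m_prob / u_prob)                  # weight for full agreement
    disagree_w = log2((1 - m_prob) / (1 - u_prob))   # weight for full disagreement
    # interpolate between the two extremes according to the similarity score
    return disagree_w + sim * (agree_w - disagree_w)

record_a = {"last": "Johansen", "first": "Kathryn", "zip": "84132"}
record_b = {"last": "Johanson", "first": "Katherine", "zip": "84132"}
mu = {"last": (0.95, 0.01), "first": (0.9, 0.02), "zip": (0.9, 0.05)}   # hypothetical m/u values

score = sum(field_weight(record_a[f], record_b[f], *mu[f]) for f in mu)
print(round(score, 2))   # compare against a matching cutoff to classify the pair
```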
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem. The reason for this is that for S = 2 the problem has a P-complexity. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information, in ambiguous situations (e.g. satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations. This means that the algorithm is restricted to orbits that are described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.
A Variational Bayes Genomic-Enabled Prediction Model with Genotype × Environment Interaction
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Montesinos-López, José Cricelio; Luna-Vázquez, Francisco Javier; Salinas-Ruiz, Josafhat; Herrera-Morales, José R.; Buenrostro-Mariscal, Raymundo
2017-01-01
There are Bayesian and non-Bayesian genomic models that take into account G×E interactions. However, the computational cost of implementing Bayesian models is high, and becomes almost impossible when the number of genotypes, environments, and traits is very large, while, in non-Bayesian models, there are often important and unsolved convergence problems. The variational Bayes method is popular in machine learning, and, by approximating the probability distributions through optimization, it tends to be faster than Markov Chain Monte Carlo methods. For this reason, in this paper, we propose a new genomic variational Bayes version of the Bayesian genomic model with G×E using half-t priors on each standard deviation (SD) term to guarantee highly noninformative and posterior inferences that are not sensitive to the choice of hyper-parameters. We show the complete theoretical derivation of the full conditional and the variational posterior distributions, and their implementations. We used eight experimental genomic maize and wheat data sets to illustrate the new proposed variational Bayes approximation, and compared its predictions and implementation time with a standard Bayesian genomic model with G×E. Results indicated that prediction accuracies are slightly higher in the standard Bayesian model with G×E than in its variational counterpart, but, in terms of computation time, the variational Bayes genomic model with G×E is, in general, 10 times faster than the conventional Bayesian genomic model with G×E. For this reason, the proposed model may be a useful tool for researchers who need to predict and select genotypes in several environments. PMID:28391241
Navarro, B; Daròs, J A; Flores, R
1996-01-01
Two PCR-based methods are described for obtaining clones of small circular RNAs of unknown sequence and for which only minute amounts are available. To avoid introducing any assumption about the RNA sequence, synthesis of the cDNAs is initiated with random primers. The cDNA population is then PCR-amplified using a primer whose sequence is present at both sides of the cDNAs, since they have been obtained with random hexamers and then a linker with the sequence of the PCR primer has been ligated to their termini, or because the cDNAs have been synthesized with an oligonucleotide that contains the sequence of the PCR primer at its 5' end and six randomized positions at its 3' end. The procedures need only approximately 50 ng of purified RNA template. The reasons for the emergence of cloning artifacts and precautions to avoid them are discussed.
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
Important Variation in Vibrational Properties of LiFePO4 and FePO4 Induced by Magnetism
Seifitokaldani, Ali; Gheribi, Aïmen E.; Phan, Anh Thu; Chartrand, Patrice; Dollé, Mickaël
2016-01-01
A new thermodynamically self-consistent (TSC) method, based on the quasi-harmonic approximation (QHA), is used to obtain the Debye temperatures of LiFePO4 (LFP) and FePO4 (FP) from available experimental specific heat capacities for a wide temperature range. The calculated Debye temperatures show an interesting critical and peculiar behavior so that a steep increase in the Debye temperatures is observed by increasing the temperature. This critical behavior is fitted by the critical function and the adjusted critical temperatures are very close to the magnetic phase transition temperatures in LFP and FP. Hence, the critical behavior of the Debye temperatures is correlated with the magnetic phase transitions in these compounds. Our first-principle calculations support our conjecture that the change in electronic structures, i.e. electron density of state and electron localization function, and consequently the change in thermophysical properties due to the magnetic transition may be the reason for the observation of this peculiar behavior of the Debye temperatures. PMID:27604551
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.
Gilra, Aditya; Gerstner, Wulfram
2017-11-27
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
Robustness of survival estimates for radio-marked animals
Bunck, C.M.; Chen, C.-L.
1992-01-01
Telemetry techniques are often used to study the survival of birds and mammals, particularly when mark-recapture approaches are unsuitable. Both parametric and nonparametric methods to estimate survival have been developed or modified from other applications. An implicit assumption in these approaches is that the probability of re-locating an animal with a functioning transmitter is one. A Monte Carlo study was conducted to determine the bias and variance of the Kaplan-Meier estimator and of an estimator based on the assumption of constant hazard, and to evaluate the performance of the two-sample tests associated with each. Modifications of each estimator which allow a re-location probability of less than one are described and evaluated. Generally the unmodified estimators were biased but had lower variance. At low sample sizes all estimators performed poorly. Under the null hypothesis, the distribution of all test statistics reasonably approximated the null distribution when survival was low but not when it was high. The power of the two-sample tests was similar.
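For reference, a small sketch of the Kaplan-Meier product-limit estimator with right censoring (a lost transmitter or the end of the study counts as censored rather than dead); the survival times below are made up.

```python
# Kaplan-Meier product-limit estimator with right censoring (illustrative data).
import numpy as np

def kaplan_meier(times, died):
    """times: time of death or censoring; died: 1 = death observed, 0 = censored."""
    times, died = np.asarray(times, float), np.asarray(died, int)
    surv = 1.0
    curve = []
    for t in np.unique(times[died == 1]):              # event times only
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (died == 1))
        surv *= 1.0 - deaths / at_risk                 # product-limit update
        curve.append((t, surv))
    return curve

times = [3, 5, 5, 8, 12, 12, 15, 20, 20, 20]
died  = [1, 1, 0, 1,  0,  1,  1,  0,  1,  0]
for t, s in kaplan_meier(times, died):
    print(t, round(s, 3))
```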
Errors Using Observational Methods for Ergonomics Assessment in Real Practice.
Diego-Mas, Jose-Antonio; Alcaide-Marzal, Jorge; Poveda-Bautista, Rocio
2017-12-01
The degree to which practitioners use observational methods for musculoskeletal disorder risk assessment correctly was evaluated. Ergonomics assessment is a key issue for the prevention and reduction of work-related musculoskeletal disorders in workplaces. Observational assessment methods appear to be better matched to the needs of practitioners than direct measurement methods, and for this reason they are the most widely used techniques in real work situations. Despite the simplicity of observational methods, those responsible for assessing risks using these techniques should have some experience and know-how in order to be able to use them correctly. We analyzed 442 risk assessments of actual jobs carried out by 290 professionals from 20 countries to determine their reliability. The results show that approximately 30% of the assessments performed by practitioners had errors. In 13% of the assessments, the errors were severe and completely invalidated the results of the evaluation. Despite the simplicity of observational methods, approximately 1 out of 3 assessments conducted by practitioners in actual work situations do not adequately evaluate the level of potential musculoskeletal disorder risk. This study reveals a problem that suggests greater effort is needed to ensure that practitioners possess better knowledge of the techniques used to assess work-related musculoskeletal disorder risks, and that laws and regulations should be stricter as regards the qualifications and skills required of professionals.
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.