Sample records for approximate reasoning based

  1. On the integration of reinforcement learning and approximate reasoning for control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    The author discusses the importance of strengthening the knowledge representation characteristic of reinforcement learning techniques using methods such as approximate reasoning. The ARIC (approximate reasoning-based intelligent control) architecture is an example of such a hybrid approach in which the fuzzy control rules are modified (fine-tuned) using reinforcement learning. ARIC also demonstrates that it is possible to start with an approximately correct control knowledge base and learn to refine this knowledge through further experience. On the other hand, techniques such as the TD (temporal difference) algorithm and Q-learning establish stronger theoretical foundations for their use in adaptive control and also in stability analysis of hybrid reinforcement learning and approximate reasoning-based controllers.
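The fine-tuning loop described above can be sketched in a few lines. This is a hedged illustration, not the ARIC implementation: the rule strengths, firing degrees, and value estimates are invented placeholders, and the update is a plain TD(0)-style correction applied in proportion to each rule's firing degree.

```python
# Minimal sketch (not ARIC itself): fuzzy rule strengths fine-tuned by
# a temporal-difference (TD) error signal.

def td_update(rule_strengths, firings, reward, v_now, v_next,
              alpha=0.1, gamma=0.9):
    """Nudge each rule's strength in proportion to how strongly it
    fired and to the TD error r + gamma*V(s') - V(s)."""
    td_error = reward + gamma * v_next - v_now
    return [w + alpha * td_error * f
            for w, f in zip(rule_strengths, firings)]

strengths = [0.5, 0.5, 0.5]        # hypothetical initial rule strengths
firings = [0.8, 0.2, 0.0]          # degrees to which each rule fired
strengths = td_update(strengths, firings, reward=1.0, v_now=0.4, v_next=0.6)
```

Rules that did not fire are left untouched, which is what lets an approximately correct initial knowledge base be refined selectively through experience.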

  2. Artificial neural networks and approximate reasoning for intelligent control in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1991-01-01

    A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.

  3. LinguisticBelief: a Java application for linguistic evaluation using belief, fuzzy sets, and approximate reasoning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is composed of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code, and the deployment package for installation on client machines.
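The belief/plausibility bracketing the abstract refers to can be illustrated with a minimal Dempster-Shafer-style sketch. This is not LinguisticBelief's code; the frame and mass values are invented for illustration.

```python
# Hedged sketch of the belief/plausibility measure: mass is assigned to
# subsets of a frame of discernment, and Bel(A) <= Pl(A) bracket the
# (unknown) probability of A.

def belief(mass, a):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for b, m in mass.items() if b <= a)

def plausibility(mass, a):
    """Pl(A): total mass not contradicting A (intersecting A)."""
    return sum(m for b, m in mass.items() if b & a)

frame = frozenset({"low", "medium", "high"})
mass = {                            # hypothetical basic mass assignment
    frozenset({"low"}): 0.5,
    frozenset({"low", "medium"}): 0.3,
    frame: 0.2,                     # residual uncertainty
}
a = frozenset({"low"})
bel, pl = belief(mass, a), plausibility(mass, a)
```

The gap between `bel` and `pl` is exactly the propagated uncertainty: mass on `{"low","medium"}` and on the whole frame is plausible for "low" without being committed to it.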

  4. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  5. Toward Webscale, Rule-Based Inference on the Semantic Web Via Data Parallelism

    DTIC Science & Technology

    2013-02-01

Another work distinct from its peers is the work on approximate reasoning by Rudolph et al. [34] in which multiple inference systems were combined not...Workshop Scalable Semantic Web Knowledge Base Systems, 2010, pp. 17-31. [34] S. Rudolph, T. Tserendorj, and P. Hitzler, "What is approximate reasoning...2013] [55] M. Duerst and M. Suignard. (2005, Jan.). RFC 3987 - internationalized resource identifiers (IRIs). IETF. [Online]. Available: http

  6. An experiment-based comparative study of fuzzy logic control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Chen, Yung-Yaw; Lee, Chuen-Chein; Murugesan, S.; Jang, Jyh-Shing

    1989-01-01

An approach is presented to the control of a dynamic physical system through the use of approximate reasoning. The approach has been implemented in a program named POLE, and the authors have successfully built a prototype hardware system to solve the cart-pole balancing problem in real time. The approach provides a complementary alternative to the conventional analytical control methodology and is of substantial use when a precise mathematical model of the process being controlled is not available. A set of criteria for comparing controllers based on approximate reasoning with those based on conventional control schemes is furnished.

  7. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
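A minimal sketch of the linguistic-description idea, with invented membership breakpoints (the paper's actual functions are not reproduced here): distance terms such as "near" become fuzzy sets over metric distance, so an approximate statement like "the landmark is near" carries a degree of truth rather than a Boolean.

```python
# Illustrative only: trapezoidal membership functions for linguistic
# distance terms. All breakpoints below are invented.

def trapezoid(x, a, b, c, d):
    """Membership rising on [a,b], flat at 1 on [b,c], falling on [c,d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def near(dist_m):       # fully "near" below 5 m, fades out by 20 m
    return trapezoid(dist_m, -1.0, 0.0, 5.0, 20.0)

def far(dist_m):        # starts at 15 m, fully "far" beyond 50 m
    return trapezoid(dist_m, 15.0, 50.0, 1e9, 2e9)

# A landmark 10 m away is partly near, not yet far.
mu_near, mu_far = near(10.0), far(10.0)
```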

  8. Development of Probabilistic and Possebilistic Approaches to Approximate Reasoning and Its Applications

    DTIC Science & Technology

    1989-10-31

...AI (circumscription, non-monotonic reasoning, and default reasoning), our approach is based on fuzzy logic and, more specifically, on the theory of

  9. Fuzzy Logic for Incidence Geometry

    PubMed Central

    2016-01-01

The paper presents a mathematical framework for approximate geometric reasoning with extended objects in the context of geography, in which all entities and their relationships are described in human language. These entities could be labelled by commonly used names of landmarks, water areas, and so forth. Unlike single points given in Cartesian coordinates, these geographic entities are extended in space and often loosely defined, yet people easily perform spatial reasoning with extended geographic objects "as if they were points." Unfortunately, to date, geographic information systems (GIS) lack the capability of geometric reasoning with extended objects. The aim of the paper is to present a mathematical apparatus for approximate geometric reasoning with extended objects that is usable in GIS. In the paper we discuss the fuzzy logic of Aliev and Tserkovny (2011) as a reasoning system for the geometry of extended objects, as well as a basis for fuzzification of the axioms of incidence geometry. The same fuzzy logic was used for fuzzification of Euclid's first postulate. A fuzzy equivalence relation, "extended lines sameness," is introduced. For its approximation we also utilize a fuzzy conditional inference, which is based on the proposed fuzzy "degree of indiscernibility" and "discernibility measure" of extended points. PMID:27689133

  10. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata through a process of precisiation. As an alternative to this approach, it has been suggested that, rather than regarding human reasoning processes as approximations to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such intersections without harm. The details of the mental processes that enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. It is nevertheless desirable to incorporate such approximate reasoning techniques into our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.

  11. Approximate reasoning-based learning and control for proximity operations and docking in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Jani, Yashvant; Lea, Robert N.

    1991-01-01

A recently proposed hybrid neural-network and fuzzy-logic-control architecture is applied to a fuzzy logic controller developed for attitude control of the Space Shuttle. A model using reinforcement learning, which learns from past experience to fine-tune its knowledge base, is proposed. The two main components of this approximate reasoning-based intelligent control (ARIC) model, an action-state evaluation network and an action selection network, are described, as is the Space Shuttle attitude controller. An ARIC model for the controller is presented, and it is noted that the input layer in each network includes three nodes representing the angle error, the angle error rate, and a bias. Preliminary results indicate that the controller can hold the pitch rate within its desired deadband and begins to use the jets at about 500 s into the run.

  12. Application of plausible reasoning to AI-based control systems

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid; Lum, Henry, Jr.

    1987-01-01

    Some current approaches to plausible reasoning in artificial intelligence are reviewed and discussed. Some of the most significant recent advances in plausible and approximate reasoning are examined. A synergism among the techniques of uncertainty management is advocated, and brief discussions on the certainty factor approach, probabilistic approach, Dempster-Shafer theory of evidence, possibility theory, linguistic variables, and fuzzy control are presented. Some extensions to these methods are described, and the applications of the methods are considered.

  13. Microwave Passive Ground-Based Retrievals of Cloud and Rain Liquid Water Path in Drizzling Clouds: Challenges and Possibilities

    DOE PAGES

    Cadeddu, Maria P.; Marchand, Roger; Orlandi, Emiliano; ...

    2017-08-11

Satellite and ground-based microwave radiometers are routinely used for the retrieval of liquid water path (LWP) under all atmospheric conditions. The retrieval of water vapor and LWP from ground-based radiometers during rain has proved to be a difficult challenge for two principal reasons: the inadequacy of the nonscattering approximation in precipitating clouds and the deposition of rain drops on the instrument's radome. In this paper, we combine model computations and real ground-based, zenith-viewing passive microwave radiometer brightness temperature measurements to investigate how total, cloud, and rain LWP retrievals are affected by assumptions on the cloud drop size distribution (DSD) and under which conditions a nonscattering approximation can be considered reasonably accurate. Results show that, until the drop effective diameter exceeds approximately 200 μm, a nonscattering approximation still yields accurate results at frequencies below 90 GHz. For larger drop sizes, it is shown that higher microwave frequencies contain useful information that can be used to separate cloud and rain LWP, provided that the vertical distribution of hydrometeors, as well as the DSD, is reasonably known. The choice of the DSD parameters becomes important to ensure retrievals that are consistent with the measurements. A physical retrieval is tested on a synthetic data set and is then used to retrieve total, cloud, and rain LWP from radiometric measurements during two drizzling cases at the Atmospheric Radiation Measurement (ARM) Eastern North Atlantic site.
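The ~200 μm threshold is consistent with a back-of-envelope size-parameter check; the sketch below is an illustration of that scaling argument, not the paper's radiative transfer computation. Scattering stays weak while the size parameter x = πD/λ remains well below 1.

```python
# Back-of-envelope check of why ~200 um drops at <90 GHz stay near the
# nonscattering regime: the size parameter x = pi*D/lambda is small.
import math

C = 299_792_458.0  # speed of light, m/s

def size_parameter(diameter_m, freq_hz):
    wavelength = C / freq_hz
    return math.pi * diameter_m / wavelength

x_90 = size_parameter(200e-6, 90e9)    # well below 1: scattering weak
x_183 = size_parameter(200e-6, 183e9)  # grows linearly with frequency
```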

  14. Microwave Passive Ground-Based Retrievals of Cloud and Rain Liquid Water Path in Drizzling Clouds: Challenges and Possibilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadeddu, Maria P.; Marchand, Roger; Orlandi, Emiliano

Satellite and ground-based microwave radiometers are routinely used for the retrieval of liquid water path (LWP) under all atmospheric conditions. The retrieval of water vapor and LWP from ground-based radiometers during rain has proved to be a difficult challenge for two principal reasons: the inadequacy of the nonscattering approximation in precipitating clouds and the deposition of rain drops on the instrument's radome. In this paper, we combine model computations and real ground-based, zenith-viewing passive microwave radiometer brightness temperature measurements to investigate how total, cloud, and rain LWP retrievals are affected by assumptions on the cloud drop size distribution (DSD) and under which conditions a nonscattering approximation can be considered reasonably accurate. Results show that, until the drop effective diameter exceeds approximately 200 μm, a nonscattering approximation still yields accurate results at frequencies below 90 GHz. For larger drop sizes, it is shown that higher microwave frequencies contain useful information that can be used to separate cloud and rain LWP, provided that the vertical distribution of hydrometeors, as well as the DSD, is reasonably known. The choice of the DSD parameters becomes important to ensure retrievals that are consistent with the measurements. A physical retrieval is tested on a synthetic data set and is then used to retrieve total, cloud, and rain LWP from radiometric measurements during two drizzling cases at the Atmospheric Radiation Measurement (ARM) Eastern North Atlantic site.

  15. Module Extraction for Efficient Object Queries over Ontologies with Large ABoxes

    PubMed Central

    Xu, Jia; Shironoshita, Patrick; Visser, Ubbo; John, Nigel; Kabuka, Mansur

    2015-01-01

The extraction of logically independent fragments out of an ontology ABox can be useful for solving the tractability problem of querying ontologies with large ABoxes. In this paper, we propose a formal definition of an ABox module that guarantees complete preservation of facts about a given set of individuals and thus can be reasoned over independently with respect to the ontology TBox. With ABox modules of this type, isolated or distributed (parallel) ABox reasoning becomes feasible, and more efficient data retrieval from ontology ABoxes can be attained. To compute such an ABox module, we present a theoretical approach and also an approximation for SHIQ ontologies. Evaluation of the module approximation on different types of ontologies shows that, on average, extracted ABox modules are significantly smaller than the entire ABox, and that ontology reasoning based on ABox modules can be significantly faster. PMID:26848490
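As a rough illustration of the module idea (a naive reachability closure, not the paper's SHIQ-aware extraction algorithm; all individuals and roles below are invented): starting from a seed set of individuals, gather every assertion connected to them and ignore the rest of the ABox.

```python
# Naive sketch: an "ABox module" as the assertions reachable from a
# seed set of individuals via role assertions.
from collections import deque

def abox_module(role_assertions, concept_assertions, seeds):
    """role_assertions: (subject, role, object) triples;
    concept_assertions: (individual, concept) pairs."""
    reached, queue = set(seeds), deque(seeds)
    module = set()
    while queue:
        ind = queue.popleft()
        for s, r, o in role_assertions:
            if s == ind or o == ind:
                module.add((s, r, o))
                for nxt in (s, o):
                    if nxt not in reached:
                        reached.add(nxt)
                        queue.append(nxt)
    module |= {(i, c) for i, c in concept_assertions if i in reached}
    return module

roles = [("a", "knows", "b"), ("b", "knows", "c"), ("x", "knows", "y")]
concepts = [("a", "Person"), ("c", "Person"), ("x", "Robot")]
mod = abox_module(roles, concepts, {"a"})   # everything about a, b, c
```

The disconnected facts about `x` and `y` are excluded, which is the intuition behind reasoning over a module instead of the whole ABox; the paper's definition adds the conditions needed for this to be logically complete.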

  16. Advanced Methods of Approximate Reasoning

    DTIC Science & Technology

    1990-11-30

about Knowledge and Action. Technical Note 191, Menlo Park, California: SRI International, 1980. [26] N.J. Nilsson. Probabilistic logic. Artificial...reasoning. Artificial Intelligence, 13:81-132, 1980. [30] R. Reiter. On closed world data bases. In H. Gallaire and J. Minker, editors, Logic and Data...especially grateful to Dr. Abraham Waksman of the Air Force Office of Scientific Research and Dr. David Hislop of the Army Research Office for their

  17. Using new aggregation operators in rule-based intelligent control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Chen, Yung-Yaw; Yager, Ronald R.

    1990-01-01

    A new aggregation operator is applied in the design of an approximate reasoning-based controller. The ordered weighted averaging (OWA) operator has the property of lying between the And function and the Or function used in previous fuzzy set reasoning systems. It is shown here that, by applying OWA operators, more generalized types of control rules, which may include linguistic quantifiers such as Many and Most, can be developed. The new aggregation operators, as tested in a cart-pole balancing control problem, illustrate improved performance when compared with existing fuzzy control aggregation schemes.
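The OWA operator itself is compact enough to sketch. In the snippet below the weight vectors are invented examples; the point is that the weights attach to the *sorted* arguments, so the operator interpolates between And (min) and Or (max), with intermediate weightings playing the role of quantifiers like Most.

```python
# Ordered weighted averaging (OWA): weights apply to the arguments
# after sorting them in descending order.

def owa(weights, values):
    assert abs(sum(weights) - 1.0) < 1e-9   # weights must sum to 1
    ordered = sorted(values, reverse=True)  # largest first
    return sum(w * v for w, v in zip(weights, ordered))

vals = [0.3, 0.9, 0.6]                  # e.g., rule satisfaction degrees
or_like  = owa([1.0, 0.0, 0.0], vals)   # all weight on the max: pure Or
and_like = owa([0.0, 0.0, 1.0], vals)   # all weight on the min: pure And
most     = owa([0.2, 0.5, 0.3], vals)   # an intermediate, "Most"-style mix
```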

  18. Fuzziness In Approximate And Common-Sense Reasoning In Knowledge-Based Robotics Systems

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1987-10-01

Fuzzy functions, a major key to inexact reasoning, are described as they are applied to the fuzzification of robot coordinate systems. Linguistic variables, a means of labelling ranges in fuzzy sets, are used as a computationally pragmatic means of representing spatialization metaphors, themselves an extraordinarily rich basis for understanding concepts in orientational terms. Complex plans may be abstracted and simplified in a system that promotes conceptual planning by means of the orientational representation.

  19. Using fuzzy logic to integrate neural networks and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.

  20. Numerical Boundary Conditions for Specular Reflection in a Level-Sets-Based Wavefront Propagation Method

    DTIC Science & Technology

    2012-12-01

One begins with the Eikonal equation for the acoustic phase function S(t,x) as derived from the geometric acoustics (high-frequency) approximation to...zb(x) is smooth and reasonably approximated as piecewise linear. The time-domain ray (characteristic) equations for the Eikonal equation are ẋ(t) = c...travel time is affected, which is more physically relevant than global error in φ, since it provides the phase information for the Eikonal equation (2.1

  1. School-University Partnerships in Action: Concepts, Cases,

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth A., Ed.; Goodlad, John I., Ed.

    A general paradigm for ideal collaboration between schools and universities is proposed. It is based on a mutually collaborative arrangement between equal partners working together to meet self-interests while solving common problems. It is suggested that reasonable approximations to this ideal have great potential to effect significant…

  2. Galerkin approximation for inverse problems for nonautonomous nonlinear distributed systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Reich, Simeon; Rosen, I. G.

    1988-01-01

An abstract framework and convergence theory are developed for Galerkin approximation for inverse problems involving the identification of nonautonomous nonlinear distributed parameter systems. A set of relatively easily verified conditions is provided that is sufficient to guarantee the existence of optimal solutions and their approximation by a sequence of solutions to approximating finite-dimensional identification problems. The approach is based on the theory of monotone operators in Banach spaces and is applicable to a reasonably broad class of nonlinear distributed systems. Operator-theoretic and variational techniques are used to establish a fundamental convergence result. An example involving evolution systems with dynamics described by nonstationary quasilinear elliptic operators, along with some applications, is presented and discussed.
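The Galerkin idea, reduced to its simplest linear instance (far simpler than the nonlinear systems treated in the paper; the model problem and node count below are invented for illustration): project the equation onto a finite-dimensional basis and solve the resulting small system.

```python
# Toy Galerkin illustration: approximate -u''(x) = f(x), u(0)=u(1)=0
# with piecewise-linear "hat" functions, giving the tridiagonal system
# (1/h)*tridiag(-1, 2, -1) u = h*f at the interior nodes.

def galerkin_poisson(f, n):
    """Solve -u'' = f on (0,1), u(0)=u(1)=0, with n interior nodes."""
    h = 1.0 / (n + 1)
    a = [-1.0 / h] * n          # sub-diagonal of stiffness matrix
    b = [2.0 / h] * n           # diagonal
    c = [-1.0 / h] * n          # super-diagonal
    d = [h * f((i + 1) * h) for i in range(n)]  # load vector
    for i in range(1, n):       # Thomas algorithm: forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n               # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# f = 2 has the exact solution u(x) = x(1-x), reproduced at the nodes.
u = galerkin_poisson(lambda x: 2.0, 3)
```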

  3. AVCS Simulator Test Plan and Design Guide

    NASA Technical Reports Server (NTRS)

    Shelden, Stephen

    2001-01-01

Internal document for communicating AVCS direction and documenting simulator functionality. Discusses methods for AVCS simulator evaluation of pilot functions and an implementation strategy of varying functional representations of pilot tasks (by instantiating a base AVCS to reasonably approximate the interfaces of various vehicles -- e.g., Altair, GlobalHawk, etc.).

  4. BRYNTRN: A baryon transport computer code, computation procedures and data base

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Chun, Sang Y.; Buck, Warren W.; Khan, Ferdous; Cucinotta, Frank

    1988-01-01

The development of an interaction data base and a numerical solution to the transport of baryons through arbitrary shield material, based on a straight-ahead approximation of the Boltzmann equation, is described. The code is most accurate for continuous-energy boundary values but gives reasonable results for discrete spectra at the boundary, even with a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O).

  5. Proposal for a Joint NASA/KSAT Ka-band RF Propagation Terminal at Svalbard, Norway

    NASA Technical Reports Server (NTRS)

    Volosin, Jeffrey; Acosta, Roberto; Nessel, James; McCarthy, Kevin; Caroglanian, Armen

    2010-01-01

This slide presentation discusses the placement of a Ka-band RF propagation terminal at Svalbard, Norway. The Near Earth Network (NEN) station would be managed by Kongsberg Satellite Services (KSAT) and would benefit both NASA and KSAT. Details of the proposed NASA/KSAT campaign and the responsibilities each party would agree to are given. Among the several reasons for the placement, a primary one is comparison with the Alaska site: based on climatological similarities and differences with Alaska, the Svalbard site is expected to have good radiometer/beacon agreement approximately 99% of the time.

  6. Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model

    NASA Astrophysics Data System (ADS)

    Chen, Huaizhen; Zhang, Guangzhi

    2018-03-01

Natural fractures play an important role in the migration of hydrocarbon fluids. Based on a rock-physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on total rock compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive the azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and the azimuthal elastic impedance, implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and the reasonableness of its results. Synthetic tests confirm that the method can stably estimate fracture compliances from seismic data with a moderate signal-to-noise ratio (Gaussian noise), and the test on real data reveals that reasonable fracture compliances are obtained with the proposed method.

  7. Comment on “On the quantum theory of molecules” [J. Chem. Phys. 137, 22A544 (2012)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sutcliffe, Brian T., E-mail: bsutclif@ulb.ac.be; Woolley, R. Guy

    2014-01-21

    In our previous paper [B. T. Sutcliffe and R. G. Woolley, J. Chem. Phys. 137, 22A544 (2012)] we argued that the Born-Oppenheimer approximation could not be based on an exact transformation of the molecular Schrödinger equation. In this Comment we suggest that the fundamental reason for the approximate nature of the Born-Oppenheimer model is the lack of a complete set of functions for the electronic space, and the need to describe the continuous spectrum using spectral projection.

  8. Mississippi Labor Mobility Demonstration Project--Relocating the Unemployed: Dimensions of Success.

    ERIC Educational Resources Information Center

    Speight, John F.; And Others

    The document provides an analysis of relocation stability of individuals relocated during the March, 1970-November, 1971 contract period. Data bases were 1,244 applicants with screening information and 401 individuals with follow-up interview information. Approximately one half were in new areas six months after being relocated. Reasons for…

  9. Major Accidents (Gray Swans) Likelihood Modeling Using Accident Precursors and Approximate Reasoning.

    PubMed

    Khakzad, Nima; Khan, Faisal; Amyotte, Paul

    2015-07-01

Compared to the remarkable progress in risk analysis of normal accidents, the risk analysis of major accidents is not so well established, partly due to the complexity of such accidents and partly due to the low probabilities involved. The issue of low probabilities normally arises from the scarcity of data relevant to major accidents, since such accidents are few and far between. In this work, knowing that major accidents are frequently preceded by accident precursors, a novel precursor-based methodology has been developed for likelihood modeling of major accidents in critical infrastructures, based on a unique combination of accident precursor data, information theory, and approximate reasoning. For this purpose, we have introduced an innovative application of information analysis to identify the most informative near accident of a major accident. The observed data of the near accident were then used to establish predictive scenarios to foresee the occurrence of the major accident. We verified the methodology using offshore blowouts in the Gulf of Mexico, and then demonstrated its application to dam breaches in the United States. © 2015 Society for Risk Analysis.
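One way to picture the "most informative precursor" step, hedged heavily: the authors' information analysis is richer than this, and the precursor names and counts below are invented. A precursor whose observed outcomes are sharply distributed (low Shannon entropy) tells you more per observation than one whose outcomes are spread out.

```python
# Hedged illustration: rank candidate precursors by the Shannon
# entropy of their observed severity distributions; the lowest-entropy
# precursor is treated as the most informative.
import math

def entropy(counts):
    total = sum(counts)
    ps = [c / total for c in counts if c]
    return -sum(p * math.log2(p) for p in ps)

precursors = {                  # hypothetical counts per severity level
    "kick": [8, 1, 1],          # sharply peaked: informative
    "mud_loss": [4, 3, 3],      # spread out: less informative
}
most_informative = min(precursors, key=lambda k: entropy(precursors[k]))
```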

  10. Replication and Pedagogy in the History of Psychology VI: Egon Brunswik on Perception and Explicit Reasoning

    NASA Astrophysics Data System (ADS)

    Athy, Jeremy; Friedrich, Jeff; Delany, Eileen

    2008-05-01

Egon Brunswik (1903-1955) drew an interesting distinction between perception and explicit reasoning, arguing that perception provides quick estimates of an object's size that nearly always result in good approximations in uncertain environments, whereas explicit reasoning, while better at achieving exact estimates, can fail by wide margins. An experiment conducted by Brunswik to investigate these ideas was never published, and the only available information is a figure of the results presented in a posthumous book in 1956. We replicated and extended his study to gain insight into the procedures Brunswik used in obtaining his results. Explicit reasoning resulted in fewer, yet more extreme, errors than perception. Brunswik's graphical analysis of the results, however, led to different conclusions than did a modern statistically based analysis.

  11. Longitudinal studies of botulinum toxin in cervical dystonia: Why do patients discontinue therapy?

    PubMed

    Jinnah, H A; Comella, Cynthia L; Perlmutter, Joel; Lungu, Codrin; Hallett, Mark

    2018-06-01

Numerous studies have established botulinum toxin (BoNT) to be safe and effective for the treatment of cervical dystonia (CD). Despite its well-documented efficacy, there has been growing awareness that a significant proportion of CD patients discontinue therapy. The reasons for discontinuation are only partly understood. This summary describes longitudinal studies that provide information on the proportions of patients discontinuing BoNT therapy and their reasons for doing so. The data come predominantly from unblinded long-term follow-up studies, registry studies, and patient-based surveys. All types of longitudinal studies provide strong evidence that BoNT is both safe and effective in the treatment of CD for many years. Overall, approximately one third of CD patients discontinue BoNT. The most common reason for discontinuing therapy is lack of benefit, often described as primary or secondary non-response. The apparent lack of response is only rarely related to true immune-mediated resistance to BoNT. Other reasons for discontinuing include side effects, inconvenience, and cost. Although BoNT is safe and effective in the treatment of the majority of patients with CD, approximately one third discontinue. The increasing awareness of this significant proportion should encourage further efforts to optimize the administration of BoNT, to improve BoNT preparations to extend duration or reduce side effects, to develop add-on therapies that may mitigate swings in symptom severity, or to develop entirely novel treatment approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers.

    PubMed

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has systematically examined the relations between visual perception, as the starting point of visuospatial processing, and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and nonverbal matrix reasoning), visual form perception made unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation.

  13. Voluntary Withdrawal: Why Don't They Return?

    ERIC Educational Resources Information Center

    Ironside, Ellen M.

    Factors that influence voluntary withdrawal from the University of North Carolina at Chapel Hill are investigated. A survey based on a cohort of students admitted for the first time in fall 1977 was conducted with a response rate of approximately 50 percent. Major and minor reasons for not returning to the university are tabulated for males and…

  14. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
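The core computation behind a CHO figure of merit can be illustrated in a few lines. The sketch below estimates the Hotelling SNR², Δm̄ᵀS⁻¹Δm̄, from simulated two-channel outputs; the Gaussian channel data and mean shifts are invented stand-ins, not the MAP-reconstructed PET images of the study.

```python
import random

random.seed(0)

# Channel outputs for signal-absent (class 0) and signal-present (class 1)
# images: two hypothetical channels; the signal shifts the channel means.
n = 2000
class0 = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(n)]
class1 = [(random.gauss(0.8, 1.0), random.gauss(0.3, 1.0)) for _ in range(n)]

def cho_snr2(c0, c1):
    """CHO figure of merit: SNR^2 = dm^T S^-1 dm (2 channels, pooled S)."""
    m0 = [sum(v[i] for v in c0) / len(c0) for i in range(2)]
    m1 = [sum(v[i] for v in c1) / len(c1) for i in range(2)]
    dm = [m1[i] - m0[i] for i in range(2)]

    def cov(data, m):
        s = [[0.0] * 2 for _ in range(2)]
        for v in data:
            for i in range(2):
                for j in range(2):
                    s[i][j] += (v[i] - m[i]) * (v[j] - m[j])
        return [[s[i][j] / (len(data) - 1) for j in range(2)] for i in range(2)]

    a, b = cov(c0, m0), cov(c1, m1)
    S = [[(a[i][j] + b[i][j]) / 2 for j in range(2)] for i in range(2)]
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    Sinv = [[S[1][1] / det, -S[0][1] / det],
            [-S[1][0] / det, S[0][0] / det]]
    Sdm = [Sinv[i][0] * dm[0] + Sinv[i][1] * dm[1] for i in range(2)]
    return dm[0] * Sdm[0] + dm[1] * Sdm[1]

snr2 = cho_snr2(class0, class1)  # true value for this toy setup: 0.8^2 + 0.3^2 = 0.73
```

The point of the paper's closed form is to obtain such statistics without the Monte Carlo sampling used above.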

  15. Sound scattering by several zooplankton groups. II. Scattering models.

    PubMed

    Stanton, T K; Chu, D; Wiebe, P H

    1998-01-01

    Mathematical scattering models are derived and compared with data from zooplankton from several gross anatomical groups--fluidlike, elastic shelled, and gas bearing. The models are based upon the acoustically inferred boundary conditions determined from laboratory backscattering data presented in part I of this series [Stanton et al., J. Acoust. Soc. Am. 103, 225-235 (1998)]. The models use a combination of ray theory, modal-series solution, and distorted wave Born approximation (DWBA). The formulations, which are inherently approximate, are designed to include only the dominant scattering mechanisms as determined from the experiments. The models for the fluidlike animals (euphausiids in this case) ranged from the simplest case involving two rays, which could qualitatively describe the structure of target strength versus frequency for single pings, to the most complex case involving a rough inhomogeneous asymmetrically tapered bent cylinder using the DWBA-based formulation which could predict echo levels over all angles of incidence (including the difficult region of end-on incidence). The model for the elastic shelled body (gastropods in this case) involved development of an analytical model which takes into account irregularities and discontinuities of the shell. The model for gas-bearing animals (siphonophores) is a hybrid model which is composed of the summation of the exact solution to the gas sphere and the approximate DWBA-based formulation for arbitrarily shaped fluidlike bodies. There is also a simplified ray-based model for the siphonophore. The models are applied to data involving single pings, ping-to-ping variability, and echoes averaged over many pings. There is reasonable qualitative agreement between the predictions and single ping data, and reasonable quantitative agreement between the predictions and variability and averages of echo data.

  16. Stable same-sex friendships with higher achieving partners promote mathematical reasoning in lower achieving primary school children.

    PubMed

    DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik

    2015-11-01

    This study was designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and 1 year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal actor-partner interdependence model) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. © 2015 The British Psychological Society.

  17. Stable Same-Sex Friendships with Higher Achieving Partners Promote Mathematical Reasoning in Lower Achieving Primary School Children

    PubMed Central

    DeLay, Dawn; Laursen, Brett; Kiuru, Noona; Poikkeus, Anna-Maija; Aunola, Kaisa; Nurmi, Jari-Erik

    2015-01-01

    This study is designed to investigate friend influence over mathematical reasoning in a sample of 374 children in 187 same-sex friend dyads (184 girls in 92 friendships; 190 boys in 95 friendships). Participants completed surveys that measured mathematical reasoning in the 3rd grade (approximately 9 years old) and one year later in the 4th grade (approximately 10 years old). Analyses designed for dyadic data (i.e., longitudinal Actor-Partner Interdependence Models) indicated that higher achieving friends influenced the mathematical reasoning of lower achieving friends, but not the reverse. Specifically, greater initial levels of mathematical reasoning among higher achieving partners in the 3rd grade predicted greater increases in mathematical reasoning from 3rd grade to 4th grade among lower achieving partners. These effects held after controlling for peer acceptance and rejection, task avoidance, interest in mathematics, maternal support for homework, parental education, length of the friendship, and friendship group norms on mathematical reasoning. PMID:26402901

  18. Taking stock of medication wastage: Unused medications in US households.

    PubMed

    Law, Anandi V; Sakharkar, Prashant; Zargarzadeh, Amir; Tai, Bik Wai Bilvick; Hess, Karl; Hata, Micah; Mireles, Rudolph; Ha, Carolyn; Park, Tony J

    2015-01-01

    Despite the potential deleterious impact on patient safety, environmental safety and health care expenditures, the extent of unused prescription medications in US households and reasons for nonuse remain unknown. To estimate the extent, type and cost of unused medications and the reasons for their nonuse among US households. A cross-sectional, observational two-phased study was conducted using a convenience sample in Southern California. A web-based survey (Phase I, n = 238) at one health sciences institution and a paper-based survey (Phase II, n = 68) at planned drug take-back events at three community pharmacies were conducted. The extent, type, and cost of unused medications and the reasons for their nonuse were collected. Approximately 2 of 3 prescription medications were reported unused; disease/condition improved (42.4%), forgetfulness (5.8%), and side effects (6.5%) were the reasons cited for their nonuse. "Throwing medications in the trash" was found to be the most common method of disposal (63%). In phase I, pain medications (23.3%) and antibiotics (18%) were most commonly reported as unused, whereas in Phase II, 17% of medications for chronic conditions (hypertension, diabetes, cholesterol, heart disease) and 8.3% for mental health problems were commonly reported as unused. Phase II participants indicated pharmacy as a preferred location for drug disposal. The total estimated cost of unused medications was approximately $59,264.20 (average retail Rx price) to $152,014.89 (AWP) from both phases, borne largely by private health insurance. When extrapolated to a national level, this was approximately $2.4B for elderly patients taking five prescription medications to $5.4B for the 52% of US adults who take one prescription medication daily. Two out of three dispensed medications were unused, with national projected costs ranging from $2.4B to $5.4B. 
This wastage raises concerns about adherence, cost and safety; additionally, it points to the need for public awareness and policy to reduce wastage. Pharmacists can play an important role by educating patients both on appropriate medication use and disposal. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Reasoning with Vectors: A Continuous Model for Fast Robust Inference.

    PubMed

    Widdows, Dominic; Cohen, Trevor

    2015-10-01

    This paper describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behavior of more traditional deduction engines such as theorem provers. The paper explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this paper include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type-inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this paper are all publicly released and freely available in the Semantic Vectors open-source software package.
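As a rough illustration of the binding-and-superposition machinery behind Vector Symbolic Architectures, the sketch below encodes two subject-predicate-object triples and answers a query by unbinding. The drug/symptom names and the bipolar random vectors are illustrative assumptions, not the actual representation used by the Semantic Vectors package.

```python
import random

random.seed(1)
D = 4096  # high dimensionality makes independent random vectors nearly orthogonal

def rand_vec():
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    # elementwise multiplication: a self-inverse binding operator
    return [x * y for x, y in zip(a, b)]

def sim(a, b):
    # dot product normalized by D: ~1 for a present component, ~0 for an absent one
    return sum(x * y for x, y in zip(a, b)) / D

# hypothetical mini knowledge base: "aspirin TREATS headache", "aspirin TREATS fever"
TREATS = rand_vec()
headache, fever, unrelated = rand_vec(), rand_vec(), rand_vec()

# semantic vector for aspirin: superposition of its bound predicate-object pairs
sem_aspirin = [a + b for a, b in zip(bind(TREATS, headache), bind(TREATS, fever))]

# query "what does aspirin TREAT?": unbinding recovers headache + fever plus noise
probe = bind(TREATS, sem_aspirin)
sim_headache = sim(probe, headache)
sim_fever = sim(probe, fever)
sim_unrelated = sim(probe, unrelated)
```

Both stored objects score near 1 against the probe, while an unrelated vector scores near 0, which is the property that makes approximate but robust inference possible.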

  20. Reasoning with Vectors: A Continuous Model for Fast Robust Inference

    PubMed Central

    Widdows, Dominic; Cohen, Trevor

    2015-01-01

    This paper describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behavior of more traditional deduction engines such as theorem provers. The paper explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this paper include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type-inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this paper are all publicly released and freely available in the Semantic Vectors open-source software package. PMID:26582967

  1. Uncertainty management by relaxation of conflicting constraints in production process scheduling

    NASA Technical Reports Server (NTRS)

    Dorn, Juergen; Slany, Wolfgang; Stary, Christian

    1992-01-01

    Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
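A minimal sketch of the fuzzy classification step described above, assuming invented membership functions and job attributes (the actual steel-plant constraints are far richer):

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def importance(due_in_hours, grade):
    # "urgent": fully so at <= 2 h to the due date, not at all beyond 10 h
    urgent = max(0.0, min(1.0, (10.0 - due_in_hours) / 8.0))
    # "high-grade": triangular membership around grade 8 on a 0-10 quality scale
    high_grade = tri(grade, 5.0, 8.0, 11.0)
    # fuzzy AND via min: a job is important if it is urgent AND high-grade
    return min(urgent, high_grade)

# jobs as (hours until due, steel grade); the values are made up for illustration
jobs = {"J1": (3, 9), "J2": (12, 9), "J3": (4, 5)}
ranked = sorted(jobs, key=lambda j: -importance(*jobs[j]))
```

The fuzzy degrees give a graded ordering of jobs rather than a hard cutoff, which is what allows conflicting constraints to be relaxed by degree rather than dropped outright.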

  2. Discovering relevance knowledge in data: a growing cell structures approach.

    PubMed

    Azuaje, F; Dubitzky, W; Black, N; Adamson, K

    2000-01-01

    Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar or how they should be indexed is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight and understanding of the underlying mechanisms of the proposed model, a detailed empirical analysis of the model's structural and behavioral properties is also provided.

  3. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
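The notion of an anytime algorithm is easy to make concrete. The sketch below treats the Leibniz series for π as an anytime procedure: interrupted at any budget, it returns a usable answer whose quality grows with the time allotted. The series is an illustrative stand-in for the control computations discussed, not an algorithm from the paper.

```python
import math

def anytime_pi(budget_terms):
    """Leibniz series for pi as an anytime algorithm: a usable (approximate)
    answer is available after every term, improving with more computation."""
    est = 0.0
    for k in range(budget_terms):
        est += (-1) ** k / (2 * k + 1)
        yield 4 * est

errors = []
for budget in (10, 100, 1000):
    answer = None
    for answer in anytime_pi(budget):
        pass  # a deadline-driven controller would break out of this loop early
    errors.append(abs(answer - math.pi))
# more allotted computation -> a strictly better approximation for this series
```

A scheduler of the kind the abstract describes would allocate budgets across several such generators according to the expected value of each one's improving answer.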

  4. Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2014-02-01

    Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
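A toy version of this pipeline, with a Poisson draw standing in for the forest simulator: replicate simulations yield a summary statistic, a Gaussian fitted to the replicates serves as the parametric likelihood approximation, and that approximation drives a Metropolis sampler. The model, summary statistic, and tuning constants are all illustrative assumptions, far simpler than FORMIND.

```python
import math
import random

random.seed(2)

def poisson(lam):
    """Knuth's algorithm; stands in for one run of a stochastic simulator."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def summary(rate, n=30):
    # summary statistic of one simulated data set: the sample mean
    return sum(poisson(rate) for _ in range(n)) / n

def synthetic_loglik(rate, observed, reps=60):
    # fit a Gaussian to replicate summaries -> parametric likelihood approximation
    sims = [summary(rate) for _ in range(reps)]
    mu = sum(sims) / reps
    var = sum((s - mu) ** 2 for s in sims) / (reps - 1) + 1e-9
    return -0.5 * math.log(2 * math.pi * var) - (observed - mu) ** 2 / (2 * var)

observed = 3.9          # "field data" summary; pretend the true rate is near 4
rate = 1.0              # deliberately poor starting value
ll = synthetic_loglik(rate, observed)
chain = []
for _ in range(200):    # Metropolis sampler driven by the approximate likelihood
    prop = abs(rate + random.gauss(0, 0.5))
    ll_prop = synthetic_loglik(prop, observed)
    if math.log(random.random()) < ll_prop - ll:
        rate, ll = prop, ll_prop
    chain.append(rate)

posterior_mean = sum(chain[80:]) / len(chain[80:])  # discard burn-in
```

Even with a noisy, simulation-based likelihood, the chain recovers a posterior centered near the value that generated the "observed" summary, which is the behavior the paper verifies at much larger scale.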

  5. Proportional Reasoning and the Visually Impaired

    ERIC Educational Resources Information Center

    Hilton, Geoff; Hilton, Annette; Dole, Shelley L.; Goos, Merrilyn; O'Brien, Mia

    2012-01-01

    Proportional reasoning is an important aspect of formal thinking that is acquired during the developmental years that approximate the middle years of schooling. Students who fail to acquire sound proportional reasoning often experience difficulties in subjects that require quantitative thinking, such as science, technology, engineering, and…

  6. Estimating the capacity value of concentrating solar power plants: A case study of the southwestern United States

    DOE PAGES

    Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul

    2012-01-27

    Here, we estimate the capacity value of concentrating solar power (CSP) plants without thermal energy storage in the southwestern U.S. Our results show that CSP plants have capacity values that are between 45% and 95% of maximum capacity, depending on their location and configuration. We also examine the sensitivity of the capacity value of CSP to a number of factors and show that capacity factor-based methods can provide reasonable approximations of reliability-based estimates.
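A capacity-factor-based approximation of the kind referred to here can be sketched in a few lines; the hourly load and solar profiles below are synthetic stand-ins, not the southwestern-U.S. data of the study.

```python
import math
import random

random.seed(6)

# One synthetic year of hourly data: system load peaks in the late afternoon,
# CSP output (no storage) follows the sun. Both profiles are toy stand-ins.
load, output = [], []
for h in range(8760):
    t = h % 24
    load.append(0.6 + 0.4 * math.exp(-((t - 17) ** 2) / 18.0)
                + random.gauss(0, 0.02))
    sun = max(0.0, math.sin(math.pi * (t - 6) / 12.0))  # daylight 06:00-18:00
    output.append(sun * random.uniform(0.7, 1.0))       # clouds, availability

# capacity-factor-based approximation: mean normalized output during the
# top 1% highest-load hours, standing in for a full reliability (ELCC) study
top_hours = sorted(range(8760), key=lambda h: -load[h])[:88]
capacity_value = sum(output[h] for h in top_hours) / len(top_hours)
```

Because the toy load peaks after solar noon, the approximation yields a capacity value well below the plant's midday capacity factor, the same qualitative effect that makes location and configuration matter in the study.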

  7. On the derivation of approximations to cellular automata models and the assumption of independence.

    PubMed

    Davies, K J; Green, J E F; Bean, N G; Binder, B J; Ross, J V

    2014-07-01

    Cellular automata are discrete agent-based models, generally used in cell-based applications. There is much interest in obtaining continuum models that describe the mean behaviour of the agents in these models. Previously, continuum models have been derived for agents undergoing motility and proliferation processes, however, these models only hold under restricted conditions. In order to narrow down the reason for these restrictions, we explore three possible sources of error in deriving the model. These sources are the choice of limiting arguments, the use of a discrete-time model as opposed to a continuous-time model and the assumption of independence between the state of sites. We present a rigorous analysis in order to gain a greater understanding of the significance of these three issues. By finding a limiting regime that accurately approximates the conservation equation for the cellular automata, we are able to conclude that the inaccuracy between our approximation and the cellular automata is completely based on the assumption of independence. Copyright © 2014 Elsevier Inc. All rights reserved.
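The gap opened by the independence assumption is easy to reproduce in one dimension. In the hypothetical sketch below, a proliferation-only automaton is compared with its logistic mean-field approximation, which overestimates growth once agents cluster; the parameters are invented, and the paper's analysis is far more careful.

```python
import random

random.seed(3)

N, p, steps = 1000, 0.1, 100
lattice = [1 if random.random() < 0.05 else 0 for _ in range(N)]

# cellular automaton: each agent attempts, with probability p per step, to
# place a daughter on a randomly chosen neighbour site (periodic 1D lattice)
for _ in range(steps):
    for i in [k for k, s in enumerate(lattice) if s]:  # snapshot of occupants
        if random.random() < p:
            j = (i + random.choice((-1, 1))) % N
            if lattice[j] == 0:
                lattice[j] = 1
ca_final = sum(lattice) / N

# mean-field prediction under the independence assumption: logistic growth
c = 0.05
for _ in range(steps):
    c += p * c * (1 - c)
mf_final = c
```

The mean-field density saturates quickly, while the automaton grows only at cluster boundaries and lags well behind, illustrating why the approximation's accuracy hinges on the independence of site occupancies.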

  8. The role of electron heat flux in guide-field magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hesse, Michael; Kuznetsova, Masha; Birn, Joachim

    2004-12-01

    A combination of analytical theory and particle-in-cell simulations is employed in order to investigate the electron dynamics near and at the site of guide field magnetic reconnection. A detailed analysis of the contributions to the reconnection electric field shows that both bulk inertia and pressure-based quasiviscous processes are important for the electrons. Analytic scaling demonstrates that conventional approximations for the electron pressure tensor behavior in the dissipation region fail, and that heat flux contributions need to be accounted for. Based on the evolution equation of the heat flux three-tensor, which is derived in this paper, an approximate form of the relevant heat flux contributions to the pressure tensor is developed, which reproduces the numerical modeling result reasonably well. Based on this approximation, it is possible to develop a scaling of the electron current layer in the central dissipation region. It is shown that the pressure tensor contributions become important at the scale length defined by the electron Larmor radius in the guide magnetic field.

  9. THE REASONING METHODS AND REASONING ABILITY IN NORMAL AND MENTALLY RETARDED GIRLS AND THE REASONING ABILITY OF NORMAL AND MENTALLY RETARDED BOYS AND GIRLS.

    ERIC Educational Resources Information Center

    CAPOBIANCO, RUDOLPH J.; AND OTHERS

    A STUDY WAS MADE TO ESTABLISH AND ANALYZE THE METHODS OF SOLVING INDUCTIVE REASONING PROBLEMS BY MENTALLY RETARDED CHILDREN. THE MAJOR OBJECTIVES WERE--(1) TO EXPLORE AND DESCRIBE REASONING IN MENTALLY RETARDED CHILDREN, (2) TO COMPARE THEIR METHODS WITH THOSE UTILIZED BY NORMAL CHILDREN OF APPROXIMATELY THE SAME MENTAL AGE, (3) TO EXPLORE THE…

  10. Accelerating cross-validation with total variation and its application to super-resolution imaging

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Ikeda, Shiro; Akiyama, Kazunori; Kabashima, Yoshiyuki

    2017-12-01

    We develop an approximation formula for the cross-validation error (CVE) of a sparse linear regression penalized by ℓ_1-norm and total variation terms, which is based on a perturbative expansion utilizing the largeness of both the data dimensionality and the model. The developed formula allows us to reduce the necessary computational cost of the CVE evaluation significantly. The practicality of the formula is tested through application to simulated black-hole image reconstruction on the event-horizon scale with super resolution. The results demonstrate that our approximation reproduces the CVE values obtained via literally conducted cross-validation with reasonably good precision.
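The idea of replacing literal cross-validation with a closed-form rescaling can be demonstrated in a simpler setting. The sketch below uses one-feature ridge regression, where the leave-one-out shortcut is exact; the paper's contribution is an analogous perturbative formula for the much harder ℓ_1 + total-variation penalty.

```python
import random

random.seed(4)

# Leave-one-out CV error for one-feature ridge regression, computed two ways:
# literally (n refits) and with the hat-matrix shortcut
#   e_i = (y_i - yhat_i) / (1 - H_ii),
# which is exact for ridge regression with a fixed penalty.
n, lam = 50, 1.0
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 0.3) for xi in x]

def ridge_coef(xs, ys):
    return sum(a * b for a, b in zip(xs, ys)) / (sum(a * a for a in xs) + lam)

# literal LOO: refit once per held-out point
cve_literal = 0.0
for i in range(n):
    w_i = ridge_coef(x[:i] + x[i + 1:], y[:i] + y[i + 1:])
    cve_literal += (y[i] - w_i * x[i]) ** 2
cve_literal /= n

# shortcut: a single fit on all data, residuals rescaled by leverage
w = ridge_coef(x, y)
sxx = sum(a * a for a in x) + lam
cve_fast = sum(((y[i] - w * x[i]) / (1 - x[i] * x[i] / sxx)) ** 2
               for i in range(n)) / n
```

The two numbers agree to machine precision here; for the ℓ_1 + TV estimator no exact identity exists, which is why the paper resorts to a perturbative expansion in the large-dimension regime.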

  11. Thermometric titration of acids in pyridine.

    PubMed

    Vidal, R; Mukherjee, L M

    1974-04-01

    Thermometric titrations of HClO(4), HI, HNO(3), HBr, picric acid, o-nitrobenzoic acid, 2,4- and 2,5-dinitrophenol, acetic acid, and benzoic acid have been attempted in pyridine as solvent, using 1,3-diphenylguanidine as the base. Except in the case of 2,5-dinitrophenol, acetic acid and benzoic acid, the results are, in general, reasonably satisfactory. The approximate molar heats of neutralization have been calculated.

  12. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    PubMed Central

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  13. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  14. Improving Perception to Make Distant Connections Closer

    PubMed Central

    Goldstone, Robert L.; Landy, David; Brunel, Lionel C.

    2011-01-01

    One of the challenges for perceptually grounded accounts of high-level cognition is to explain how people make connections and draw inferences between situations that superficially have little in common. Evidence suggests that people draw these connections even without having explicit, verbalizable knowledge of their bases. Instead, the connections are based on sub-symbolic representations that are grounded in perception, action, and space. One reason why people are able to spontaneously see relations between situations that initially appear to be unrelated is that their eventual perceptions are not restricted to initial appearances. Training and strategic deployment allow our perceptual processes to deliver outputs that would have otherwise required abstract or formal reasoning. Even without people having any privileged access to the internal operations of perceptual modules, these modules can be systematically altered so as to better serve our high-level reasoning needs. Moreover, perceptually based processes can be altered in a number of ways to closely approximate formally sanctioned computations. To be concrete about mechanisms of perceptual change, we present 21 illustrations of ways in which we alter, adjust, and augment our perceptual systems with the intention of having them better satisfy our needs. PMID:22207861

  15. Metacognition and reasoning

    PubMed Central

    Fletcher, Logan; Carruthers, Peter

    2012-01-01

    This article considers the cognitive architecture of human meta-reasoning: that is, metacognition concerning one's own reasoning and decision-making. The view we defend is that meta-reasoning is a cobbled-together skill comprising diverse self-management strategies acquired through individual and cultural learning. These approximate the monitoring-and-control functions of a postulated adaptive system for metacognition by recruiting mechanisms that were designed for quite other purposes. PMID:22492753

  16. 38 CFR 3.102 - Reasonable doubt.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... degree of disability, or any other point, such doubt will be resolved in favor of the claimant. By reasonable doubt is meant one which exists because of an approximate balance of positive and negative...

  17. Thermal refraction focusing in planar index-antiguided lasers.

    PubMed

    Casperson, Lee W; Dittli, Adam; Her, Tsing-Hua

    2013-03-15

    Thermal refraction focusing in planar index-antiguided lasers is investigated both theoretically and experimentally. An analytical model based on the zero-field approximation is presented for treating the combined effects of index antiguiding and thermal focusing. At very low pumping power, the mode is antiguided by the amplifier boundary, whereas at high pumping power it narrows due to thermal focusing. Theoretical results are in reasonable agreement with experimental data.

  18. Calculation of wing response to gusts and blast waves with vortex lift effect

    NASA Technical Reports Server (NTRS)

    Chao, D. C.; Lan, C. E.

    1983-01-01

    A numerical study of the response of aircraft wings to atmospheric gusts and to nuclear explosions when flying at subsonic speeds is presented. The method is based upon the unsteady quasi-vortex-lattice method, the unsteady suction analogy, and Padé approximants. The calculated results, showing the vortex lag effect, yield reasonable agreement with experimental data for incremental lift on wings in gust penetration and due to nuclear blast waves.

  19. Two-dimensional character of internal rotation of furfural and other five-member heterocyclic aromatic aldehydes

    NASA Astrophysics Data System (ADS)

    Bataev, Vadim A.; Pupyshev, Vladimir I.; Godunov, Igor A.

    2016-05-01

    The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for the molecules of furfural and some other five-member heterocyclic aromatic aldehydes by the use of MP2/6-311G** quantum chemical approximation. It is demonstrated that the traditional one-dimensional models of internal rotation for the molecules studied have only limited applicability. The reason is the strong kinematic interaction of the rotation of the CHO group and out-of-plane CHO deformation that is realized for the molecules under consideration. The computational procedure based on the two-dimensional approximation is considered for low lying vibrational states as more adequate to the problem.

  20. Approximate maximum likelihood decoding of block codes

    NASA Technical Reports Server (NTRS)

    Greenberger, H. J.

    1979-01-01

    Approximate maximum likelihood decoding algorithms, based upon selecting a small set of candidate code words with the aid of the estimated probability of error of each received symbol, can give performance close to optimum with a reasonable amount of computation. By combining the best features of various algorithms and taking care to perform each step as efficiently as possible, a decoding scheme was developed which can decode codes which have better performance than those presently in use and yet not require an unreasonable amount of computation. The discussion of the details and tradeoffs of presently known efficient optimum and near optimum decoding algorithms leads, naturally, to the one which embodies the best features of all of them.
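The candidate-selection strategy described in this abstract (hard-decide each symbol, flip the least-reliable positions, and keep the candidate codeword with the best correlation metric) can be illustrated with a toy code. The sketch below is a Chase-style decoder for a (7,4) Hamming code with BPSK soft values; it illustrates the general approach, not Greenberger's specific scheme.

```python
from itertools import combinations, product

# Toy (7,4) Hamming code: systematic generator matrix.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    # codeword = msg . G over GF(2)
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

CODEBOOK = [encode(list(m)) for m in product([0, 1], repeat=4)]

def chase_decode(soft, flips=2):
    """Approximate ML decoding: hard-decide, then flip subsets of the
    `flips` least-reliable positions and keep the candidate codeword
    with the best correlation to the received soft values."""
    hard = [1 if s < 0 else 0 for s in soft]          # BPSK: +1 -> 0, -1 -> 1
    order = sorted(range(len(soft)), key=lambda i: abs(soft[i]))[:flips]
    best, best_metric = None, float("-inf")
    for r in range(flips + 1):
        for pattern in combinations(order, r):
            test = hard[:]
            for i in pattern:
                test[i] ^= 1
            # nearest codeword to the test pattern (tiny code: scan all)
            cw = min(CODEBOOK, key=lambda c: sum(a != b for a, b in zip(c, test)))
            metric = sum((1 - 2 * b) * s for b, s in zip(cw, soft))
            if metric > best_metric:
                best, best_metric = cw, metric
    return best

# A noisy received word for the all-zero codeword (+1 on every symbol),
# with two unreliable symbols, one of them flipped.
received = [0.9, 1.1, -0.2, 0.8, 1.0, 0.7, 0.3]
print(chase_decode(received))
```

Only a handful of candidates are examined rather than all 16 codewords, which is the computational saving the abstract refers to; for long codes the candidate list is much smaller than the codebook.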

  1. Survey of HEPA filter experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, E.H.

    1982-07-01

A survey of high efficiency particulate air (HEPA) filter applications and experience at Department of Energy (DOE) sites was conducted to provide an overview of the reasons for and magnitude of HEPA filter changeouts and failures. Results indicated that approximately 58% of the filters surveyed were changed out in the three year study period, and some 18% of all filters were changed out more than once. Most changeouts (63%) were due to the existence of a high pressure drop across the filter, indicative of filter plugging. Other reasons for changeout included leak-test failure (15%), preventive maintenance service life limit (13%), suspected damage (5%) and radiation buildup (4%). Filter failures occurred with approximately 12% of all installed filters. Of these failures, most (64%) occurred for unknown or unreported reasons. Handling or installation damage accounted for an additional 19% of reported failures. Media ruptures, filter-frame failures and seal failures each accounted for approximately 5 to 6% of the reported failures.

  2. Energy conservation - A test for scattering approximations

    NASA Technical Reports Server (NTRS)

    Acquista, C.; Holland, A. C.

    1980-01-01

    The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.

  3. Bifurcations in models of a society of reasonable contrarians and conformists

    NASA Astrophysics Data System (ADS)

    Bagnoli, Franco; Rechtman, Raúl

    2015-10-01

    We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.
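A minimal mean-field sketch of this dynamics is given below. The transition functions and the "overwhelming majority" threshold are illustrative choices, not the authors' exact model, but they reproduce the qualitative picture: a polarized fixed point when conformists dominate, coherent period-2 oscillations when reasonable contrarians dominate.

```python
import math

def p_up_conformist(m, beta=4.0):
    # conformists align with the average opinion m in [0, 1]
    return 1.0 / (1.0 + math.exp(-beta * (2 * m - 1)))

def p_up_contrarian(m, beta=4.0, threshold=0.95):
    # reasonable contrarians oppose the majority, but revert to
    # conformist behavior when the majority is overwhelming
    if m > threshold or m < 1 - threshold:
        return p_up_conformist(m, beta)
    return 1.0 - p_up_conformist(m, beta)

def step(m, contrarian_fraction, beta=4.0):
    # mean-field update of the fraction m of "up" opinions
    return ((1 - contrarian_fraction) * p_up_conformist(m, beta)
            + contrarian_fraction * p_up_contrarian(m, beta))

def orbit(m0, contrarian_fraction, steps=200, keep=8):
    """Discard a transient, then return the next `keep` iterates."""
    m = m0
    for _ in range(steps):
        m = step(m, contrarian_fraction)
    tail = []
    for _ in range(keep):
        m = step(m, contrarian_fraction)
        tail.append(round(m, 4))
    return tail

# Mostly conformists: opinions polarize to a fixed point.
print(orbit(0.55, contrarian_fraction=0.1))
# Mostly reasonable contrarians: a period-2 oscillation.
print(orbit(0.55, contrarian_fraction=0.9))
```

Sweeping `contrarian_fraction` (or `beta`) in such a map traces out the pitchfork and period-doubling bifurcations the abstract describes.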

  4. Bifurcations in models of a society of reasonable contrarians and conformists.

    PubMed

    Bagnoli, Franco; Rechtman, Raúl

    2015-10-01

    We study models of a society composed of a mixture of conformist and reasonable contrarian agents that at any instant hold one of two opinions. Conformists tend to agree with the average opinion of their neighbors and reasonable contrarians tend to disagree, but revert to a conformist behavior in the presence of an overwhelming majority, in line with psychological experiments. The model is studied in the mean-field approximation and on small-world and scale-free networks. In the mean-field approximation, a large fraction of conformists triggers a polarization of the opinions, a pitchfork bifurcation, while a majority of reasonable contrarians leads to coherent oscillations, with an alternation of period-doubling and pitchfork bifurcations up to chaos. Similar scenarios are obtained by changing the fraction of long-range rewiring and the parameter of scale-free networks related to the average connectivity.

  5. A geometric modeler based on a dual-geometry representation polyhedra and rational b-splines

    NASA Technical Reports Server (NTRS)

    Klosterman, A. L.

    1984-01-01

    For speed and data base reasons, solid geometric modeling of large complex practical systems is usually approximated by a polyhedra representation. Precise parametric surface and implicit algebraic modelers are available but it is not yet practical to model the same level of system complexity with these precise modelers. In response to this contrast the GEOMOD geometric modeling system was built so that a polyhedra abstraction of the geometry would be available for interactive modeling without losing the precise definition of the geometry. Part of the reason that polyhedra modelers are effective is that all bounded surfaces can be represented in a single canonical format (i.e., sets of planar polygons). This permits a very simple and compact data structure. Nonuniform rational B-splines are currently the best representation to describe a very large class of geometry precisely with one canonical format. The specific capabilities of the modeler are described.

  6. Prediction of destabilizing blade tip forces for shrouded and unshrouded turbines

    NASA Technical Reports Server (NTRS)

    Qiu, Y. J.; Martinezsanchez, M.

    1985-01-01

The effect of a nonuniform flow field on the Alford force calculation is investigated. The ideas used here are based on those developed by Horlock and Greitzer. It is shown that the nonuniformity of the flow field does contribute to the Alford force calculation. An attempt is also made to include the effect of whirl speed. The values predicted by the model are compared with those obtained experimentally by Urlicks and Wohlrab. The possibility of using existing turbine tip loss correlations to predict beta is also explored. The nonuniform flow field induced by the tip clearance variation tends to increase the resultant destabilizing force over and above what would be predicted on the basis of the local variation of efficiency. On the other hand, the pressure force due to the nonuniform inlet and exit pressure also plays a part even for unshrouded blades, and this counteracts the flow field effects, so that the simple Alford prediction remains a reasonable approximation. Once the efficiency variation with clearance is known, the presented model gives a slightly overpredicted, but reasonably accurate, destabilizing force. In the absence of efficiency vs. clearance data, an empirical tip loss coefficient can be used to give a reasonable prediction of the destabilizing force. To a first approximation, the whirl does have a damping effect, but only of small magnitude, and thus it can be ignored for some purposes.

  7. 77 FR 41742 - In the Matter of: Humane Restraint, Inc., 912 Bethel Circle, Waunakee, WI 53597, Respondent...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-16

    ... under Export Control Classification Number (``ECCN'') 0A982, controlled for Crime Control reasons, and..., classified under ECCN 0A982, controlled for Crime Control reasons, and valued at approximately $112, from the... kit, items classified under ECCN 0A982, controlled for Crime Control reasons, and valued at...

  8. Terminated Trials in the ClinicalTrials.gov Results Database: Evaluation of Availability of Primary Outcome Data and Reasons for Termination

    PubMed Central

    Williams, Rebecca J.; Tse, Tony; DiPiazza, Katelyn; Zarin, Deborah A.

    2015-01-01

Background Clinical trials that end prematurely (or “terminate”) raise financial, ethical, and scientific concerns. The extent to which the results of such trials are disseminated and the reasons for termination have not been well characterized. Methods and Findings A cross-sectional, descriptive study of terminated clinical trials posted on the ClinicalTrials.gov results database as of February 2013 was conducted. The main outcomes were to characterize the availability of primary outcome data on ClinicalTrials.gov and in the published literature and to identify the reasons for trial termination. Approximately 12% of trials with results posted on the ClinicalTrials.gov results database (905/7,646) were terminated. Most trials were terminated for reasons other than accumulated data from the trial (68%; 619/905), with an insufficient rate of accrual being the lead reason for termination among these trials (57%; 350/619). Of the remaining trials, 21% (193/905) were terminated based on data from the trial (findings of efficacy or toxicity) and 10% (93/905) did not specify a reason. Overall, data for a primary outcome measure were available on ClinicalTrials.gov and in the published literature for 72% (648/905) and 22% (198/905) of trials, respectively. Primary outcome data were reported on the ClinicalTrials.gov results database and in the published literature more frequently (91% and 46%, respectively) when the decision to terminate was based on data from the trial. Conclusions Trials terminate for a variety of reasons, not all of which reflect failures in the process or an inability to achieve the intended goals. Primary outcome data were reported most often when termination was based on data from the trial. Further research is needed to identify best practices for disseminating the experience and data resulting from terminated trials in order to help ensure maximal societal benefit from the investments of trial participants and others involved with the study. PMID:26011295

  9. Terminated Trials in the ClinicalTrials.gov Results Database: Evaluation of Availability of Primary Outcome Data and Reasons for Termination.

    PubMed

    Williams, Rebecca J; Tse, Tony; DiPiazza, Katelyn; Zarin, Deborah A

    2015-01-01

    Clinical trials that end prematurely (or "terminate") raise financial, ethical, and scientific concerns. The extent to which the results of such trials are disseminated and the reasons for termination have not been well characterized. A cross-sectional, descriptive study of terminated clinical trials posted on the ClinicalTrials.gov results database as of February 2013 was conducted. The main outcomes were to characterize the availability of primary outcome data on ClinicalTrials.gov and in the published literature and to identify the reasons for trial termination. Approximately 12% of trials with results posted on the ClinicalTrials.gov results database (905/7,646) were terminated. Most trials were terminated for reasons other than accumulated data from the trial (68%; 619/905), with an insufficient rate of accrual being the lead reason for termination among these trials (57%; 350/619). Of the remaining trials, 21% (193/905) were terminated based on data from the trial (findings of efficacy or toxicity) and 10% (93/905) did not specify a reason. Overall, data for a primary outcome measure were available on ClinicalTrials.gov and in the published literature for 72% (648/905) and 22% (198/905) of trials, respectively. Primary outcome data were reported on the ClinicalTrials.gov results database and in the published literature more frequently (91% and 46%, respectively) when the decision to terminate was based on data from the trial. Trials terminate for a variety of reasons, not all of which reflect failures in the process or an inability to achieve the intended goals. Primary outcome data were reported most often when termination was based on data from the trial. Further research is needed to identify best practices for disseminating the experience and data resulting from terminated trials in order to help ensure maximal societal benefit from the investments of trial participants and others involved with the study.

  10. MULTIPROCESSOR AND DISTRIBUTED PROCESSING BIBLIOGRAPHIC DATA BASE SOFTWARE SYSTEM

    NASA Technical Reports Server (NTRS)

    Miya, E. N.

    1994-01-01

    Multiprocessors and distributed processing are undergoing increased scientific scrutiny for many reasons. It is more and more difficult to keep track of the existing research in these fields. This package consists of a large machine-readable bibliographic data base which, in addition to the usual keyword searches, can be used for producing citations, indexes, and cross-references. The data base is compiled from smaller existing multiprocessing bibliographies, and tables of contents from journals and significant conferences. There are approximately 4,000 entries covering topics such as parallel and vector processing, networks, supercomputers, fault-tolerant computers, and cellular automata. Each entry is represented by 21 fields including keywords, author, referencing book or journal title, volume and page number, and date and city of publication. The data base contains UNIX 'refer' formatted ASCII data and can be implemented on any computer running under the UNIX operating system. The data base requires approximately one megabyte of secondary storage. The documentation for this program is included with the distribution tape, although it can be purchased for the price below. This bibliography was compiled in 1985 and updated in 1988.
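The entries are stored as tagged ASCII records in UNIX refer(1) format, which can be sketched as follows. The record content here is illustrative; only the tag convention (`%A` author, one per line; `%T` title; `%D` date; `%K` keywords) follows the standard refer scheme.

```python
# One bibliographic entry as a dict; list values emit one tagged line each.
entry = {
    "%A": ["Miya, E. N."],
    "%T": "Multiprocessor/Distributed Processing Bibliography",
    "%D": "1985",
    "%K": "parallel processing, supercomputers",
}

def to_refer(fields):
    """Serialize a field dict into a refer(1)-style tagged record."""
    lines = []
    for tag, value in fields.items():
        values = value if isinstance(value, list) else [value]
        lines.extend(f"{tag} {v}" for v in values)
    return "\n".join(lines)

print(to_refer(entry))
```

Records in this flat tagged form are easy to grep for keyword searches and to reformat into citations or indexes, which is how the package uses them.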

  11. Higher-level fusion for military operations based on abductive inference: proof of principle

    NASA Astrophysics Data System (ADS)

    Pantaleev, Aleksandar V.; Josephson, John

    2006-04-01

    The ability of contemporary military commanders to estimate and understand complicated situations already suffers from information overload, and the situation can only grow worse. We describe a prototype application that uses abductive inferencing to fuse information from multiple sensors to evaluate the evidence for higher-level hypotheses that are close to the levels of abstraction needed for decision making (approximately JDL levels 2 and 3). Abductive inference (abduction, inference to the best explanation) is a pattern of reasoning that occurs naturally in diverse settings such as medical diagnosis, criminal investigations, scientific theory formation, and military intelligence analysis. Because abduction is part of common-sense reasoning, implementations of it can produce reasoning traces that are very human understandable. Automated abductive inferencing can be deployed to augment human reasoning, taking advantage of computation to process large amounts of information, and to bypass limits to human attention and short-term memory. We illustrate the workings of the prototype system by describing an example of its use for small-unit military operations in an urban setting. Knowledge was encoded as it might be captured prior to engagement from a standard military decision making process (MDMP) and analysis of commander's priority intelligence requirements (PIR). The system is able to reasonably estimate the evidence for higher-level hypotheses based on information from multiple sensors. Its inference processes can be examined closely to verify correctness. Decision makers can override conclusions at any level and changes will propagate appropriately.

  12. Two-dimensional character of internal rotation of furfural and other five-member heterocyclic aromatic aldehydes.

    PubMed

    Bataev, Vadim A; Pupyshev, Vladimir I; Godunov, Igor A

    2016-05-15

    The features of nuclear motion corresponding to the rotation of the formyl group (CHO) are studied for the molecules of furfural and some other five-member heterocyclic aromatic aldehydes by the use of MP2/6-311G** quantum chemical approximation. It is demonstrated that the traditional one-dimensional models of internal rotation for the molecules studied have only limited applicability. The reason is the strong kinematic interaction of the rotation of the CHO group and out-of-plane CHO deformation that is realized for the molecules under consideration. The computational procedure based on the two-dimensional approximation is considered for low lying vibrational states as more adequate to the problem. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Generation of tunable laser sidebands in the far-infrared region

    NASA Technical Reports Server (NTRS)

    Farhoomand, J.; Frerking, M. A.; Pickett, H. M.; Blake, G. A.

    1985-01-01

    In recent years, several techniques have been developed for the generation of tunable coherent radiation at submillimeter and far-infrared (FIR) wavelengths. The harmonic generation of conventional microwave sources has made it possible to produce spectrometers capable of continuous operation to above 1000 GHz. However, the sensitivity of such instruments drops rapidly with frequency. For this reason, a great deal of attention is given to laser-based methods, which could cover the entire FIR region. Tunable FIR radiation (approximately 100 nW) has been produced by mixing FIR molecular lasers and conventional microwave sources in both open and closed mixer mounts. The present investigation is concerned with improvements in this approach. These improvements provide approximately thirty times more output power than previous results.

  14. Viscous Rayleigh-Taylor instability in spherical geometry

    NASA Astrophysics Data System (ADS)

    Mikaelian, Karnig O.

    2016-02-01

    We consider viscous fluids in spherical geometry, a lighter fluid supporting a heavier one. Chandrasekhar [Q. J. Mech. Appl. Math. 8, 1 (1955), 10.1093/qjmam/8.1.1] analyzed this unstable configuration providing the equations needed to find, numerically, the exact growth rates for the ensuing Rayleigh-Taylor instability. He also derived an analytic but approximate solution. We point out a weakness in his approximate dispersion relation (DR) and offer a somewhat improved one. A third DR, based on transforming a planar DR into a spherical one, suffers no unphysical predictions and compares reasonably well with the exact work of Chandrasekhar and a more recent numerical analysis of the problem [Terrones and Carrara, Phys. Fluids 27, 054105 (2015), 10.1063/1.4921648].

  15. New active substances authorized in the United Kingdom between 1972 and 1994

    PubMed Central

    Jefferys, David B; Leakey, Diane; Lewis, John A; Payne, Sandra; Rawlins, Michael D

    1998-01-01

    Aims The study was undertaken to assemble a list of all new active medicinal substances authorised in the United Kingdom between 1972 and 1994; to assess whether the pattern of introductions had changed; and to examine withdrawal rates and the reasons for withdrawal. Methods The identities of those new active substances whose manufacturers had obtained Product Licences between 1972 and 1994 were sought from the Medicines Control Agency's product data-base. For each substance relevant information was retrieved including the year of granting the Product Licence, its therapeutic class, whether currently authorised (and, if not, reason for withdrawal), and its nature (chemical, biological etc.). Results The Medicines Control Agency's data-base was cross-checked against two other data-bases for completeness. A total of 583 new active substances (in 579 products) were found to have been authorised over the study period. The annual rates of authorisation varied widely (9 to 40 per year). Whilst there was no evidence for any overall change in the annual rates of authorising new chemical entities, there has been a trend for increasing numbers of new products of biological origin to be authorised in recent years. Fifty-nine of the 583 new active substances have been withdrawn (1 each for quality and efficacy, 22 for safety, and 35 for commercial reasons). Conclusions For reasons that are unclear there is marked heterogeneity in the annual rates of authorisation of new active substances. Their 10 year survival is approximately 88% with withdrawals being, predominantly, for commercial or safety reasons. This confirms the provisional nature of assessments about safety at the time when a new active substance is introduced into routine clinical practice, and emphasises the importance of pharmacovigilance. PMID:9491828

  16. Probabilistic Reasoning for Plan Robustness

    NASA Technical Reports Server (NTRS)

    Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.

    2005-01-01

A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing the projected system state can be simplified in some cases. Common and novel approximation methods are compared for an iterative repair planner in over-constrained and lightly constrained domains. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed during execution. The improvement is more significant for larger problems and problems with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.
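The parametric-distribution idea can be illustrated for the simplest case: serial activities with independent, normally distributed durations, where means and variances add, so the deadline-violation probability falls out of a single cdf evaluation. The activity numbers below are hypothetical.

```python
from statistics import NormalDist

# Each activity's duration as a parametric (normal) distribution:
# (mean seconds, standard deviation). Values are illustrative.
activities = [(30.0, 5.0), (45.0, 8.0), (20.0, 3.0)]

def total_duration(acts):
    """Serial plan of independent normal durations: means and variances add."""
    mean = sum(m for m, _ in acts)
    var = sum(s * s for _, s in acts)
    return NormalDist(mean, var ** 0.5)

def violation_probability(acts, deadline):
    """Probability that the serial plan overruns its deadline."""
    return 1.0 - total_duration(acts).cdf(deadline)

total = total_duration(activities)
print(round(total.mean, 1), round(total.stdev, 2))
print(round(violation_probability(activities, deadline=115.0), 3))
```

An iterative repair planner can flag a temporal constraint as violated when this probability exceeds the accepted risk level, rather than when the deterministic sum of mean durations overruns.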

  17. A reinforcement learning-based architecture for fuzzy logic control

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1992-01-01

    This paper introduces a new method for learning to refine a rule-based fuzzy logic controller. A reinforcement learning technique is used in conjunction with a multilayer neural network model of a fuzzy controller. The approximate reasoning based intelligent control (ARIC) architecture proposed here learns by updating its prediction of the physical system's behavior and fine tunes a control knowledge base. Its theory is related to Sutton's temporal difference (TD) method. Because ARIC has the advantage of using the control knowledge of an experienced operator and fine tuning it through the process of learning, it learns faster than systems that train networks from scratch. The approach is applied to a cart-pole balancing system.

  18. BRYNTRN: A baryon transport model

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.

    1989-01-01

    The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.

  19. Fuzzy Behavior-Based Navigation for Planetary

    NASA Technical Reports Server (NTRS)

    Tunstel, Edward; Danny, Harrison; Lippincott, Tanya; Jamshidi, Mo

    1997-01-01

Adaptive behavioral capabilities are necessary for robust rover navigation in unstructured and partially-mapped environments. A control approach is described which exploits the approximate reasoning capability of fuzzy logic to produce adaptive motion behavior. In particular, a behavior-based architecture for hierarchical fuzzy control of microrovers is presented. Its structure is described, as well as mechanisms of control decision-making which give rise to adaptive behavior. Control decisions for local navigation result from a consensus of recommendations offered only by behaviors that are applicable to current situations. Simulation predicts the navigation performance of a microrover in simplified Mars-analog terrain.
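The consensus mechanism described (blend only the recommendations of behaviors applicable to the current situation, weighted by their degree of applicability) can be sketched as follows. The membership functions, behaviors, and numbers are hypothetical, not the rule base of the cited architecture.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def navigate(goal_bearing_deg, obstacle_distance_m):
    """Blend steering recommendations from behaviors, weighted by the
    degree to which each behavior applies to the current situation."""
    behaviors = [
        # (applicability degree, recommended steering command in degrees)
        (triangular(obstacle_distance_m, 0.0, 0.5, 2.0), 45.0),            # avoid: veer away
        (triangular(obstacle_distance_m, 1.0, 5.0, 1e9), goal_bearing_deg),  # seek goal
    ]
    applicable = [(w, u) for w, u in behaviors if w > 0.0]
    total = sum(w for w, _ in applicable)
    # weighted-average defuzzification over applicable behaviors only
    return sum(w * u for w, u in applicable) / total

# Open terrain: only the goal-seeking behavior is applicable.
print(navigate(goal_bearing_deg=-10.0, obstacle_distance_m=8.0))
# Obstacle close ahead: avoidance takes over the blend.
print(navigate(goal_bearing_deg=-10.0, obstacle_distance_m=0.6))
```

Because inapplicable behaviors contribute zero weight, the controller's output degrades gracefully as the situation shifts between behavioral regimes.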

  20. Nature and magnitude of aromatic base stacking in DNA and RNA: Quantum chemistry, molecular mechanics, and experiment.

    PubMed

    Sponer, Jiří; Sponer, Judit E; Mládek, Arnošt; Jurečka, Petr; Banáš, Pavel; Otyepka, Michal

    2013-12-01

    Base stacking is a major interaction shaping up and stabilizing nucleic acids. During the last decades, base stacking has been extensively studied by experimental and theoretical methods. Advanced quantum-chemical calculations clarified that base stacking is a common interaction, which in the first approximation can be described as combination of the three most basic contributions to molecular interactions, namely, electrostatic interaction, London dispersion attraction and short-range repulsion. There is not any specific π-π energy term associated with the delocalized π electrons of the aromatic rings that cannot be described by the mentioned contributions. The base stacking can be rather reasonably approximated by simple molecular simulation methods based on well-calibrated common force fields although the force fields do not include nonadditivity of stacking, anisotropy of dispersion interactions, and some other effects. However, description of stacking association in condensed phase and understanding of the stacking role in biomolecules remain a difficult problem, as the net base stacking forces always act in a complex and context-specific environment. Moreover, the stacking forces are balanced with many other energy contributions. Differences in definition of stacking in experimental and theoretical studies are explained. Copyright © 2013 Wiley Periodicals, Inc.

  1. Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring

    NASA Technical Reports Server (NTRS)

    Hoebel, Louis J.

    1993-01-01

    The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM is the receiving of data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and if execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need be used and are shown to be useful for the anytime nature of PEM.

  2. Sparse approximation problem: how rapid simulated annealing succeeds and fails

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Kabashima, Yoshiyuki

    2016-03-01

Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
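An illustrative version of this setup: simulated annealing over the support of a k-sparse representation, with a planted solution and no noise. The problem sizes, cooling schedule, and move set below are illustrative choices, not the authors' exact experimental protocol.

```python
import math
import random

random.seed(0)

m, n, k = 8, 10, 2                      # m rows, n columns, k-sparse planted solution
A = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[2], x_true[7] = 1.5, -2.0        # planted sparse representation
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def residual(support):
    """Least-squares residual of y on the chosen columns (normal equations,
    solved by Gaussian elimination; fine for tiny k)."""
    cols = [[A[i][j] for i in range(m)] for j in support]
    g = [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols] for c1 in cols]
    b = [sum(c[i] * y[i] for i in range(m)) for c in cols]
    size = len(support)
    for p in range(size):
        for q in range(p + 1, size):
            f = g[q][p] / g[p][p]
            for r in range(p, size):
                g[q][r] -= f * g[p][r]
            b[q] -= f * b[p]
    coef = [0.0] * size
    for p in reversed(range(size)):
        coef[p] = (b[p] - sum(g[p][r] * coef[r] for r in range(p + 1, size))) / g[p][p]
    fit = [sum(c[i] * w for c, w in zip(cols, coef)) for i in range(m)]
    return sum((yi - fi) ** 2 for yi, fi in zip(y, fit))

def anneal(steps=4000, t0=1.0, t1=1e-3):
    """Metropolis moves that swap one column of the support, with a
    geometric cooling schedule; returns the best support found."""
    support = random.sample(range(n), k)
    e = residual(support)
    best, best_e = sorted(support), e
    for s in range(steps):
        t = t0 * (t1 / t0) ** (s / steps)
        new = support[:]
        new[random.randrange(k)] = random.choice(
            [j for j in range(n) if j not in support])
        e_new = residual(new)
        if e_new < e or random.random() < math.exp((e - e_new) / t):
            support, e = new, e_new
            if e < best_e:
                best, best_e = sorted(support), e
    return best, best_e

print(anneal())
```

In the noiseless regime the planted support is the unique zero-residual state, so a successful run ends with residual at machine precision; the metastability discussed in the abstract shows up, at larger sizes, as runs that freeze at a nonzero residual.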

  3. An approximation function for frequency constrained structural optimization

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.

    1989-01-01

    The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations for frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency and perhaps subject to other constraints such as stress, displacement, and gage size, as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisy requirements of an aircraft or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations or others may require the designer to insure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
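The Rayleigh quotient underlying the RQA can be illustrated on a two-degree-of-freedom spring-mass chain: for any trial mode shape phi, R(phi) = phi^T K phi / phi^T M phi upper-bounds the fundamental eigenvalue omega^2, and is exact when phi is a true mode. The matrices below are a standard textbook example, not taken from the report.

```python
def quadratic_form(A, v):
    # v^T A v for a small dense matrix
    return sum(v[i] * A[i][j] * v[j]
               for i in range(len(v)) for j in range(len(v)))

def rayleigh_quotient(K, M, phi):
    """R(phi) = phi^T K phi / phi^T M phi, an estimate of omega^2
    for a trial mode shape phi."""
    return quadratic_form(K, phi) / quadratic_form(M, phi)

# Two-DOF spring-mass chain, unit masses and stiffnesses: K phi = omega^2 M phi
K = [[2.0, -1.0], [-1.0, 1.0]]
M = [[1.0, 0.0], [0.0, 1.0]]

exact = (3 - 5 ** 0.5) / 2          # fundamental eigenvalue, ~0.382
trial = rayleigh_quotient(K, M, [1.0, 2.0])
print(round(trial, 4), round(exact, 4))
```

The quotient is stationary at eigenvectors, so a first-order-accurate mode shape yields a second-order-accurate frequency; this is why holding the mode shape fixed while the design variables change gives a high-quality, cheap approximation of the frequency constraint during optimization.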

  4. The frozen nucleon approximation in two-particle two-hole response functions

    DOE PAGES

    Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...

    2017-07-10

Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the “frozen” meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting “frozen nucleon approximation”, the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.

  5. The frozen nucleon approximation in two-particle two-hole response functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.

    Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the “frozen” meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting “frozen nucleon approximation”, the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.

  6. The 'Brick Wall' radio loss approximation and the performance of strong channel codes for deep space applications at high data rates

    NASA Technical Reports Server (NTRS)

    Shambayati, Shervin

    2001-01-01

    In order to evaluate the performance of strong channel codes in the presence of imperfect carrier phase tracking for residual carrier BPSK modulation, an approximate 'brick wall' model is developed in this paper which, at high data rates, is independent of the channel code type. It is shown that this approximation is reasonably accurate (less than 0.7 dB at low FERs for the (1784,1/6) code and less than 0.35 dB at low FERs for the (5920,1/6) code). Based on the approximation's accuracy, it is concluded that the effects of imperfect carrier tracking are largely independent of the channel code type for strong channel codes. Therefore, the advantage that one strong channel code has over another with perfect carrier tracking translates to nearly the same advantage under imperfect carrier tracking conditions. This allows link designers to incorporate the projected performance of strong channel codes into their design tables without worrying about their behavior in the face of imperfect carrier phase tracking.

  7. A variable vertical resolution weather model with an explicitly resolved planetary boundary layer

    NASA Technical Reports Server (NTRS)

    Helfand, H. M.

    1981-01-01

    A version of the fourth order weather model incorporating surface wind stress data from SEASAT A scatterometer observations is presented. The Monin-Obukhov similarity theory is used to relate winds at the top of the surface layer to surface wind stress. A reasonable approximation of surface fluxes of heat, moisture, and momentum is obtainable using this method. A Richardson number adjustment scheme based on the ideas of Chang is used to allow for turbulence effects.

  8. Multi-state trajectory approach to non-adiabatic dynamics: General formalism and the active state trajectory approximation

    NASA Astrophysics Data System (ADS)

    Tao, Guohua

    2017-07-01

    A general theoretical framework is derived for the recently developed multi-state trajectory (MST) approach from the time dependent Schrödinger equation, resulting in equations of motion for coupled nuclear-electronic dynamics equivalent to Hamilton dynamics or the Heisenberg equation based on a new multistate Meyer-Miller (MM) model. The derived MST formalism incorporates both diabatic and adiabatic representations as limiting cases and reduces to Ehrenfest or Born-Oppenheimer dynamics in the mean-field or the single-state limits, respectively. In the general multistate formalism, nuclear dynamics is represented in terms of a set of individual state-specific trajectories, while in the active state trajectory (AST) approximation, only one single nuclear trajectory on the active state is propagated with its augmented images running on all other states. The AST approximation combines the advantages of consistent nuclear-coupled electronic dynamics in the MM model and the single nuclear trajectory in the trajectory surface hopping (TSH) treatment and therefore may provide a potential alternative to both Ehrenfest and TSH methods. The resulting algorithm features a consistent description of coupled electronic-nuclear dynamics and excellent numerical stability. The implementation of the MST approach to several benchmark systems involving multiple nonadiabatic transitions and conical intersections shows reasonably good agreement with exact quantum calculations, and the results in both representations are similar in accuracy. The AST treatment also reproduces the exact results reasonably well, sometimes even quantitatively, with a better performance in the adiabatic representation.

  9. Lognormal Approximations of Fault Tree Uncertainty Distributions.

    PubMed

    El-Shanawany, Ashraf Ben; Ardron, Keith H; Walker, Simon P

    2018-01-26

    Fault trees are used in reliability modeling to create logical models of fault combinations that can lead to undesirable events. The output of a fault tree analysis (the top event probability) is expressed in terms of the failure probabilities of basic events that are input to the model. Typically, the basic event probabilities are not known exactly, but are modeled as probability distributions; therefore, the top event probability is also represented as an uncertainty distribution. Monte Carlo methods are generally used for evaluating the uncertainty distribution, but such calculations are computationally intensive and do not readily reveal the dominant contributors to the uncertainty. In this article, a closed-form approximation for the fault tree top event uncertainty distribution is developed, which is applicable when the uncertainties in the basic events of the model are lognormally distributed. The results of the approximate method are compared with results from two sampling-based methods: namely, the Monte Carlo method and the Wilks method based on order statistics. It is shown that the closed-form expression can provide a reasonable approximation to results obtained by Monte Carlo sampling, without incurring the computational expense. The Wilks method is found to be a useful means of providing an upper bound for the percentiles of the uncertainty distribution while being computationally inexpensive compared with full Monte Carlo sampling. The lognormal approximation method and Wilks's method appear to be attractive, practical alternatives for the evaluation of uncertainty in the output of fault trees and similar multilinear models. © 2018 Society for Risk Analysis.
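
    The closed-form idea can be sketched for the simplest case: a rare-event OR gate whose basic-event probabilities are lognormal. The Fenton-Wilkinson moment-matching step below is a standard way to fit a single lognormal to a sum of lognormals; it is an illustrative stand-in for the article's derivation, not its exact method, and all parameter values are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Three basic events with lognormal probabilities: median p_i, error factor EF
    medians = np.array([1e-3, 5e-4, 2e-3])
    EF = 3.0                                  # 95th percentile / median
    sigma = np.log(EF) / 1.645                # lognormal shape parameter
    mu = np.log(medians)

    # Rare-event OR gate: P_top is approximately the sum of basic-event probabilities.
    # Fenton-Wilkinson: match mean and variance of the sum with a single lognormal.
    mean_i = np.exp(mu + sigma**2 / 2)
    var_i = (np.exp(sigma**2) - 1) * np.exp(2*mu + sigma**2)
    m, v = mean_i.sum(), var_i.sum()
    sig_fw = np.sqrt(np.log(1 + v / m**2))
    mu_fw = np.log(m) - sig_fw**2 / 2

    # Monte Carlo reference distribution of the top event probability
    samples = np.exp(mu + sigma * rng.standard_normal((200_000, 3))).sum(axis=1)
    p95_mc = np.quantile(samples, 0.95)
    p95_fw = np.exp(mu_fw + 1.645 * sig_fw)   # closed-form 95th percentile
    print(p95_mc, p95_fw)
    ```

    The closed-form percentile is available instantly and, for moderate lognormal spreads like this one, tracks the Monte Carlo result closely.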

  10. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

    Stochastic Marked Graphs are a concurrent decision-free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that the response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better. Delay equivalence often fails to converge, while flow equivalent aggregation can lead to potentially bad results if a strong dependence of the mean completion time on the interarrival process exists.

  11. Counterfactual reasoning: From childhood to adulthood

    PubMed Central

    Rafetseder, Eva; Schwitalla, Maria; Perner, Josef

    2013-01-01

    The objective of this study was to describe the developmental progression of counterfactual reasoning from childhood to adulthood. In contrast to the traditional view, it was recently reported by Rafetseder and colleagues that even a majority of 6-year-old children do not engage in counterfactual reasoning when asked counterfactual questions (Child Development, 2010, Vol. 81, pp. 376–389). By continuing to use the same method, the main result of the current Study 1 was that performance of the 9- to 11-year-olds was comparable to that of the 6-year-olds, whereas the 12- to 14-year-olds approximated adult performance. Study 2, using an intuitively simpler task based on Harris and colleagues (Cognition, 1996, Vol. 61, pp. 233–259), resulted in a similar conclusion, specifically that the ability to apply counterfactual reasoning is not fully developed in all children before 12 years of age. We conclude that children who failed our tasks seem to lack an understanding of what needs to be changed (events that are causally dependent on the counterfactual assumption) and what needs to be left unchanged and so needs to be kept as it actually happened. Alternative explanations, particularly executive functioning, are discussed in detail. PMID:23219156

  12. Time-Dependent Hartree-Fock Approach to Nuclear Pasta at Finite Temperature

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-03-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature.

  13. Analytical approximations for the oscillators with anti-symmetric quadratic nonlinearity

    NASA Astrophysics Data System (ADS)

    Alal Hosen, Md.; Chowdhury, M. S. H.; Yeakub Ali, Mohammad; Faris Ismail, Ahmad

    2017-12-01

    A second-order ordinary differential equation involving an anti-symmetric quadratic nonlinearity changes sign with the direction of motion, and the oscillators are therefore assumed to oscillate differently in the positive and negative directions. For this reason, the Harmonic Balance Method (HBM) cannot be directly applied. The main purpose of the present paper is to propose an analytical approximation technique based on the HBM for obtaining approximate angular frequencies and the corresponding periodic solutions of oscillators with an anti-symmetric quadratic nonlinearity. After applying the HBM, a set of complicated nonlinear algebraic equations is found. An analytical approach is not always fruitful for solving such nonlinear algebraic equations. In this article, two small parameters are found for which the power series solution produces the desired results. Moreover, the amplitude-frequency relationship has also been determined in a novel analytical way. The presented technique gives excellent results as compared with the corresponding numerical results and is better than the existing ones.
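
    For the prototypical anti-symmetric quadratic oscillator u'' + u|u| = 0, even a one-term harmonic balance gives a usable amplitude-dependent angular frequency, which conveys the flavor of the method (the paper's power-series treatment with two small parameters is more elaborate than this sketch):

    ```python
    import numpy as np

    # Anti-symmetric quadratic oscillator: u'' + u|u| = 0.  The restoring force
    # is odd, so a single-harmonic ansatz u = A*cos(w*t) is a fair first guess.
    A = 1.0                                   # oscillation amplitude

    # One-term harmonic balance: the fundamental Fourier coefficient of
    # cos(th)*|cos(th)| is 8/(3*pi), giving w^2 = 8*A/(3*pi).
    w_hbm = np.sqrt(8.0 * A / (3.0 * np.pi))

    # Reference frequency from direct RK4 integration: start at u = A, v = 0
    # and time the first quarter period (u falling to zero).
    def f(state):
        u, v = state
        return np.array([v, -u * abs(u)])

    dt, t, state = 1e-4, 0.0, np.array([A, 0.0])
    while state[0] > 0.0:
        k1 = f(state)
        k2 = f(state + 0.5*dt*k1)
        k3 = f(state + 0.5*dt*k2)
        k4 = f(state + dt*k3)
        state = state + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
        t += dt
    w_num = 2.0 * np.pi / (4.0 * t)           # quarter period -> angular frequency
    print(w_hbm, w_num)
    ```

    For this oscillator the one-term estimate is already within about one percent of the numerically integrated frequency; the difficulty the paper addresses arises when higher harmonics are retained and the balance equations become strongly nonlinear.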

  14. The TRMM Multi-satellite Precipitation Analysis (TMPA): Quasi-Global Precipitation Estimates at Fine Scales

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Gu, Guojun; Nelkin, Eric J.; Bowman, Kenneth P.; Stocker, Erich; Wolff, David B.

    2006-01-01

    The TRMM Multi-satellite Precipitation Analysis (TMPA) provides a calibration-based sequential scheme for combining multiple precipitation estimates from satellites, as well as gauge analyses where feasible, at fine scales (0.25 degrees x 0.25 degrees and 3-hourly). It is available both after and in real time, based on calibration by the TRMM Combined Instrument and TRMM Microwave Imager precipitation products, respectively. Only the after-real-time product incorporates gauge data at present. The data set covers the latitude band 50 degrees N-S for the period 1998 to the delayed present. Early validation results are as follows: The TMPA provides reasonable performance at monthly scales, although it is shown to have a precipitation-rate-dependent low bias due to a lack of sensitivity to low precipitation rates in one of the input products (based on AMSU-B). At finer scales the TMPA is successful at approximately reproducing the surface-observation-based histogram of precipitation, as well as reasonably detecting large daily events. The TMPA, however, has lower skill in correctly specifying moderate and light event amounts on short time intervals, in common with other fine-scale estimators. Examples are provided of a flood event and diurnal cycle determination.

  15. Incorporation of varying types of temporal data in a neural network

    NASA Technical Reports Server (NTRS)

    Cohen, M. E.; Hudson, D. L.

    1992-01-01

    Most neural network models do not specifically deal with temporal data. Handling of these variables is complicated by the different uses to which temporal data are put, depending on the application. Even within the same application, temporal variables are often used in a number of different ways. In this paper, types of temporal data are discussed, along with their implications for approximate reasoning. Methods for integrating approximate temporal reasoning into existing neural network structures are presented. These methods are illustrated in a medical application for diagnosis of graft-versus-host disease which requires the use of several types of temporal data.

  16. An equation of state for polyurea aerogel based on multi-shock response

    NASA Astrophysics Data System (ADS)

    Aslam, T. D.; Gustavsen, R. L.; Bartram, B. D.

    2014-05-01

    The equation of state (EOS) of polyurea aerogel (PUA) is examined through both single-shock Hugoniot data and more recent multi-shock compression experiments performed on the LANL 2-stage gas gun. A simple conservative Lagrangian numerical scheme, utilizing total variation diminishing (TVD) interpolation and an approximate Riemann solver, will be presented, as well as the methodology of calibration. It will be demonstrated that a p-α model based on a Mie-Gruneisen fitting form for the solid material can reasonably replicate the multi-shock compression response at a variety of initial densities; such a methodology will be presented for a commercially available polyurea aerogel.
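
    A minimal sketch of a Mie-Gruneisen fitting form of the kind mentioned above, referenced to a linear Us-up Hugoniot, is shown below. The parameter values are generic illustrative numbers, not the polyurea-aerogel calibration from this work:

    ```python
    import numpy as np

    # Mie-Gruneisen EOS referenced to the principal Hugoniot, with a linear
    # Us-up shock relation Us = c0 + s*up and Gamma*rho held constant.
    rho0 = 1.0e3      # reference density [kg/m^3] (illustrative)
    c0 = 2.0e3        # bulk sound speed [m/s] (illustrative)
    s = 1.5           # Us-up slope (illustrative)
    gamma0 = 1.0      # Gruneisen parameter at rho0 (illustrative)

    def hugoniot(eta):
        """Hugoniot pressure and specific energy at compression eta = 1 - rho0/rho."""
        p_h = rho0 * c0**2 * eta / (1.0 - s * eta)**2
        e_h = p_h * eta / (2.0 * rho0)        # Rankine-Hugoniot energy jump
        return p_h, e_h

    def pressure(rho, e):
        """Off-Hugoniot pressure p = p_H + Gamma0*rho0*(e - e_H)."""
        eta = 1.0 - rho0 / rho
        p_h, e_h = hugoniot(eta)
        return p_h + gamma0 * rho0 * (e - e_h)

    # On the Hugoniot the thermal correction vanishes; extra heating raises p.
    eta = 0.2
    p_h, e_h = hugoniot(eta)
    print(p_h, pressure(rho0 / (1.0 - eta), e_h))
    ```

    A p-α porosity model wraps a form like this by distending the reference density and relaxing the distension α toward 1 with pressure, which is what lets one fit aerogels at several initial densities with a single solid EOS.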

  17. Polynomial Approximation of Functions: Historical Perspective and New Tools

    ERIC Educational Resources Information Center

    Kidron, Ivy

    2003-01-01

    This paper examines the effect of applying symbolic computation and graphics to enhance students' ability to move from a visual interpretation of mathematical concepts to formal reasoning. The mathematics topics involved, Approximation and Interpolation, were taught according to their historical development, and the students tried to follow the…

  18. Metrics for Labeled Markov Systems

    NASA Technical Reports Server (NTRS)

    Desharnais, Josee; Jagadeesan, Radha; Gupta, Vineet; Panangaden, Prakash

    1999-01-01

    Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various transitions. This paper addresses contextual reasoning principles for reasoning about more robust notions of "approximate" equivalence between concurrent interacting probabilistic systems. The main results are as follows: We develop a family of metrics between partial labeled Markov chains to formalize the notion of distance between processes. We show that processes at distance zero are bisimilar. We describe a decision procedure to compute the distance between two processes. We show that reasoning about approximate equivalence can be done compositionally by showing that process combinators do not increase distance. We introduce an asymptotic metric to capture asymptotic properties of Markov chains, and show that parallel composition does not increase asymptotic distance.

  19. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    NASA Astrophysics Data System (ADS)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; Wildey, T. M.; Pawlowski, R. P.

    2016-09-01

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, and of the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models, is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier-Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  20. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J.N., E-mail: jnshadi@sandia.gov; Department of Mathematics and Statistics, University of New Mexico; Smith, T.M.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. In this respect the understanding of numerical error, and of the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models, is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In this study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. Initial results are presented that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  1. Modelling default and likelihood reasoning as probabilistic reasoning

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  2. Accelerator-based epithermal neutron sources for boron neutron capture therapy of brain tumors.

    PubMed

    Blue, Thomas E; Yanch, Jacquelyn C

    2003-01-01

    This paper reviews the development of low-energy light ion accelerator-based neutron sources (ABNSs) for the treatment of brain tumors through an intact scalp and skull using boron neutron capture therapy (BNCT). A major advantage of an ABNS for BNCT over reactor-based neutron sources is the potential for siting within a hospital. Consequently, light-ion accelerators that are injectors to larger machines in high-energy physics facilities are not considered. An ABNS for BNCT is composed of: (1) the accelerator hardware for producing a high-current charged particle beam, (2) an appropriate neutron-producing target and target heat removal system (HRS), and (3) a moderator/reflector assembly to render the flux energy spectrum of neutrons produced in the target suitable for patient irradiation. As a consequence of the efforts of researchers throughout the world, progress has been made on the design, manufacture, and testing of these three major components. Although an ABNS facility has not yet been built that optimally assembles these three components, the feasibility of clinically useful ABNSs has been clearly established. Both electrostatic and radio frequency linear accelerators of reasonable cost (approximately $1.5 million) appear to be capable of producing charged particle beams, with combinations of accelerated particle energy (a few MeV) and beam current (approximately 10 mA), that are suitable for a hospital-based ABNS for BNCT. The specific accelerator performance requirements depend upon the charged particle reaction by which neutrons are produced in the target and the clinical requirements for neutron field quality and intensity. The accelerator performance requirements are more demanding for beryllium than for lithium as a target. However, beryllium targets are more easily cooled. The accelerator performance requirements are also more demanding for greater neutron field quality and intensity.
    Target HRSs based on submerged-jet impingement and the use of microchannels have emerged as viable target cooling options. Neutron fields for reactor-based neutron sources provide an obvious basis of comparison for ABNS field quality. This paper compares Monte Carlo calculations of neutron field quality for an ABNS and an idealized standard reactor neutron field (ISRNF). The comparison shows that with lithium as a target, an ABNS can create a neutron field with a field quality that is significantly better (by a factor of approximately 1.2, as judged by the relative biological effectiveness (RBE)-dose that can be delivered to a tumor at a depth of 6 cm) than that for the ISRNF. Also, for a beam current of 10 mA, the treatment time is calculated to be reasonable (approximately 30 min) for the boron concentrations that have been assumed.

  3. Thermally Driven One-Fluid Electron-Proton Solar Wind: Eight-Moment Approximation

    NASA Astrophysics Data System (ADS)

    Olsen, Espen Lyngdal; Leer, Egil

    1996-05-01

    In an effort to improve the "classical" solar wind model, we study an eight-moment approximation hydrodynamic solar wind model, in which the full conservation equation for the heat conductive flux is solved together with the conservation equations for mass, momentum, and energy. We consider two different cases: In one model the energy flux needed to drive the solar wind is supplied as heat flux from a hot coronal base, where both the density and temperature are specified. In the other model, the corona is heated. In that model, the coronal base density and temperature are also specified, but the temperature increases outward from the coronal base due to a specified energy flux that is dissipated in the corona. The eight-moment approximation solutions are compared with the results from a "classical" solar wind model in which the collision-dominated gas expression for the heat conductive flux is used. It is shown that the "classical" expression for the heat conductive flux is generally not valid in the solar wind. In collisionless regions of the flow, the eight-moment approximation gives a larger thermalization of the heat conductive flux than the models using the collision-dominated gas approximation for the heat flux, but the heat flux is still larger than the "saturation heat flux." This leads to a breakdown of the electron distribution function, which turns negative in the collisionless region of the flow. By increasing the interaction between the electrons, the heat flux is reduced, and a reasonable shape is obtained on the distribution function. By solving the full set of equations consistent with the eight-moment distribution function for the electrons, we are thus able to draw inferences about the validity of the eight-moment description of the solar wind as well as the validity of the very commonly used collision-dominated gas approximation for the heat conductive flux in the solar wind.

  4. Model Based Reasoning by Introductory Students When Analyzing Earth Systems and Societal Challenges

    NASA Astrophysics Data System (ADS)

    Holder, L. N.; Herbert, B. E.

    2014-12-01

    Understanding how students use their conceptual models to reason about societal challenges involving societal issues such as natural hazard risk assessment, environmental policy and management, and energy resources can improve instructional activity design that directly impacts student motivation and literacy. To address this question, we created four laboratory exercises for an introductory physical geology course at Texas A&M University that engages students in authentic scientific practices by using real world problems and issues that affect societies based on the theory of situated cognition. Our case-study design allows us to investigate the various ways that students utilize model based reasoning to identify and propose solutions to societally relevant issues. In each of the four interventions, approximately 60 students in three sections of introductory physical geology were expected to represent and evaluate scientific data, make evidence-based claims about the data trends, use those claims to express conceptual models, and use their models to analyze societal challenges. Throughout each step of the laboratory exercise students were asked to justify their claims, models, and data representations using evidence and through the use of argumentation with peers. Cognitive apprenticeship was the foundation for instruction used to scaffold students so that in the first exercise they are given a partially completed model and in the last exercise students are asked to generate a conceptual model on their own. Student artifacts, including representation of earth systems, representation of scientific data, verbal and written explanations of models and scientific arguments, and written solutions to specific societal issues or environmental problems surrounding earth systems, were analyzed through the use of a rubric that modeled authentic expertise and students were sorted into three categories. 
Written artifacts were examined to identify student argumentation and justifications of solutions through the use of evidence and reasoning. Higher scoring students justified their solutions through evidence-based claims, while lower scoring students typically justified their solutions using anecdotal evidence, emotional ideologies, and naive and incomplete conceptions of earth systems.

  5. Modelling default and likelihood reasoning as probabilistic

    NASA Technical Reports Server (NTRS)

    Buntine, Wray

    1990-01-01

    A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. 'Likely' and 'by default' are in fact treated as duals in the same sense as 'possibility' and 'necessity'. To model these four forms probabilistically, a logic QDP and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlights their approximate nature, the dualities, and the need for complementary reasoning about relevance.

  6. Impact as a general cause of extinction: A feasibility test

    NASA Technical Reports Server (NTRS)

    Raup, David M.

    1988-01-01

    Large body impact has been implicated as the possible cause of several extinction events. This is entirely plausible if one accepts two propositions: (1) that impacts of large comets and asteroids produce environmental effects severe enough to cause significant species extinctions and (2) that the estimates of comet and asteroid flux for the Phanerozoic are approximately correct. A reasonable next step is to investigate the possibility that impact could be a significant factor in the broader Phanerozoic extinction record, not limited merely to a few events of mass extinction. Monte Carlo simulation experiments based on existing flux estimates and reasonable predictions of the relationship between bolide diameter and extinction are discussed. The simulation results raise the serious possibility that large body impact may be a more pervasive factor in extinction than has been assumed heretofore. At the very least, the experiments show that the comet and asteroid flux estimates combined with a reasonable kill curve produce a plausible extinction record, complete with occasional mass extinctions and the irregular, lower-intensity extinctions commonly called background extinction.
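
    The flavor of such a simulation can be sketched with a Poisson impact flux, a power-law size distribution, and an assumed S-shaped kill curve. All parameter choices below are invented for illustration and are not Raup's calibrated values:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative flux: impactor diameters follow a power law N(>D) ~ D^-2,
    # and an S-shaped "kill curve" maps diameter to the fraction of species lost.
    def kill_fraction(d_km):
        return 1.0 / (1.0 + (10.0 / d_km)**3)   # ~50% kill at a 10 km bolide

    n_stages = 1000                # geologic stages simulated
    rate = 2.0                     # mean impacts (> 1 km) per stage (assumed)
    extinction = np.zeros(n_stages)
    for i in range(n_stages):
        n = rng.poisson(rate)
        if n:
            # inverse-transform sample of D > 1 km from N(>D) ~ D^-2
            d = 1.0 * rng.random(n) ** (-0.5)
            # independent kills within a stage combine as complements
            extinction[i] = 1.0 - np.prod(1.0 - kill_fraction(d))

    print(extinction.mean(), extinction.max())
    ```

    Even this toy version reproduces the qualitative pattern the abstract describes: a low-level background of small extinctions punctuated by rare large events when a big bolide happens to arrive.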

  7. Risk-Based Prioritization of Research for Aviation Security Using Logic-Evolved Decision Analysis

    NASA Technical Reports Server (NTRS)

    Eisenhawer, S. W.; Bott, T. F.; Sorokach, M. R.; Jones, F. P.; Foggia, J. R.

    2004-01-01

    The National Aeronautics and Space Administration is developing advanced technologies to reduce terrorist risk for the air transportation system. Decision support tools are needed to help allocate assets to the most promising research. An approach to rank ordering technologies (using logic-evolved decision analysis), with risk reduction as the metric, is presented. The development of a spanning set of scenarios using a logic-gate tree is described. Baseline risk for these scenarios is evaluated with an approximate reasoning model. Illustrative risk and risk reduction results are presented.

  8. VISAR Analysis in the Frequency Domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, D. H.; Specht, P.

    2017-05-18

    VISAR measurements are typically analyzed in the time domain, where velocity is approximately proportional to fringe shift. Moving to the frequency domain clarifies the limitations of this approximation and suggests several improvements. For example, optical dispersion preserves high-frequency information, so a zero-dispersion (air delay) interferometer does not provide optimal time resolution. Combined VISAR measurements can also improve time resolution. With adequate bandwidth and reasonable noise levels, it is quite possible to achieve better resolution than the VISAR approximation allows.

  9. VAPOR-PHASE TRANSPORT OF TRICHLOROETHENE IN AN INTERMEDIATE-SCALE VADOSE-ZONE SYSTEM: RETENTION PROCESSES AND TRACER-BASED PREDICTION

    PubMed Central

    Costanza-Robinson, Molly S.; Carlson, Tyson D.; Brusseau, Mark L.

    2013-01-01

    Gas-phase miscible-displacement experiments were conducted using a large weighing lysimeter to evaluate retention processes for volatile organic compounds (VOCs) in water-unsaturated (vadose-zone) systems, and to test the utility of gas-phase tracers for predicting VOC retardation. Trichloroethene (TCE) served as a model VOC, while trichlorofluoromethane (CFM) and heptane were used as partitioning tracers to independently characterize retention by water and the air-water interface, respectively. Retardation factors for TCE ranged between 1.9 and 3.5, depending on water content. The results indicate that dissolution into the bulk water was the primary retention mechanism for TCE under all conditions studied, contributing approximately two thirds of the total measured retention. Accumulation at the air-water interface comprised a significant fraction of the observed retention for all experiments, with an average contribution of approximately 24%. Sorption to the solid phase contributed approximately 10% to retention. Water contents and air-water interfacial areas estimated based on the CFM and heptane tracer data, respectively, were similar to independently measured values. Retardation factors for TCE predicted using the partitioning-tracer data were in reasonable agreement with the measured values. These results suggest that gas-phase tracer tests hold promise for characterizing the retention and transport of VOCs in the vadose zone. PMID:23333418

  10. Origin of spin reorientation transitions in antiferromagnetic MnPt-based alloys

    NASA Astrophysics Data System (ADS)

    Chang, P.-H.; Zhuravlev, I. A.; Belashchenko, K. D.

    2018-04-01

    Antiferromagnetic MnPt exhibits a spin reorientation transition (SRT) as a function of temperature, and off-stoichiometric Mn-Pt alloys also display SRTs as a function of concentration. The magnetocrystalline anisotropy in these alloys is studied using first-principles calculations based on the coherent potential approximation and the disordered local moment method. The anisotropy is fairly small and sensitive to the variations in composition and temperature due to the cancellation of large contributions from different parts of the Brillouin zone. Concentration and temperature-driven SRTs are found in reasonable agreement with experimental data. Contributions from specific band-structure features are identified and used to explain the origin of the SRTs.

  11. A machine independent expert system for diagnosing environmentally induced spacecraft anomalies

    NASA Technical Reports Server (NTRS)

    Rolincik, Mark J.

    1991-01-01

    A new rule-based, machine-independent analytical tool for diagnosing spacecraft anomalies, the EnviroNET expert system, was developed. Expert systems provide an effective method for storing knowledge, allow computers to sift through large amounts of data to pinpoint significant parts, and, most importantly, use heuristics in addition to algorithms, which allows approximate reasoning and inference and the ability to attack problems that are not rigidly defined. The EnviroNET expert system knowledge base currently contains over two hundred rules, and links to databases which include past environmental data, satellite data, and previously known anomalies. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose.

  12. Discrimination of Mixed Taste Solutions using Ultrasonic Wave and Soft Computing

    NASA Astrophysics Data System (ADS)

    Kojima, Yohichiro; Kimura, Futoshi; Mikami, Tsuyoshi; Kitama, Masataka

    In this study, the ultrasonic acoustic properties of mixed taste solutions were investigated, and the possibility of taste sensing based on the acoustical properties obtained was examined. In previous studies, properties of solutions were discriminated based on the sound velocity, amplitude, and frequency characteristics of ultrasonic waves propagating through the five basic taste solutions and marketed beverages. However, to make this method applicable to beverages that contain many taste substances, further studies are required. In this paper, the waveform of an ultrasonic wave with a frequency of approximately 5 MHz propagating through mixed solutions composed of sweet and salty substances was measured. As a result, differences among solutions were clearly observed as differences in their properties. Furthermore, these mixed solutions were discriminated by a self-organizing neural network. The ratio of volumes in the mixed solutions was estimated by a distance-type fuzzy reasoning method. Therefore, the possibility of taste sensing was demonstrated using ultrasonic acoustic properties and soft computing techniques, such as the self-organizing neural network and the distance-type fuzzy reasoning method.

  13. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same mechanism: they likewise emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
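
    The hallmark of sloppiness, a Fisher information spectrum spanning many decades, is easy to reproduce with a toy multi-exponential model. The rates and sample times below are assumptions chosen for illustration, not systems from the paper:

```python
import numpy as np

def fisher_information(rates, times):
    """FIM (J^T J) for a toy model y(t) = sum_i exp(-k_i t) with unit noise,
    where J is the analytic sensitivity matrix dy/dk_i = -t * exp(-k_i t).
    Illustrative of 'sloppy' eigenvalue spectra only."""
    J = np.array([[-t * np.exp(-k * t) for k in rates] for t in times])
    return J.T @ J

times = np.linspace(0.1, 5, 50)
fim = fisher_information([1.0, 1.1, 1.2], times)
eigs = np.sort(np.linalg.eigvalsh(fim))[::-1]
# nearly degenerate rates give an eigenvalue hierarchy spanning many decades
span = eigs[0] / eigs[-1]
```

    For these nearly degenerate rates the ratio eigs[0]/eigs[-1] exceeds several orders of magnitude: only one or two parameter combinations are well constrained, the rest are "sloppy" directions.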

  14. A Bayesian Framework for False Belief Reasoning in Children: A Rational Integration of Theory-Theory and Simulation Theory

    PubMed Central

    Asakura, Nobuhiko; Inui, Toshio

    2016-01-01

    Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities. PMID:28082941
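
    The model's headline prediction above reduces to a one-line calculation; the example scores below are hypothetical, not data from the paper:

```python
def predict_false_belief(p_diverse_beliefs, p_knowledge_access):
    """Multiplicative prediction: P(correct on the false belief task) is
    approximated by the product of the proportions correct on the two
    earlier-developing tasks. A sketch of the stated prediction only; the
    paper's full model is a causal Bayesian network."""
    return p_diverse_beliefs * p_knowledge_access

# e.g., a hypothetical group scoring 0.9 on diverse beliefs and 0.8 on
# knowledge access is predicted to score about 0.72 on false belief
p = predict_false_belief(0.9, 0.8)
```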

  15. A Bayesian Framework for False Belief Reasoning in Children: A Rational Integration of Theory-Theory and Simulation Theory.

    PubMed

    Asakura, Nobuhiko; Inui, Toshio

    2016-01-01

    Two apparently contrasting theories have been proposed to account for the development of children's theory of mind (ToM): theory-theory and simulation theory. We present a Bayesian framework that rationally integrates both theories for false belief reasoning. This framework exploits two internal models for predicting the belief states of others: one of self and one of others. These internal models are responsible for simulation-based and theory-based reasoning, respectively. The framework further takes into account empirical studies of a developmental ToM scale (e.g., Wellman and Liu, 2004): developmental progressions of various mental state understandings leading up to false belief understanding. By representing the internal models and their interactions as a causal Bayesian network, we formalize the model of children's false belief reasoning as probabilistic computations on the Bayesian network. This model probabilistically weighs and combines the two internal models and predicts children's false belief ability as a multiplicative effect of their early-developed abilities to understand the mental concepts of diverse beliefs and knowledge access. Specifically, the model predicts that children's proportion of correct responses on a false belief task can be closely approximated as the product of their proportions correct on the diverse belief and knowledge access tasks. To validate this prediction, we illustrate that our model provides good fits to a variety of ToM scale data for preschool children. We discuss the implications and extensions of our model for a deeper understanding of developmental progressions of children's ToM abilities.

  16. Reasons for and against receiving influenza vaccination in a working age population in Japan: a national cross-sectional study.

    PubMed

    Iwasa, Tsubasa; Wada, Koji

    2013-07-12

    To improve influenza vaccination coverage in the working age population, it is necessary to understand the current status and awareness of influenza vaccination. This study aimed to determine influenza vaccination coverage in Japan and the reasons for receiving the vaccine or not. An anonymous internet-based survey was performed in September 2011. Our target study size was 3,000 participants between 20 and 69 years of age, with approximately 300 men and 300 women in each of five age groups (20-29, 30-39, 40-49, 50-59, and 60-69). We asked about influenza vaccine uptake in the previous year and the reasons for receiving the vaccination or not. There were 3,129 respondents, of whom 24.2% of males and 27.6% of females received influenza vaccination between October 2010 and March 2011. Among those who were vaccinated, the main reasons for receiving the influenza vaccine were "Wanted to avoid becoming infected with influenza virus" (males: 84.0%; females: 82.6%) and "Even if infected with influenza, wanted to prevent the symptoms from becoming serious" (males: 60.7%; females: 66.4%). Among those not vaccinated, the most frequent reasons for not receiving the influenza vaccine included "No time to visit a medical institution" (males: 32.0%; females: 22.4%) and "Unlikely to become infected with influenza" (males: 25.1%; females: 22.7%). The reasons for receiving the influenza vaccine varied between age groups and between sexes. To heighten awareness of influenza vaccination among unvaccinated working age participants, different intervention approaches according to sex and age group may be necessary.

  17. Transport of phase space densities through tetrahedral meshes using discrete flow mapping

    NASA Astrophysics Data System (ADS)

    Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor

    2017-01-01

    Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite-dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low-order local approximations on tetrahedral faces in the position coordinate and high-order orthogonal polynomial expansions in momentum space.

  18. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T_±^ab(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T_±^ab(A) to the full T_±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  19. Sequence-based protein superfamily classification using computational intelligence techniques: a review.

    PubMed

    Vipsita, Swati; Rath, Santanu Kumar

    2015-01-01

    Protein superfamily classification deals with the problem of predicting the family membership of a newly discovered amino acid sequence. Although many alignment-based methods have already been developed by previous researchers, the present trend demands the application of computational intelligence techniques. As there is exponential growth in the size of biological databases, retrieval and inference of essential knowledge in the biological domain become very cumbersome tasks. This problem can be handled more easily using intelligent techniques due to their tolerance for imprecision, uncertainty, approximate reasoning, and partial truth. This paper discusses the various global and local features extracted from full-length protein sequences which are used for the approximation and generalisation of the classifier. The various parameters used for evaluating the performance of the classifiers are also discussed. This review article can therefore point present researchers toward improvements over the existing methods.

  20. Iterative CT reconstruction using coordinate descent with ordered subsets of data

    NASA Astrophysics Data System (ADS)

    Noo, F.; Hahn, K.; Schöndube, H.; Stierstorfer, K.

    2016-04-01

    Image reconstruction based on iterative minimization of a penalized weighted least-squares criterion has become an important topic of research in X-ray computed tomography. This topic is motivated by increasing evidence that such a formalism may enable a significant reduction in dose imparted to the patient while maintaining or improving image quality. One important issue associated with this iterative image reconstruction concept is slow convergence and the associated computational effort. For this reason, there is interest in finding methods that produce approximate versions of the targeted image with a small number of iterations and an acceptable level of discrepancy. We introduce here a novel method to produce such approximations: ordered subsets in combination with iterative coordinate descent. Preliminary results demonstrate that this method can produce, within 10 iterations and using only a constant image as the initial condition, satisfactory reconstructions that retain the noise properties of the targeted image.
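
    The combination of ordered subsets with coordinate descent can be sketched on a plain least-squares problem (the CT criterion is penalized and weighted, and the matrix sizes and subset count here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def os_icd(A, b, n_subsets=4, n_iters=10):
    """Toy ordered-subsets iterative coordinate descent for min ||A x - b||^2.
    Each sub-iteration sweeps every coordinate using only one interleaved
    subset of the rows (data), starting from a constant (zero) image."""
    m, n = A.shape
    x = np.zeros(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:
            As, bs = A[rows], b[rows]
            r = bs - As @ x                  # residual on this subset
            for j in range(n):
                aj = As[:, j]
                denom = aj @ aj
                if denom > 0:
                    step = (aj @ r) / denom  # exact 1-D minimization
                    x[j] += step
                    r -= step * aj           # keep residual consistent
    return x

# small consistent system: the reconstruction should approach x_true
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 5))
x_true = rng.normal(size=5)
b = A @ x_true
x_hat = os_icd(A, b)
```

    Each coordinate update is an exact one-dimensional minimization over a fraction of the data, which is what lets the iterate move quickly in the first few passes.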

  1. Potential of nitrogen gas (n2) flushing to extend the shelf life of cold stored pasteurised milk.

    PubMed

    Munsch-Alatossava, Patricia; Ghafar, Abdul; Alatossava, Tapani

    2013-03-11

    For different reasons, the amount of food loss for developing and developed countries is approximately equivalent. Altogether, these losses represent approximately 1/3 of the global food production. Significant amounts of pasteurised milk are lost due to bad smell and unpleasant taste. Currently, even under the best cold chain conditions, psychrotolerant spore-forming bacteria, some of which also harbour virulent factors, limit the shelf life of pasteurised milk. N2 gas-based flushing has recently been of interest for improving the quality of raw milk. Here, we evaluated the possibility of addressing bacterial growth in pasteurised milk during cold storage at 6 °C and 8 °C. Clearly, the treatments hindered bacterial growth in a laboratory setting when N2-treated milk was compared to the corresponding controls, which suggests that N2-flushing treatment constitutes a promising option to extend the shelf life of pasteurised milk.

  2. A method based on the Jacobi tau approximation for solving multi-term time-space fractional partial differential equations

    NASA Astrophysics Data System (ADS)

    Bhrawy, A. H.; Zaky, M. A.

    2015-01-01

    In this paper, we propose and analyze an efficient operational formulation of the spectral tau method for multi-term time-space fractional differential equations with Dirichlet boundary conditions. The shifted Jacobi operational matrices of the Riemann-Liouville fractional integral and the left-sided and right-sided Caputo fractional derivatives are presented. By using these operational matrices, we propose a shifted Jacobi tau method for both temporal and spatial discretizations, which allows us to present an efficient spectral method for solving such problems. Furthermore, the error is estimated, and the proposed method achieves reasonable convergence rates in the spatial and temporal discretizations. In addition, some known spectral tau approximations can be derived as special cases from our algorithm if we suitably choose the corresponding special cases of the Jacobi parameters θ and ϑ. Finally, in order to demonstrate its accuracy, we compare our method with those reported in the literature.

  3. Development of a polysilicon process based on chemical vapor deposition, phase 1 and phase 2

    NASA Technical Reports Server (NTRS)

    Plahutnik, F.; Arvidson, A.; Sawyer, D.; Sharp, K.

    1982-01-01

    High-purity polycrystalline silicon was produced in experimental, intermediate, and advanced CVD reactors. Data from the intermediate and advanced reactors confirmed earlier results obtained in the experimental reactor. Solar cells fabricated by Westinghouse Electric and Applied Solar Research Corporation met or exceeded baseline cell efficiencies. Feedstocks containing trichlorosilane or silicon tetrachloride are not viable as etch promoters to reduce silicon deposition on bell jars, nor are they capable of meeting program goals for the 1000 MT/yr plant. Post-run CH1 etch was found to be a reasonably effective method of reducing silicon deposition on bell jars. Using dichlorosilane as feedstock met the low-cost solar array deposition goal (2.0 g h^-1 cm^-1); however, conversion efficiency was approximately 10% lower than the targeted value of 40 mole percent (32 to 36% achieved), and power consumption was approximately 20 kWh/kg over target at the reactor.

  4. Time-dependent Hartree-Fock approach to nuclear ``pasta'' at finite temperature

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2013-05-01

    We present simulations of neutron-rich matter at subnuclear densities, like supernova matter, with the time-dependent Hartree-Fock approximation at temperatures of several MeV. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi-distributed plane waves, the calculations reflect a reasonable approximation of astrophysical matter. This matter evolves into spherical, rod-like, and slab-like shapes and mixtures thereof. The simulations employ a full Skyrme interaction in a periodic three-dimensional grid. By an improved morphological analysis based on Minkowski functionals, all eight pasta shapes can be uniquely identified by the sign of only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell density distribution as a measure to distinguish pasta matter from uniform matter.

  5. Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam

    2009-01-01

    This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
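
    The matrix measure (logarithmic norm) at the heart of the estimation step has a simple closed form for the 2-norm: mu_2(A) = lambda_max((A + A^T)/2). A minimal sketch with an arbitrary stable example matrix (an assumption for illustration, not a system from the paper):

```python
import numpy as np

def matrix_measure_2(A):
    """Matrix measure induced by the 2-norm: the largest eigenvalue of the
    symmetric part of A. A negative value implies exponential contraction
    of trajectories of x' = A x, the property exploited in measure-based
    delay-margin bounds."""
    return float(np.linalg.eigvalsh((A + A.T) / 2).max())

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
mu = matrix_measure_2(A)   # symmetric part [[-1, 1], [1, -3]] -> mu = -2 + sqrt(2)
```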

  6. Potential of Nitrogen Gas (N2) Flushing to Extend the Shelf Life of Cold Stored Pasteurised Milk

    PubMed Central

    Munsch-Alatossava, Patricia; Ghafar, Abdul; Alatossava, Tapani

    2013-01-01

    For different reasons, the amount of food loss for developing and developed countries is approximately equivalent. Altogether, these losses represent approximately 1/3 of the global food production. Significant amounts of pasteurised milk are lost due to bad smell and unpleasant taste. Currently, even under the best cold chain conditions, psychrotolerant spore-forming bacteria, some of which also harbour virulent factors, limit the shelf life of pasteurised milk. N2 gas-based flushing has recently been of interest for improving the quality of raw milk. Here, we evaluated the possibility of addressing bacterial growth in pasteurised milk during cold storage at 6 °C and 8 °C. Clearly, the treatments hindered bacterial growth in a laboratory setting when N2-treated milk was compared to the corresponding controls, which suggests that N2-flushing treatment constitutes a promising option to extend the shelf life of pasteurised milk. PMID:23478439

  7. Atmospheric solar heating rate in the water vapor bands

    NASA Technical Reports Server (NTRS)

    Chou, Ming-Dah

    1986-01-01

    The total absorption of solar radiation by water vapor in clear atmospheres is parameterized as a simple function of the scaled water vapor amount. For applications to cloudy and hazy atmospheres, the flux-weighted k-distribution functions are computed for individual absorption bands and for the total near-infrared region. The parameterization is based upon monochromatic calculations and follows essentially the scaling approximation of Chou and Arking, but the effect of temperature variation with height is taken into account in order to enhance the accuracy. Furthermore, the spectral range is extended to cover the two weak bands centered at 0.72 and 0.82 micron. Comparisons with monochromatic calculations show that the atmospheric heating rate and the surface radiation can be accurately computed from the parameterization. Comparisons are also made with other parameterizations. It is found that the absorption of solar radiation can be computed reasonably well using the Goody band model and the Curtis-Godson approximation.

  8. Mean-field approximation for spacing distribution functions in classical systems

    NASA Astrophysics Data System (ADS)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
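
    For the uncorrelated (Poisson) baseline the nearest-neighbor spacing distribution is known exactly, p(s) = exp(-s), which makes a convenient sanity check before applying any of the three approximation methods to an interacting system. This empirical sketch uses an assumed sample size for illustration:

```python
import random

def spacings(n_points=200000, seed=0):
    """Nearest-neighbor spacings for uniform random points on a line
    (Poisson statistics), rescaled to unit mean spacing. For this
    uncorrelated case p(s) = exp(-s); interacting many-particle systems
    deviate, which the mean-field method is designed to capture."""
    rng = random.Random(seed)
    pts = sorted(rng.random() for _ in range(n_points))
    return [(b - a) * n_points for a, b in zip(pts, pts[1:])]

s = spacings()
mean_s = sum(s) / len(s)                          # should be close to 1
frac_below_1 = sum(x < 1.0 for x in s) / len(s)   # exp law: 1 - e^-1 = 0.632...
```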

  9. Aqueous corrosion and corrosion-sensitive embrittlement of Fe{sub 3}Al-based and lean-aluminum iron aluminides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J.G.; Buchanan, R.A.

    Aqueous corrosion and corrosion-sensitive embrittlement of iron aluminides were characterized as functions of environment, alloying content, notch sensitivity, and strain rate. Polarization resistance and cyclic anodic polarization evaluations were performed in 3.5 wt % NaCl, 200 ppm Cl- (pH = 4), and 1 N NaOH solutions. In the mild acid-chloride solution [200 ppm Cl- (pH = 4)], the pitting-corrosion resistance of the new lean-aluminum iron aluminides (FAP-Y and CM-Mo) was comparable to that of the Fe3Al-based FAL-Mo. In the higher-chloride 3.5 wt % NaCl, the resistance of CM-Mo was slightly less, but FAP-Y showed behavior quite similar to FAL-Mo. In 1 N NaOH solution, all materials exhibited ideal passive behavior. Under slow-strain-rate test conditions in the mild acid-chloride electrolyte, prior work had shown the ductilities (% elongations) of Fe3Al-based materials to be approximately 7% and approximately 1% at the freely-corroding and hydrogen-charging potentials, respectively. Present studies on the lean-aluminum materials have shown the ductilities to be approximately 17% and approximately 5%, respectively. Thus, the present results indicate that these new materials have reasonably good aqueous-corrosion properties in chloride environments and significantly enhanced ductilities under aqueous corrosion conditions. The strain rate and notch sensitivities of high-aluminum iron aluminide (FA-129) were investigated by performing slow-strain-rate tests. The notch sensitivity was independent of strain rate, and the notch sensitivity in the aqueous environment was similar to that in air.

  10. DABI: A data base for image analysis with nondeterministic inference capability

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.; Cunningham, R.

    1976-01-01

    A description is given of the data base used in the perception subsystem of the Mars robot vehicle prototype being implemented at the Jet Propulsion Laboratory. This data base contains two types of information. The first is generic (uninstantiated, abstract) information that specifies the general rules of perception of objects in the expected environments. The second kind of information is a specific (instantiated) description of a structure, i.e., the properties and relations of objects in the specific case being analyzed. The generic knowledge can be used by the approximate reasoning subsystem to obtain information on the specific structures which is not directly measurable by the sensory instruments. Raw measurements are input either from the sensory instruments or a human operator using a CRT or a TTY.

  11. Depressive Symptoms in a Sample of Social Work Students and Reasons Preventing Students from Using Mental Health Services: An Exploratory Study

    ERIC Educational Resources Information Center

    Ting, Laura

    2011-01-01

    Limited research exists on social work students' level of depression and help-seeking beliefs. This study empirically examined the rates of depression among 215 BSW students and explored students' reasons for not using mental health services. Approximately 50% scored at or above the Center for Epidemiologic Studies Depression Scale cutoff;…

  12. Increasing the open-circuit voltage in high-performance organic photovoltaic devices through conformational twisting of an indacenodithiophene-based conjugated polymer.

    PubMed

    Chen, Chih-Ping; Hsu, Hsiang-Lin

    2013-10-01

    A fused ladder indacenodithiophene (IDT)-based donor-acceptor (D-A)-type alternating conjugated polymer, PIDTHT-BT, presenting n-hexylthiophene conjugated side chains is prepared. By extending the degree of intramolecular repulsion through the conjugated side chain moieties, an energy level for the highest occupied molecular orbital (HOMO) of -5.46 eV--a value approximately 0.27 eV lower than that of its counterpart PIDTDT-BT--is obtained, subsequently providing a fabricated solar cell with a high open-circuit voltage of approximately 0.947 V. The hole mobility (determined using the space charge-limited current model) in a blend film containing 20 wt% PIDTHT-BT and 80 wt% [6,6]-phenyl-C71 butyric acid methyl ester (PC71 BM) is 2.2 × 10(-9) m(2) V(-1) s(-1), which is within the range of reasonable values for applications in organic photovoltaics. The power conversion efficiency is 4.5% under simulated solar illumination (AM 1.5G, 100 mW cm(-2)). © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Coupled-cluster based approach for core-level states in condensed phase: Theory and application to different protonated forms of aqueous glycine

    DOE PAGES

    Sadybekov, Arman; Krylov, Anna I.

    2017-07-07

    A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster theory (EOMCC) and the effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address poor convergence issues that are encountered for the core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, i.e., changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of solvent, such as EFP, is essential for achieving quantitative accuracy.

  14. Efficient Posterior Probability Mapping Using Savage-Dickey Ratios

    PubMed Central

    Penny, William D.; Ridgway, Gerard R.

    2013-01-01

    Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
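    The Savage-Dickey device at the heart of the SDT method can be sketched for a one-dimensional Gaussian prior and posterior (an illustrative special case, not SPM's implementation): for nested models, the Bayes factor in favour of the null is the ratio of posterior to prior density at the null value.

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def savage_dickey_bf01(prior_mean, prior_sd, post_mean, post_sd, null=0.0):
    """Savage-Dickey Bayes factor for the null (effect = null) in nested models:
    posterior density over prior density, both evaluated at the null value."""
    return normal_pdf(null, post_mean, post_sd) / normal_pdf(null, prior_mean, prior_sd)

# A posterior concentrated away from zero gives evidence against the null (BF01 < 1).
bf01 = savage_dickey_bf01(prior_mean=0.0, prior_sd=1.0, post_mean=2.0, post_sd=0.5)
```

    The computational appeal is visible here: only the fitted alternative model is needed, with no separate fit of the null model as in IMO.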

  15. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  16. Stabilized FE simulation of prototype thermal-hydraulics problems with integrated adjoint-based capabilities

    DOE PAGES

    Shadid, J. N.; Smith, T. M.; Cyr, E. C.; ...

    2016-05-20

    A critical aspect of applying modern computational solution methods to complex multiphysics systems of relevance to nuclear reactor modeling is the assessment of the predictive capability of specific proposed mathematical models. The understanding of numerical error, the sensitivity of the solution to parameters associated with input data, boundary condition uncertainty, and mathematical models is critical. Additionally, the ability to evaluate and/or approximate the model efficiently, to allow development of a reasonable level of statistical diagnostics of the mathematical model and the physical system, is of central importance. In our study we report on initial efforts to apply integrated adjoint-based computational analysis and automatic differentiation tools to begin to address these issues. The study is carried out in the context of a Reynolds averaged Navier–Stokes approximation to turbulent fluid flow and heat transfer using a particular spatial discretization based on implicit fully-coupled stabilized FE methods. We present the initial results that show the promise of these computational techniques in the context of nuclear reactor relevant prototype thermal-hydraulics problems.

  17. Coupled-cluster based approach for core-level states in condensed phase: Theory and application to different protonated forms of aqueous glycine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadybekov, Arman; Krylov, Anna I.

    A theoretical approach for calculating core-level states in condensed phase is presented. The approach is based on equation-of-motion coupled-cluster theory (EOMCC) and the effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address poor convergence issues that are encountered for the core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, i.e., changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to the solvent interactions and that explicit treatment of solvent, such as EFP, is essential for achieving quantitative accuracy.

  18. COBE DMR-normalized open inflation cold dark matter cosmogony

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.

    1995-01-01

    A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone does not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Omega(sub 0) is approximately 0.3-0.4 and merits further study.

  19. Refining fuzzy logic controllers with machine learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1994-01-01

    In this paper, we describe the GARIC (Generalized Approximate Reasoning-Based Intelligent Control) architecture, which learns from its past performance and modifies the labels in the fuzzy rules to improve performance. It uses fuzzy reinforcement learning which is a hybrid method of fuzzy logic and reinforcement learning. This technology can simplify and automate the application of fuzzy logic control to a variety of systems. GARIC has been applied in simulation studies of the Space Shuttle rendezvous and docking experiments. It has the potential of being applied in other aerospace systems as well as in consumer products such as appliances, cameras, and cars.
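    The rule-firing and defuzzification step that GARIC's learning would tune can be sketched with a toy two-rule controller. The rules, labels, and consequent values here are hypothetical and fixed; in a GARIC-style architecture, reinforcement learning would shift the label parameters and consequents:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Two toy rules for an error signal e in [-1, 1]:
#   IF e is Negative THEN output is +0.5
#   IF e is Positive THEN output is -0.5
rules = [
    (lambda e: tri(e, -1.5, -0.5, 0.5), +0.5),  # "Negative" label
    (lambda e: tri(e, -0.5, 0.5, 1.5), -0.5),   # "Positive" label
]

def control(e):
    """Weighted-average defuzzification over the fired rules."""
    num = sum(mu(e) * out for mu, out in rules)
    den = sum(mu(e) for mu, out in rules)
    return num / den if den else 0.0

u = control(-0.5)  # error fully "Negative" -> push the output positive
```

    Learning then amounts to nudging the (a, b, c) label parameters and consequents in the direction indicated by the reinforcement signal.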

  20. "Glue" approximation for the pairing interaction in the Hubbard model with next nearest neighbor hopping

    NASA Astrophysics Data System (ADS)

    Khatami, Ehsan; Macridin, Alexandru; Jarrell, Mark

    2008-03-01

    Recently, several authors have employed the "glue" approximation for the cuprates, in which the full pairing vertex is approximated by the spin susceptibility. We study this approximation using Quantum Monte Carlo Dynamical Cluster Approximation methods on a 2D Hubbard model. By considering a reasonable finite value for the next nearest neighbor hopping, we find that this "glue" approximation, in its current form, does not capture the correct pairing symmetry. Here, d-wave is not the leading pairing symmetry, while it is the dominant symmetry in the "exact" QMC results. We argue that the sensitivity of this approximation to band structure changes leads to this inconsistency and that this form of interaction may not be the appropriate description of the pairing mechanism in cuprates. We suggest improvements to this approximation which help to capture the essential features of the QMC data.

  1. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Program review

    NASA Technical Reports Server (NTRS)

    1986-01-01

    This report summarizes the Integrated Application of Active Controls (IAAC) Technology to an Advanced Subsonic Transport Project, established as one element of the NASA/Boeing Energy Efficient Transport Technology Program. The performance assessment showed that incorporating ACT into an airplane designed to fly approximately 200 passengers approximately 2,000 nmi could yield block fuel savings from 6 to 10 percent at the design range. The principal risks associated with incorporating these active control functions into a commercial airplane are those involved with the ACT system implementation. The Test and Evaluation phase of the IAAC Project focused on the design, fabrication, and test of a system that implemented pitch axis fly-by-wire, pitch axis augmentation, and wing load alleviation. The system was built to be flight worthy, and was planned to be experimentally flown on the 757. The system was installed in the Boeing Digital Avionics Flight Controls Laboratory (DAFCL), where open loop hardware and software tests, and a brief examination of a direct drive valve (DDV) actuation concept were accomplished. The IAAC Project has shown that ACT can be beneficially incorporated into a commercial transport airplane. Based on the results achieved during the testing phase, there appears to be no fundamental reason(s) that would preclude the commercial application of ACT, assuming an appropriate development effort is included.

  2. Mean-field approximation for spacing distribution functions in classical systems.

    PubMed

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T L

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society
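    One of the comparison baselines, the Wigner surmise for the nearest-neighbour spacing distribution, is easy to check numerically: p(s) = (π/2) s exp(-π s²/4) is normalised and has unit mean spacing by construction.

```python
import math

def wigner_p(s):
    """Wigner surmise for the nearest-neighbour spacing distribution,
    one of the reference approximations the mean-field method is compared to."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

# Riemann-sum check that p(s) is normalised and has unit mean spacing;
# the tail beyond s = 10 is negligible.
ds = 1e-4
grid = [i * ds for i in range(100001)]  # s in [0, 10]
norm = sum(wigner_p(s) for s in grid) * ds
mean = sum(s * wigner_p(s) for s in grid) * ds
```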

  3. Vapor-phase transport of trichloroethene in an intermediate-scale vadose-zone system: retention processes and tracer-based prediction.

    PubMed

    Costanza-Robinson, Molly S; Carlson, Tyson D; Brusseau, Mark L

    2013-02-01

    Gas-phase transport experiments were conducted using a large weighing lysimeter to evaluate retention processes for volatile organic compounds (VOCs) in water-unsaturated (vadose-zone) systems, and to test the utility of gas-phase tracers for predicting VOC retardation. Trichloroethene (TCE) served as a model VOC, while trichlorofluoromethane (CFM) and heptane were used as partitioning tracers to independently characterize retention by water and the air-water interface, respectively. Retardation factors for TCE ranged between 1.9 and 3.5, depending on water content. The results indicate that dissolution into the bulk water was the primary retention mechanism for TCE under all conditions studied, contributing approximately two-thirds of the total measured retention. Accumulation at the air-water interface comprised a significant fraction of the observed retention for all experiments, with an average contribution of approximately 24%. Sorption to the solid phase contributed approximately 10% to retention. Water contents and air-water interfacial areas estimated based on the CFM and heptane tracer data, respectively, were similar to independently measured values. Retardation factors for TCE predicted using the partitioning-tracer data were in reasonable agreement with the measured values. These results suggest that gas-phase tracer tests hold promise for characterizing the retention and transport of VOCs in the vadose zone. Copyright © 2012 Elsevier B.V. All rights reserved.
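    A common way to decompose a gas-phase retardation factor into the three retention terms discussed above (bulk-water dissolution, air-water interfacial accumulation, and solid-phase sorption) is sketched below. The formulation is a standard vadose-zone expression, and all parameter names and values are illustrative assumptions, not the lysimeter data:

```python
def gas_phase_retardation(theta_a, theta_w, k_h, a_ia, k_ia, rho_b, k_d):
    """Retardation factor for vapour-phase VOC transport, split into
    retention terms. Parameters (illustrative units): air content theta_a,
    water content theta_w, dimensionless Henry constant k_h, air-water
    interfacial area a_ia (1/cm), interfacial partition coefficient k_ia (cm),
    bulk density rho_b (g/cm^3), sorption coefficient k_d (cm^3/g)."""
    r_water = theta_w / (k_h * theta_a)      # dissolution into bulk water
    r_interface = k_ia * a_ia / theta_a      # air-water interface accumulation
    r_solid = rho_b * k_d / (k_h * theta_a)  # sorption to solids
    return 1.0 + r_water + r_interface + r_solid, (r_water, r_interface, r_solid)

# Hypothetical parameter values chosen to land inside the measured 1.9-3.5 range.
r, parts = gas_phase_retardation(theta_a=0.25, theta_w=0.15, k_h=0.4,
                                 a_ia=100.0, k_ia=5e-4, rho_b=1.6, k_d=0.02)
```

    With these values the bulk-water term dominates, mirroring the study's finding that dissolution was the primary retention mechanism.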

  4. Learning deep similarity in fundus photography

    NASA Astrophysics Data System (ADS)

    Chudzik, Piotr; Al-Diri, Bashir; Caliva, Francesco; Ometto, Giovanni; Hunter, Andrew

    2017-02-01

    Similarity learning is one of the most fundamental tasks in image analysis. The ability to extract similar images in the medical domain as part of content-based image retrieval (CBIR) systems has been researched for many years. The vast majority of methods used in CBIR systems are based on hand-crafted feature descriptors. The approximation of a similarity mapping for medical images is difficult due to the large variety of pixel-level structures of interest. In fundus photography (FP) analysis, a subtle difference in, e.g., lesion and vessel shape and size can result in a different diagnosis. In this work, we demonstrated how to learn a similarity function for image patches derived directly from FP image data without the need for manually designed feature descriptors. We used a convolutional neural network (CNN) with a novel architecture adapted for similarity learning to accomplish this task. Furthermore, we explored and studied multiple CNN architectures. We show that our method can approximate the similarity between FP patches more efficiently and accurately than the state-of-the-art feature descriptors, including SIFT and SURF, using a publicly available dataset. Finally, we observe that our approach, which is purely data-driven, learns that features such as vessel calibre and orientation are important discriminative factors, which resembles the way humans reason about similarity. To the best of the authors' knowledge, this is the first attempt to approximate a visual similarity mapping in FP.

  5. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
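    The defiltering idea behind the ADM can be sketched in one dimension: apply a discrete filter G, then approximately invert it with a truncated van Cittert series Q_N = Σₖ (I − G)ᵏ. This is a toy analogue (a three-point filter on a periodic grid), not the Stolz-Adams implementation:

```python
import math

def grid_filter(u):
    """Three-point top-hat filter with periodic wrap, standing in for the
    grid-filter operator G (a toy 1-D analogue of the LES grid filter)."""
    n = len(u)
    return [0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[(i + 1) % n] for i in range(n)]

def approx_deconvolve(u_bar, order=5):
    """Truncated van Cittert series Q_N = sum_k (I - G)^k applied to the
    filtered field: the 'defiltering' step of the ADM."""
    v = list(u_bar)                 # k = 0 term
    correction = list(u_bar)
    for _ in range(order):
        filtered = grid_filter(correction)
        correction = [c - f for c, f in zip(correction, filtered)]
        v = [a + b for a, b in zip(v, correction)]
    return v

# Filter a single sine mode, then recover it by approximate deconvolution.
n = 32
u = [math.sin(2 * math.pi * i / n) for i in range(n)]
u_bar = grid_filter(u)
u_star = approx_deconvolve(u_bar)
```

    For well-resolved modes the deconvolved field u_star is far closer to the unfiltered field than u_bar is; only the unrepresented subfilter scales remain unmodeled.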

  6. The influence of intention, outcome and question-wording on children's and adults' moral judgments.

    PubMed

    Nobes, Gavin; Panagiotaki, Georgia; Bartholomew, Kimberley J

    2016-12-01

    The influence of intention and outcome information on moral judgments was investigated by telling children aged 4-8 years and adults (N=169) stories involving accidental harms (positive intention, negative outcome) or attempted harms (negative intention, positive outcome) from two studies (Helwig, Zelazo, & Wilson, 2001; Zelazo, Helwig, & Lau, 1996). When the original acceptability (wrongness) question was asked, the original findings were closely replicated: children's and adults' acceptability judgments were based almost exclusively on outcome, and children's punishment judgments were also primarily outcome-based. However, when this question was rephrased, 4-5-year-olds' judgments were approximately equally influenced by intention and outcome, and from 5-6 years they were based considerably more on intention than outcome. These findings indicate that, for methodological reasons, children's (and adults') ability to make intention-based judgments has often been substantially underestimated. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Probability theory, not the very guide of life.

    PubMed

    Juslin, Peter; Nilsson, Håkan; Winman, Anders

    2009-10-01

    Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
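    The simulation logic can be sketched as follows: estimate a conjunction probability from noisy component estimates, once multiplicatively (probability theory) and once with a linear additive rule. The noise level and the particular additive rule (an average shifted down by a constant) are assumptions for illustration, not the authors' fitted model:

```python
import random

random.seed(1)

def clip(p):
    """Keep a probability estimate inside [0, 1]."""
    return min(1.0, max(0.0, p))

noise = 0.15  # assumed sd of the error in each component probability estimate
err_mult, err_add = [], []
for _ in range(20000):
    p_a, p_b = random.random(), random.random()
    truth = p_a * p_b                                   # true conjunction probability
    est_a = clip(p_a + random.gauss(0.0, noise))        # approximate knowledge
    est_b = clip(p_b + random.gauss(0.0, noise))
    err_mult.append(abs(est_a * est_b - truth))         # multiplicative integration
    additive = clip(0.5 * est_a + 0.5 * est_b - 0.25)   # linear additive rule
    err_add.append(abs(additive - truth))

mean_mult = sum(err_mult) / len(err_mult)
mean_add = sum(err_add) / len(err_add)
```

    Under this noise level the two mean absolute errors come out comparable, which is the qualitative point of the article: with approximate inputs, the normative multiplicative rule loses much of its advantage.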

  8. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems, one that preserves almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  9. Truth-Valued-Flow Inference (TVFI) and its applications in approximate reasoning

    NASA Technical Reports Server (NTRS)

    Wang, Pei-Zhuang; Zhang, Hongmin; Xu, Wei

    1993-01-01

    The framework of the theory of Truth-Valued-Flow Inference (TVFI) is introduced. Even though dozens of papers on fuzzy reasoning have been presented, a rather unified fuzzy reasoning theory is still needed, one with the following two features: (1) it is simple enough to be executed feasibly and easily; and (2) it is sufficiently well structured and internally consistent to be built into a strict mathematical theory, while remaining consistent with the theory proposed by L.A. Zadeh. TVFI is one of the fuzzy reasoning theories that satisfies these two features. It presents inference in the form of networks, and naturally views inference as a process of truth values flowing among propositions.
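    The flow-network view of inference can be sketched as follows (an illustrative reading, not the authors' exact calculus): truth values flow from premise propositions to conclusions through channels, each channel's strength bounding the flow, with parallel channels combined by max:

```python
# Toy truth-valued-flow network. Propositions are nodes; each inference
# rule is a channel with a strength (rule reliability). The names and
# numbers below are hypothetical.
channels = {
    # (premise, conclusion): channel strength
    ("temperature_high", "pressure_high"): 0.9,
    ("valve_stuck", "pressure_high"): 0.6,
}

def propagate(truths, conclusion):
    """Truth value flowing into `conclusion`: min of premise truth and
    channel strength per channel, max over parallel channels."""
    flows = [min(truths[premise], strength)
             for (premise, concl), strength in channels.items()
             if concl == conclusion and premise in truths]
    return max(flows) if flows else 0.0

t = propagate({"temperature_high": 0.8, "valve_stuck": 0.3}, "pressure_high")
```

    The min/max pair here is one standard fuzzy choice; other t-norms and co-norms would give different flow semantics over the same network.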

  10. Comparison of approximations in density functional theory calculations: Energetics and structure of binary oxides

    NASA Astrophysics Data System (ADS)

    Hinuma, Yoyo; Hayashi, Hiroyuki; Kumagai, Yu; Tanaka, Isao; Oba, Fumiyasu

    2017-09-01

    High-throughput first-principles calculations based on density functional theory (DFT) are a powerful tool in data-oriented materials research. The choice of approximation to the exchange-correlation functional is crucial as it strongly affects the accuracy of DFT calculations. This study compares the performance of seven approximations, six of which are based on the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) with and without Hubbard U and van der Waals corrections (PBE, PBE+U, PBED3, PBED3+U, PBEsol, and PBEsol+U), and the strongly constrained and appropriately normed (SCAN) meta-GGA on the energetics and crystal structure of elementary substances and binary oxides. For the latter, only those with closed-shell electronic structures are considered, examples of which include Cu2O, Ag2O, MgO, ZnO, CdO, SnO, PbO, Al2O3, Ga2O3, In2O3, La2O3, Bi2O3, SiO2, SnO2, PbO2, TiO2, ZrO2, HfO2, V2O5, Nb2O5, Ta2O5, MoO3, and WO3. Prototype crystal structures are selected from the Inorganic Crystal Structure Database (ICSD) and cation substitution is used to make a set of existing and hypothetical oxides. Two indices are proposed to quantify the extent of lattice and internal coordinate relaxation during a calculation. The former is based on the second invariant and determinant of the transformation matrix of basis vectors from before relaxation to after relaxation, and the latter is derived from shifts of internal coordinates of atoms in the unit cell. PBED3, PBEsol, and SCAN reproduce experimental lattice parameters of elementary substances and oxides well with few outliers. Notably, PBEsol and SCAN predict the lattice parameters of low dimensional structures comparably well with PBED3, even though these two functionals do not explicitly treat van der Waals interactions.
    SCAN gives formation enthalpies and Gibbs free energies closest to experimental data, with mean errors (MEs) of 0.01 and -0.04 eV, respectively, and root-mean-square errors (RMSEs) of 0.07 eV in both cases. In contrast, all GGAs, including those with Hubbard U and van der Waals corrections, give MEs of 0.1 to 0.2 eV and RMSEs of at least 0.11 eV. Phonon contributions of solid phases to the formation enthalpies and Gibbs free energies are estimated to be small, less than ~0.1 eV/atom, within the quasiharmonic approximation. The same crystal structure appears as the lowest energy polymorph with different approximations in most of the investigated binary oxides. However, there are some systems where the choice of approximation significantly affects energy differences between polymorphs, or even the order of stability between phases. SCAN is the most reasonable regarding relative energies between polymorphs. The calculated transition pressure between polymorphs of ZnO and SnO2 is closest to experimental values when PBED3, PBEsol (also PBED3+U and PBEsol+U for ZnO), and SCAN are employed. In summary, SCAN appears to be the best choice among the seven approximations based on the analysis of the energetics and crystal structure of binary oxides, while PBEsol is the best among the GGAs considered and shows comparably good performance with SCAN in many cases. The use of PBEsol+U alongside PBEsol is also a reasonable choice, given that U corrections are required for several materials to qualitatively reproduce their electronic structures.
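    The ME and RMSE metrics used above to rank the functionals are straightforward to reproduce. The signed error values below are illustrative placeholders, not the paper's benchmark data:

```python
import math

def mean_error(errors):
    """Signed mean error (ME): average of the signed deviations."""
    return sum(errors) / len(errors)

def rmse(errors):
    """Root-mean-square error (RMSE) of the deviations."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical signed formation-enthalpy errors (eV/atom) of one functional
# against experiment, over six compounds.
errors = [0.05, -0.03, 0.08, 0.01, -0.06, 0.07]
me, rms = mean_error(errors), rmse(errors)
```

    The two metrics answer different questions: ME exposes a systematic bias (errors of opposite sign cancel), while RMSE measures the typical magnitude of the deviation regardless of sign.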

  11. Wakes and differential charging of large bodies in low Earth orbit

    NASA Technical Reports Server (NTRS)

    Parker, L. W.

    1985-01-01

    Highlights of earlier results using the Inside-Out WAKE code on wake structures of LEO spacecraft are reviewed. For conducting bodies of radius large compared with the Debye length, a high Mach number wake develops a negative potential well. Quasineutrality is violated in the very near wake region, and the wake is relatively empty for a distance downstream of about one half of a Mach number of radii. There is also a suggestion of a core of high density along the axis. A comparison of rigorous numerical solutions with in situ wake data from the AE-C satellite suggests that the so-called neutral approximation for ions (straight line trajectories, independent of fields) may be a reasonable approximation except near the center of the near wake. This approximation is adopted for very large bodies. Work concerned with the wake point potential of very large nonconducting bodies such as the shuttle orbiter is described. Using a cylindrical model for bodies of this size or larger in LEO (body radius up to 10 to the 5th power Debye lengths), approximate solutions are presented based on the neutral approximation (but with rigorous trajectory calculations for surface current balance). There is a negative potential well if the body is conducting, and no well if the body is nonconducting. In the latter case the wake surface itself becomes highly negative. The wake point potential is governed by the ion drift energy.

  12. Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method

    NASA Astrophysics Data System (ADS)

    Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria

    2016-03-01

    The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulence-radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and grey medium approximation. These approximations significantly affect the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produce on the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models describing properly the self-absorption should be considered at over-atmospheric pressure.

  13. Ethical and legal issues in organ transplantation: Indian scenario.

    PubMed

    Mathiharan, Karunakaran

    2011-07-01

    In 1994, the Government of India enacted the Transplantation of Human Organs Act (THOA) to prevent commercial dealings in human organs. However, a greater number of scandals involving medical practitioners and others in the kidney trade has surfaced periodically in every state in India. The present regulatory system has failed mainly due to the misuse of Section 9(3) of the THOA, which approves the consent given by a live unrelated donor for the removal of organs for the reason of affection or attachment towards the recipient or for any other special reason. Currently in India, approximately 3500-4000 kidney transplants and 150-200 liver transplants are performed annually. However, the availability of organs from brain-dead persons is very low. As a result, live related or unrelated donors form the main source of organ transplantation. Therefore, physicians and policy-makers should re-examine the value of introducing regulated incentive-based organ donation to increase the supply of organs for transplantation and to end unlawful financial transaction.

  14. 29 CFR 778.217 - Reimbursement for expenses.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (2) The actual or reasonably approximate amount expended by an employee in purchasing, laundering or... expenses, such as taxicab fares, incurred while traveling on the employer's business. (4) “Supper money”, a...

  15. 100-NR-2 Apatite Treatability Test: Fall 2010 Tracer Infiltration Test (White Paper)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vermeul, Vincent R.; Fritz, Brad G.; Fruchter, Jonathan S.

    The primary objectives of the tracer infiltration test were to 1) determine whether field-scale hydraulic properties for the compacted roadbed materials and underlying Hanford fm. sediments comprising the zone of water table fluctuation beneath the site are consistent with estimates based on laboratory-scale measurements on core samples and 2) characterize wetting front advancement and distribution of soil moisture achieved for the selected application rate. These primary objectives were met. The test successfully demonstrated that 1) the remaining 2 to 3 ft of compacted roadbed material below the infiltration gallery does not limit infiltration rates to levels that would be expected to eliminate near surface application as a viable amendment delivery approach and 2) the combined aqueous and geophysical monitoring approaches employed at this site, with some operational adjustments based on lessons learned, provide an effective means of assessing wetting front advancement and the distribution of soil moisture achieved for a given solution application. Reasonably good agreement between predicted and observed tracer and moisture front advancement rates was observed. During the first tracer infiltration test, which used a solution application rate of 0.7 cm/hr, tracer arrivals were observed at the water table (10 to 12 ft below the bottom of the infiltration gallery) after approximately 5 days, for an advancement rate of approximately 2 ft/day. This advancement rate is generally consistent with pre-test modeling results that predicted tracer arrival at the water table after approximately 5 days (see Figure 8, bottom left panel). This agreement indicates that hydraulic property values specified in the model for the compacted roadbed materials and underlying Hanford formation sediments, which were based on laboratory-scale measurements, are reasonable estimates of actual field-scale conditions.
Additional work is needed to develop a working relationship between resistivity change and the associated change in moisture content so that 4D images of moisture content change can be generated. Results from this field test will be available for any future Ca-citrate-PO4 amendment infiltration tests, which would be designed to evaluate the efficacy of using near surface application of amendments to form apatite mineral phases in the upper portion of the zone of water table fluctuation.

  16. Simple many-body based screening mixing ansatz for improvement of GW/Bethe-Salpeter equation excitation energies of molecular systems

    NASA Astrophysics Data System (ADS)

    Ziaei, Vafa; Bredow, Thomas

    2017-11-01

    We propose a simple many-body based screening mixing strategy to considerably enhance the performance of the Bethe-Salpeter equation (BSE) approach for prediction of excitation energies of molecular systems. This strategy enables us to closely reproduce results of highly correlated equation-of-motion coupled cluster singles and doubles (EOM-CCSD) through optimal use of cancellation effects. We start from the Hartree-Fock (HF) reference state and take advantage of local density approximation (LDA) based random phase approximation (RPA) screening, denoted as W0-RPA@LDA with W0 as the dynamically screened interaction built upon LDA wave functions and energies. We further use this W0-RPA@LDA screening as an initial screening guess for calculation of quasiparticle energies in the framework of G0W0@HF. The W0-RPA@LDA screening is further injected into the BSE. By applying such an approach on a set of 22 molecules for which the traditional GW/BSE approaches fail, we observe good agreement with respect to EOM-CCSD references. The reason for the observed good accuracy of this mixing ansatz (scheme A) lies in an optimal damping of HF exchange effects through the strong W0-RPA@LDA screening, leading to a substantial decrease of the typically overestimated HF electronic gap, and hence to better excitation energies. Further, we present a second multiscreening ansatz (scheme B), which is similar to scheme A with the exception that now the W0-RPA@HF screening is used in the BSE in order to further improve the overestimated excitation energies of carbonyl sulfide (COS) and disilane (Si2H6). The reason for improvement of the excitation energies in scheme B lies in the fact that W0-RPA@HF screening is less effective (and weaker than W0-RPA@LDA), which gives rise to stronger electron-hole effects in the BSE.

  17. Accounting for Location Error in Kalman Filters: Integrating Animal Borne Sensor Data into Assimilation Schemes

    PubMed Central

    Sengupta, Aritra; Foster, Scott D.; Patterson, Toby A.; Bravington, Mark

    2012-01-01

    Data assimilation is a crucial aspect of modern oceanography. It allows forecasting and backward smoothing of the ocean state from noisy observations. Statistical methods are employed to perform these tasks and are often based on, or related to, the Kalman filter. Typically, Kalman filters assume that the locations associated with observations are known with certainty. This is reasonable for typical oceanographic measurement methods. Recently, however, an alternative and abundant source of data has come from the deployment of ocean sensors on marine animals. This source of data has some attractive properties: unlike traditional oceanographic collection platforms, it is relatively cheap to collect, plentiful, has multiple scientific uses and users, and samples areas of the ocean that are often difficult or costly to sample. However, inherent uncertainty in the location of the observations is a barrier to full utilisation of animal-borne sensor data in data-assimilation schemes. In this article we examine this issue and suggest a simple approximation to explicitly incorporate the location uncertainty, while staying within the scope of Kalman-filter-like methods. The approximation stems from a Taylor-series approximation to elements of the updating equation. PMID:22900005
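The Taylor-series idea can be illustrated with a minimal scalar sketch (hypothetical code, not the authors' scheme; names such as `grad_field` and `sigma_loc` are invented here): to first order, uncertainty in the measurement location inflates the effective observation variance by the squared local gradient of the observed field times the location variance, after which the standard Kalman update applies.

```python
# Sketch: scalar Kalman update with observation-location uncertainty.
# All parameter names are illustrative, not taken from the paper.

def kalman_update_loc_error(x_prior, P_prior, y, H, R, grad_field, sigma_loc):
    """One scalar Kalman update; location error enters as inflated noise.

    To first order in a Taylor expansion, observing a field at an
    uncertain location adds (df/ds)^2 * sigma_s^2 to the observation
    variance, where df/ds is the local gradient of the observed field.
    """
    R_eff = R + (grad_field ** 2) * (sigma_loc ** 2)  # inflated obs. variance
    K = P_prior * H / (H * P_prior * H + R_eff)       # Kalman gain
    x_post = x_prior + K * (y - H * x_prior)          # state update
    P_post = (1.0 - K * H) * P_prior                  # variance update
    return x_post, P_post
```

With `sigma_loc = 0` this reduces to the standard scalar Kalman update; larger location uncertainty shrinks the gain, so a mislocated observation moves the state estimate less.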

  18. Coarse-grained simulations of cis- and trans-polybutadiene: A bottom-up approach

    NASA Astrophysics Data System (ADS)

    Lemarchand, Claire A.; Couty, Marc; Rousseau, Bernard

    2017-02-01

    We apply the dissipative particle dynamics strategy proposed by Hijón et al. [Faraday Discuss. 144, 301-322 (2010)] and based on an exact derivation of the generalized Langevin equation to cis- and trans-1,4-polybutadiene. We prove that it is able to reproduce not only the structural but also the dynamical properties of these polymers without any fitting parameter. A systematic study of the effect of the level of coarse-graining is done on cis-1,4-polybutadiene. We show that as the level of coarse-graining increases, the dynamical properties are better and better reproduced while the structural properties deviate more and more from those calculated in molecular dynamics (MD) simulations. We suggest two reasons for this behavior: the Markovian approximation is better satisfied as the level of coarse-graining increases, while the pair-wise approximation neglects important contributions due to the relative orientation of the beads at large levels of coarse-graining. Finally, we highlight a possible limit of the Markovian approximation: the fact that in constrained simulations, in which the centers-of-mass of the beads are kept constant, the bead rotational dynamics become extremely slow.

  19. Fast computation of the electrolyte-concentration transfer function of a lithium-ion cell model

    NASA Astrophysics Data System (ADS)

    Rodríguez, Albert; Plett, Gregory L.; Trimboli, M. Scott

    2017-08-01

    One approach to creating physics-based reduced-order models (ROMs) of battery-cell dynamics requires first generating linearized Laplace-domain transfer functions of all cell internal electrochemical variables of interest. Then, the resulting infinite-dimensional transfer functions can be reduced by various means in order to find an approximate low-dimensional model. These methods include Padé approximation or the Discrete-Time Realization algorithm. In a previous article, Lee and colleagues developed a transfer function of the electrolyte concentration for a porous-electrode pseudo-two-dimensional lithium-ion cell model. Their approach used separation of variables and Sturm-Liouville theory to compute an infinite-series solution to the transfer function, which they then truncated to a finite number of terms for reasons of practicality. Here, we instead use a variation-of-parameters approach to arrive at a different representation of the identical solution that does not require a series expansion. The primary benefits of the new approach are speed of computation of the transfer function and the removal of the requirement to approximate the transfer function by truncating the number of terms evaluated. Results show that the speedup of the new method can be more than 3800.

  20. Quasiparticle self-consistent GW method for the spectral properties of complex materials.

    PubMed

    Bruneval, Fabien; Gatti, Matteo

    2014-01-01

    The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: This is the so-called G0W0 scheme. However, the G0W0 approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G0W0 at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.

  1. Monte Carlo simulations of medical imaging modalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estes, G.P.

    Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.

  2. Surface interactions and high-voltage current collection

    NASA Technical Reports Server (NTRS)

    Mandell, M. J.; Katz, I.

    1985-01-01

    Spacecraft of the future will be larger and have higher power requirements than any flown to date. For several reasons, it is desirable to operate a high-power system at high voltage. While the optimal voltages for many future missions are in the range of 500 to 5000 volts, the highest voltage yet flown is approximately 100 volts. The NASCAP/LEO code is being developed to embody the phenomenology needed to model the environmental interactions of high-voltage spacecraft. Relevant plasma environments are discussed. The treatment of the surface conductivity associated with emitted electrons and some simulations by NASCAP/LEO of ground-based high-voltage interaction experiments are described.

  3. [Cognitive errors in diagnostic decision making].

    PubMed

    Gäbler, Martin

    2017-10-01

    Approximately 10-15% of our diagnostic decisions are faulty and may lead to unfavorable and dangerous outcomes, which could be avoided. These diagnostic errors are mainly caused by cognitive biases in the diagnostic reasoning process. Our medical diagnostic decision making is based on intuitive "System 1" and analytical "System 2" processes and can be skewed by unconscious cognitive biases. These deviations can be positively influenced on a systemic and an individual level. For the individual, metacognition (internal withdrawal from the decision-making process) and debiasing strategies, such as verification, falsification, and ruling out worst-case scenarios, can lead to improved diagnostic decision making.

  4. Influence of collision on the flow through in-vitro rigid models of the vocal folds

    NASA Astrophysics Data System (ADS)

    Deverge, M.; Pelorson, X.; Vilain, C.; Lagrée, P.-Y.; Chentouf, F.; Willems, J.; Hirschberg, A.

    2003-12-01

    Measurements of pressure in oscillating rigid replicas of vocal folds are presented. The pressure upstream of the replica is used as input to various theoretical approximations to predict the pressure within the glottis. As the vocal folds collide, the classical quasisteady boundary layer theory fails. It appears, however, that for physiologically reasonable shapes of the replicas, viscous effects are more important than the influence of the flow unsteadiness due to the wall movement. A simple model based on a quasisteady Bernoulli equation corrected for viscous effects, combined with a simple boundary layer separation model, globally predicts the observed pressure behavior.

  5. On the RNG theory of turbulence

    NASA Technical Reports Server (NTRS)

    Lam, S. H.

    1992-01-01

    The Yakhot and Orszag (1986) renormalization group (RNG) theory of turbulence has generated a number of scaling law constants in reasonable quantitative agreement with experiments. The theory itself is highly mathematical, and its assumptions and approximations are not easily appreciated. The present paper reviews the RNG theory and recasts it in more conventional terms using a distinctly different viewpoint. A new formulation based on an alternative interpretation of the origin of the random force is presented, showing that the artificially introduced epsilon in the original theory is an adjustable parameter, thus offering a plausible explanation for the remarkable record of quantitative success of the so-called epsilon-expansion procedure.

  6. Logo recognition in video by line profile classification

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Hanjalic, Alan

    2003-12-01

    We present an extension to earlier work on recognizing logos in video stills. The logo instances considered here are rigid planar objects observed at a distance in the scene, so the possible perspective transformation can be approximated by an affine transformation. For this reason we can classify the logos by matching (invariant) line profiles. We enhance our previous method by considering multiple line profiles instead of a single profile of the logo. The positions of the lines are based on maxima in the Hough transform space of the segmented logo foreground image. Experiments are performed on MPEG1 sport video sequences to show the performance of the proposed method.

  7. Nonlinear Ocean Waves

    DTIC Science & Technology

    1994-09-30

    equation due to Kadomtsev & Petviashvili (1970), (u_t + 6 u u_x + u_xxx)_x + 3 u_yy = 0, (KP) is known to describe approximately the evolution of...to be stable to perturbations, and their amplitudes need not be small. The Kadomtsev-Petviashvili (KP) equation is known to describe approximately the...predicted with reasonable accuracy by a family of exact solutions of an equation due to Kadomtsev and Petviashvili (1970): (f_t + 6 f f_x + f_xxx)_x + 3 f_yy = 0

  8. Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites

    DTIC Science & Technology

    2010-01-01

    and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the...approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points...provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry

  9. Rule-Based Reasoning Is Fast and Belief-Based Reasoning Can Be Slow: Challenging Current Explanations of Belief-Bias and Base-Rate Neglect

    ERIC Educational Resources Information Center

    Newman, Ian R.; Gibb, Maia; Thompson, Valerie A.

    2017-01-01

    It is commonly assumed that belief-based reasoning is fast and automatic, whereas rule-based reasoning is slower and more effortful. Dual-Process theories of reasoning rely on this speed-asymmetry explanation to account for a number of reasoning phenomena, such as base-rate neglect and belief-bias. The goal of the current study was to test this…

  10. Collision cross sections of N2 by H+ impact at keV energies within time-dependent density-functional theory

    NASA Astrophysics Data System (ADS)

    Yu, W.; Gao, C.-Z.; Zhang, Y.; Zhang, F. S.; Hutton, R.; Zou, Y.; Wei, B.

    2018-03-01

    We calculate electron capture and ionization cross sections of N2 impacted by the H+ projectile at keV energies. To this end, we employ the time-dependent density-functional theory coupled nonadiabatically to molecular dynamics. To avoid the explicit treatment of the complex density matrix in the calculation of cross sections, we propose an approximate method based on the assumption of constant ionization rate over the period of the projectile passing the absorbing boundary. Our results agree reasonably well with experimental data and semi-empirical results within the measurement uncertainties in the considered energy range. The discrepancies are mainly attributed to the inadequate description of exchange-correlation functional and the crude approximation for constant ionization rate. Although the present approach does not predict the experiments quantitatively for collision energies below 10 keV, it is still helpful to calculate total cross sections of ion-molecule collisions within a certain energy range.

  11. Cometary water-group ions in the region surrounding Comet Giacobini-Zinner - Distribution functions and bulk parameter estimates

    NASA Astrophysics Data System (ADS)

    Staines, K.; Balogh, A.; Cowley, S. W. H.; Hynds, R. J.; Yates, T. S.; Richardson, I. G.; Sanderson, T. R.; Wenzel, K. P.; McComas, D. J.; Tsurutani, B. T.

    1991-03-01

    The bulk parameters (number density and thermal energy density) of cometary water-group ions in the region surrounding Comet Giacobini-Zinner have been derived using data from the EPAS instrument on the ICE spacecraft. The derivation is based on the assumption that the pick-up ion distribution function is isotropic in the frame of the bulk flow, an approximation which has previously been shown to be reasonable within about 400,000 km of the comet nucleus along the spacecraft trajectory. The transition between the pick-up and mass-loaded regions occurs at the cometary shock, which was traversed at a cometocentric distance of about 100,000 km along the spacecraft track. Examination of the ion distribution functions in this region, transformed to the bulk flow frame, indicates the occurrence of a flattened distribution in the vicinity of the local pick-up speed, and a steeply falling tail at speeds above, which may be approximated as an exponential in ion speed.

  12. Prediction of meat spectral patterns based on optical properties and concentrations of the major constituents.

    PubMed

    ElMasry, Gamal; Nakauchi, Shigeki

    2016-03-01

    A simulation method for approximating spectral signatures of minced meat samples was developed based on the concentrations and optical properties of the major chemical constituents. Minced beef samples of different compositions, scanned with a near-infrared spectroscopy system and a hyperspectral imaging system, were examined. Chemical composition determined heuristically and optical properties collected from authenticated references were combined to approximate the samples' spectral signatures. In the short-wave infrared range, the resulting spectrum equals the sum of the absorption of three individual absorbers, that is, water, protein, and fat. By assuming homogeneous distributions of the main chromophores in the mince samples, the obtained absorption spectra are found to be a linear combination of the absorption spectra of the major chromophores present in the sample. Results revealed that the developed models were good enough to derive spectral signatures of minced meat samples with a reasonable level of robustness: an agreement index above 0.90 and a ratio of performance to deviation above 1.4.

  13. Evolution of microstructure and elastic wave velocities in dehydrated gypsum samples

    NASA Astrophysics Data System (ADS)

    Milsch, Harald; Priegnitz, Mike

    2012-12-01

    We report on changes in P- and S-wave velocities and rock microstructure induced by devolatilization reactions, using gypsum as a reference analog material. Cylindrical samples of natural alabaster were dehydrated in air, at ambient pressure, and at temperatures between 378 and 423 K. Dehydration did not proceed homogeneously but via a reaction front moving inwards through the sample, separating an outer highly porous rim from the remaining gypsum, which, above approximately 393 (±5) K, concurrently decomposed into hemihydrate. Overall porosity was observed to increase continuously with reaction progress, from approximately 2% for fully hydrated samples to 30% for completely dehydrated ones. Concurrently, P- and S-wave velocities decreased linearly with porosity, from 5.2 and 2.7 km/s to 1.0 and 0.7 km/s, respectively. It is concluded that a linearized empirical Raymer-type model, extended by a critical porosity term and based on the respective time-dependent mineral and pore volumes, reasonably replicates the P- and S-wave data in relation to reaction progress and porosity.
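As a rough illustration only (not the authors' calibrated Raymer-type model, which also carries a critical-porosity term), the end-member values quoted above imply linear velocity-porosity trends that can be sketched as:

```python
# Linear velocity-porosity trends from the end-member values quoted in the
# abstract (phi = 2% -> Vp 5.2, Vs 2.7 km/s; phi = 30% -> Vp 1.0, Vs 0.7 km/s).
# Illustrative only; not the paper's fitted model.

def linear_velocity(phi, phi0=0.02, phi1=0.30, v0=5.2, v1=1.0):
    """Linearly interpolate a wave velocity (km/s) at porosity phi."""
    return v0 + (v1 - v0) * (phi - phi0) / (phi1 - phi0)

vp = lambda phi: linear_velocity(phi, v0=5.2, v1=1.0)  # P-wave trend
vs = lambda phi: linear_velocity(phi, v0=2.7, v1=0.7)  # S-wave trend
```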

  14. Automatic Detection of Driver Fatigue Using Driving Operation Information for Transportation Safety

    PubMed Central

    Li, Zuojin; Chen, Liukui; Peng, Jun; Wu, Ying

    2017-01-01

    Fatigued driving is a major cause of road accidents. For this reason, the method in this paper uses steering wheel angle (SWA) and yaw angle (YA) information collected under real driving conditions to detect drivers’ fatigue levels. It analyzes the operation features of SWA and YA under different fatigue statuses, then calculates the approximate entropy (ApEn) features of a short sliding window on the time series. Using the nonlinear feature construction theory of dynamic time series, with the fatigue features as input, a “2-6-6-3” multi-level back propagation (BP) neural network classifier is designed to realize the fatigue detection. An approximately 15-h experiment was carried out on a real road, and the data retrieved were segmented and labeled with three fatigue levels after expert evaluation, namely “awake”, “drowsy” and “very drowsy”. An average accuracy of 88.02% in fatigue identification was achieved in the experiment, endorsing the value of the proposed method for engineering applications. PMID:28587072

  15. Automatic Detection of Driver Fatigue Using Driving Operation Information for Transportation Safety.

    PubMed

    Li, Zuojin; Chen, Liukui; Peng, Jun; Wu, Ying

    2017-05-25

    Fatigued driving is a major cause of road accidents. For this reason, the method in this paper uses steering wheel angle (SWA) and yaw angle (YA) information collected under real driving conditions to detect drivers' fatigue levels. It analyzes the operation features of SWA and YA under different fatigue statuses, then calculates the approximate entropy (ApEn) features of a short sliding window on the time series. Using the nonlinear feature construction theory of dynamic time series, with the fatigue features as input, a "2-6-6-3" multi-level back propagation (BP) neural network classifier is designed to realize the fatigue detection. An approximately 15-h experiment was carried out on a real road, and the data retrieved were segmented and labeled with three fatigue levels after expert evaluation, namely "awake", "drowsy" and "very drowsy". An average accuracy of 88.02% in fatigue identification was achieved in the experiment, endorsing the value of the proposed method for engineering applications.
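The ApEn feature used in the two records above can be sketched generically as follows (a textbook ApEn implementation, not the authors' code; the window length m and tolerance r are free parameters that would be tuned to the SWA/YA signals):

```python
import math

def approximate_entropy(series, m=2, r=0.2):
    """ApEn(m, r): a regularity statistic; lower values = more regular signal."""
    def phi(m):
        # all length-m subsequences of the series
        windows = [series[i:i + m] for i in range(len(series) - m + 1)]
        total = 0.0
        for w in windows:
            # fraction of windows within Chebyshev distance r of w
            # (self-matches included, so the count is always >= 1)
            count = sum(
                1 for v in windows
                if max(abs(a - b) for a, b in zip(w, v)) <= r
            )
            total += math.log(count / len(windows))
        return total / len(windows)
    return phi(m) - phi(m + 1)
```

A perfectly regular signal gives ApEn of zero, while an erratic steering trace gives larger values, which is what makes ApEn usable as a fatigue feature over a sliding window.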

  16. A numerical and experimental study on the nonlinear evolution of long-crested irregular waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goullet, Arnaud; Choi, Wooyoung; Division of Ocean Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701

    2011-01-15

    The spatial evolution of nonlinear long-crested irregular waves characterized by the JONSWAP spectrum is studied numerically using a nonlinear wave model based on a pseudospectral (PS) method and the modified nonlinear Schroedinger (MNLS) equation. In addition, new laboratory experiments with two different spectral bandwidths are carried out and a number of wave probe measurements are made to validate these two wave models. Strongly nonlinear wave groups are observed experimentally and their propagation and interaction are studied in detail. For the comparison with experimental measurements, the two models need to be initialized with care, and the initialization procedures are described. The MNLS equation is found to approximate the wave fields reasonably well when the Benjamin-Feir index is relatively small, but the phase error increases as the propagation distance increases. The PS model with different orders of nonlinear approximation is solved numerically, and it is shown that the fifth-order model agrees well with our measurements prior to wave breaking for both spectral bandwidths.

  17. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetical forces such as recombination, selection, random drift ...is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutations. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also prove that it is far less time-consuming than other methods such as Monte Carlo simulations.
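As a toy analogue of the finite-difference approach (one locus, neutral, no mutation; the paper's solver treats two loci with selection and mutation), the Kolmogorov forward equation of the Wright-Fisher diffusion, dp/dt = 0.5 d²[x(1-x)p]/dx², can be stepped explicitly:

```python
def wf_forward_step(p, dx, dt):
    """One explicit finite-difference step of dp/dt = 0.5 d2/dx2 [x(1-x) p].

    p[i] is the density at allele frequency x_i = i*dx; boundaries are
    absorbing (p[0] = p[-1] = 0, haplotype loss/fixation). Stability of
    the explicit scheme needs roughly dt <= 4*dx*dx.
    """
    n = len(p)
    q = [(i * dx) * (1.0 - i * dx) * p[i] for i in range(n)]  # x(1-x)p
    new = [0.0] * n
    for i in range(1, n - 1):
        new[i] = p[i] + 0.5 * dt / dx**2 * (q[i + 1] - 2.0 * q[i] + q[i - 1])
    return new

# start from a point mass at frequency x = 0.5 and diffuse
dx, dt = 0.05, 0.001
p = [0.0] * 21
p[10] = 1.0 / dx  # unit probability mass
for _ in range(20):
    p = wf_forward_step(p, dx, dt)
```

Probability mass (sum(p)*dx) is conserved until density reaches the absorbing boundaries, which mimics loss of a haplotype; the conditional density in the paper corresponds to renormalising over the interior.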

  18. Buckling Of Shells Of Revolution /BOSOR/ with various wall constructions

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Bushnell, D.; Sobel, L. H.

    1969-01-01

    Computer program, using numerical integration and finite difference techniques, solves almost any buckling problem for shells exhibiting orthotropic behavior. Stability analyses can be performed with reasonable accuracy and without unduly restrictive approximations.

  19. Case-based reasoning for space applications: Utilization of prior experience in knowledge-based systems

    NASA Technical Reports Server (NTRS)

    King, James A.

    1987-01-01

    The goal is to explain Case-Based Reasoning as a vehicle to establish knowledge-based systems based on experimental reasoning for possible space applications. This goal will be accomplished through an examination of reasoning based on prior experience in a sample domain, and also through a presentation of proposed space applications which could utilize Case-Based Reasoning techniques.

  20. Intrusion-based reasoning and depression: cross-sectional and prospective relationships.

    PubMed

    Berle, David; Moulds, Michelle L

    2014-01-01

    Intrusion-based reasoning refers to the tendency to form interpretations about oneself or a situation based on the occurrence of a negative intrusive autobiographical memory. Intrusion-based reasoning characterises post-traumatic stress disorder, but has not yet been investigated in depression. We report two studies that aimed to investigate this. In Study 1 both high (n = 42) and low (n = 28) dysphoric participants demonstrated intrusion-based reasoning. High-dysphoric individuals engaged in self-referent intrusion-based reasoning to a greater extent than did low-dysphoric participants. In Study 2 there were no significant differences in intrusion-based reasoning between currently depressed (n = 27) and non-depressed (n = 51) participants, and intrusion-based reasoning did not predict depressive symptoms at 6-month follow-up. Interestingly, previously (n = 26) but not currently (n = 27) depressed participants engaged in intrusion-based reasoning to a greater extent than never-depressed participants (n = 25), indicating the possibility that intrusion-based reasoning may serve as a "scar" from previous episodes. The implications of these findings are discussed.

  1. Approximate Model of Zone Sedimentation

    NASA Astrophysics Data System (ADS)

    Dzianik, František

    2011-12-01

    The process of zone sedimentation is affected by many factors that are not possible to express analytically. For this reason, zone settling is evaluated in practice experimentally or by application of an empirical mathematical description of the process. The paper presents the development of an approximate model of zone settling, i.e. a general function which should properly approximate the behaviour of the settling process within its entire range and under various conditions. Furthermore, the specification of the model parameters by regression analysis of settling test results is shown. The suitability of the model is reviewed by means of graphical dependencies and statistical coefficients of correlation. The approximate model could also be useful in simplifying the process design of continuous settling tanks and thickeners.

  2. Classifying Drivers' Cognitive Load Using EEG Signals.

    PubMed

    Barua, Shaibal; Ahmed, Mobyen Uddin; Begum, Shahina

    2017-01-01

    A growing traffic safety issue is the effect of cognitive loading activities on traffic safety and driving performance. To monitor drivers' mental state, understanding cognitive load is important, since performing cognitively loading secondary tasks while driving, for example talking on the phone, can affect performance in the primary task, i.e. driving. Electroencephalography (EEG) is one of the reliable measures of cognitive load that can detect changes in instantaneous load and the effect of a cognitively loading secondary task. In this driving simulator study, a 1-back task is carried out while the driver performs three different simulated driving scenarios. This paper presents an EEG-based approach to classifying a driver's level of cognitive load using Case-Based Reasoning (CBR). The results show that for each individual scenario, as well as using data combined from the different scenarios, the CBR-based system achieved over 70% classification accuracy.

  3. Orientational analysis of planar fibre systems observed as a Poisson shot-noise process.

    PubMed

    Kärkkäinen, Salme; Lantuéjoul, Christian

    2007-10-01

    We consider two-dimensional fibrous materials observed as a digital greyscale image. The problem addressed is to estimate the orientation distribution of unobservable thin fibres from a greyscale image modelled by a planar Poisson shot-noise process. The classical stereological approach is not straightforward, because the point intensities of thin fibres along sampling lines may not be observable. For such cases, Kärkkäinen et al. (2001) suggested the use of scaled variograms determined from grey values along sampling lines in several directions. Their method is based on the assumption that the proportion between the scaled variograms and point intensities in all directions of sampling lines is constant. This assumption is proved to be valid asymptotically for Boolean models and dead leaves models, under some regularity conditions. In this work, we derive the scaled variogram and its approximations for a planar Poisson shot-noise process using the modified Bessel function. In the case of reasonably high resolution of the observed image, the scaled variogram has an approximate functional relation to the point intensity, and in the case of high resolution the relation is proportional. As the obtained relations are approximate, they are tested on simulations. The existing orientation analysis method based on the proportional relation is further tested on images with different resolutions. The new result, the asymptotic proportionality between the scaled variograms and the point intensities for a Poisson shot-noise process, completes the earlier results for the Boolean models and for the dead leaves models.

  4. Optimal placement of multiple types of communicating sensors with availability and coverage redundancy constraints

    NASA Astrophysics Data System (ADS)

    Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.

    2010-04-01

    Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
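
    The greedy heuristic described above can be sketched as iterative maximum-coverage selection: at each step, pick the candidate sensor that covers the most still-uncovered grid points. The candidate coverage sets below are illustrative; the paper's actual model derives them from terrain and signal-propagation calculations.

```python
# Hedged sketch of greedy sensor placement as maximum coverage.
# candidates maps sensor_id -> set of target points it covers;
# these sets are toy data, not a propagation model.

def greedy_placement(candidates, targets, budget):
    uncovered = set(targets)
    chosen = []
    while uncovered and len(chosen) < budget:
        # pick the sensor covering the most still-uncovered points
        best = max(candidates, key=lambda s: len(candidates[s] & uncovered))
        if not candidates[best] & uncovered:
            break  # no remaining sensor adds coverage
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered

cands = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
chosen, missed = greedy_placement(cands, {1, 2, 3, 4, 5, 6}, budget=3)
print(chosen, missed)  # → ['A', 'C'] set()
```

    The greedy choice gives a quick approximate answer to what is otherwise an NP-complete binary programming problem, which matches the paper's motivation for using it on large grids.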

  5. Monitoring a boreal wildfire using multi-temporal Radarsat-1 intensity and coherence images

    USGS Publications Warehouse

    Rykhus, Russell P.; Lu, Zhong

    2011-01-01

    Twenty-five C-band Radarsat-1 synthetic aperture radar (SAR) images acquired from the summer of 2002 to the summer of 2005 are used to map a 2003 boreal wildfire (B346) in the Yukon Flats National Wildlife Refuge, Alaska under conditions of near-persistent cloud cover. Our analysis is primarily based on the 15 SAR scenes acquired during arctic growing seasons. The Radarsat-1 intensity data are used to map the onset and progression of the fire, and interferometric coherence images are used to qualify burn severity and monitor post-fire recovery. We base our analysis of the fire on three test sites, two from within the fire and one unburned site. The B346 fire increased backscattered intensity values for the two burn study sites by approximately 5–6 dB and substantially reduced coherence from background levels of approximately 0.8 in unburned background forested areas to approximately 0.2 in the burned area. Using ancillary vegetation information from the National Land Cover Database (NLCD) and information on burn severity from Normalized Burn Ratio (NBR) data, we conclude that burn site 2 was more severely burned than burn site 1 and that C-band interferometric coherence data are useful for mapping landscape changes due to fire. Differences in burn severity and topography are determined to be the likely reasons for the observed differences in post-fire intensity and coherence trends between burn sites.

  6. Information Uncertainty to Compare Qualitative Reasoning Security Risk Assessment Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavez, Gregory M; Key, Brian P; Zerkle, David K

    2009-01-01

    The security risk associated with malevolent acts such as terrorism is often void of the historical data required for a traditional PRA. Most information available for conducting security risk assessments of these malevolent acts is obtained from subject matter experts as subjective judgements. Qualitative reasoning approaches such as approximate reasoning and evidential reasoning are useful for modeling the predicted risk from information provided by subject matter experts. Absent from these approaches is a consistent means to compare security risk assessment results. Associated with each predicted risk reasoning result is a quantifiable amount of information uncertainty which can be measured and used to compare the results. This paper explores using entropy measures to quantify the information uncertainty associated with conflict and non-specificity in the predicted reasoning results. The measured quantities of conflict and non-specificity can ultimately be used to compare qualitative reasoning results, which is important in triage studies and ultimately resource allocation. Straightforward extensions of previous entropy measures are presented here to quantify the non-specificity and conflict associated with security risk assessment results obtained from qualitative reasoning models.
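
    Entropy measures of this kind are commonly defined on a Dempster-Shafer belief structure. The sketch below uses two standard measures from the literature, Dubois-Prade non-specificity and Klir's strife measure of conflict, as stand-ins; the paper's own extensions are not given in the abstract and may differ.

```python
# Hedged sketch: generic uncertainty measures on a basic probability
# assignment (bpa) over focal sets.
#   Non-specificity (Dubois-Prade): N(m) = sum_A m(A) * log2 |A|
#   Strife/conflict (Klir):  S(m) = -sum_A m(A) * log2( sum_B m(B)*|A&B|/|B| )
# The toy bpa over risk levels is illustrative only.
import math

def nonspecificity(m):
    return sum(mass * math.log2(len(A)) for A, mass in m.items())

def strife(m):
    total = 0.0
    for A, mA in m.items():
        inner = sum(mB * len(A & B) / len(B) for B, mB in m.items())
        total -= mA * math.log2(inner)
    return total

# toy bpa: masses over focal sets of risk levels, summing to 1
m = {frozenset({"low"}): 0.5,
     frozenset({"low", "high"}): 0.3,
     frozenset({"high"}): 0.2}
print(round(nonspecificity(m), 3), round(strife(m), 3))  # → 0.3 0.614
```

    Two assessment results with equal predicted risk can then be ranked by how much non-specificity and conflict their underlying evidence carries, which is the comparison role the abstract describes.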

  7. 76 FR 22724 - Notice of Public Meeting of the Carrizo Plain National Monument Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-22

    ... School, located approximately 2 miles northwest of Soda Lake Road on Highway 58. The meeting will begin... special assistance such as sign language interpretation or other reasonable accommodations should contact...

  8. The Torsion of Members Having Sections Common in Aircraft Construction

    NASA Technical Reports Server (NTRS)

    Trayer, George W; March, H W

    1930-01-01

    Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine, by experiment and theoretical analysis, how accurate the more common of these formulas are and on what assumptions they are founded, and, if none of the proposed methods proved reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods, with formulas both rigorous and approximate.

  9. First-order shock acceleration in solar flares

    NASA Technical Reports Server (NTRS)

    Ellison, D. C.; Ramaty, R.

    1985-01-01

    The first order Fermi shock acceleration model is compared with specific observations where electron, proton, and alpha particle spectra are available. In all events, it is found that a single shock with a compression ratio as inferred from the low energy proton spectra can reasonably produce the full proton, electron, and alpha particle spectra. The model predicts that the acceleration time to a given energy will be approximately equal for electrons and protons and, for reasonable solar parameters, can be less than 1 sec to 100 MeV.

  10. Hispanic Youth--Dropout Prevention. Report of the Task Force on the Participation of Hispanic Students in Vocational Education Programs = La Joventud Hispana. Reporte del Grupo Especial. La Investigacion de la Participacion de los Estudiantes Hispanos en la Educacion Relativa a la Vocacion.

    ERIC Educational Resources Information Center

    Idaho State Dept. of Education, Boise. Div. of Vocational Education.

    An Idaho task force of Hispanic Americans, industry representatives, and education leaders studied the reasons Hispanic students were not enrolling in and completing vocational education programs. The task force sponsored a series of community meetings to identify reasons and solutions. Approximately 40-60 parents, students, and other interested…

  11. Approximate thermochemical tables for some C-H and C-H-O species

    NASA Technical Reports Server (NTRS)

    Bahn, G. S.

    1973-01-01

    Approximate thermochemical tables are presented for some C-H and C-H-O species and for some ionized species, supplementing the JANAF Thermochemical Tables for application to finite-chemical-kinetics calculations. The approximate tables were prepared by interpolation and extrapolation of limited available data, especially by interpolations over chemical families of species. Original estimations have been smoothed by use of a modification, for the CDC-6600 computer, of the Lewis Research Center PACl Program, which was originally prepared for the IBM-7094 computer. Summary graphs for various families show reasonably consistent curve-fit values, anchored by properties of existing species in the JANAF tables.

  12. An analysis of 12th-grade students' reasoning styles and competencies when presented with an environmental problem in a social and scientific context

    NASA Astrophysics Data System (ADS)

    Yang, Fang-Ying

    This study examined reasoning and problem solving by 182 12th-grade students in Taiwan when considering a socio-scientific issue regarding the use of nuclear energy. Students' information preferences, background characteristics, and eleven everyday scientific thinking skills were scrutinized. It was found that most participants displayed a willingness to take into account both scientific and social information in reasoning about the merits of a proposed construction of a nuclear power plant. Students' reasoning scores obtained from the "information reasoning style" test ranged from -0.5 to 1.917, and the distribution was approximately normal with mean and median at around 0.5. For the purpose of categorization, students whose scores were within one standard deviation from the mean were characterized as having an "equally disposed" reasoning style. One hundred and twenty-five subjects, about 69%, belonged to this category. Students with scores located at the two tails of the distribution were assigned to either the "scientifically oriented" or the "socially oriented" reasoning category. Among 23 background characteristics investigated using questionnaire data and ANOVA statistical analysis, only students' science performance and knowledge about nuclear energy were statistically significantly related to their information reasoning styles (p < 0.05). The assessed background characteristics addressed dimensions such as gender, academic performance, class difference, future education, career expectation, commitment to study, access to educational enrichment, family conditions, epistemological views about science, religion, and political party preference.
For everyday scientific thinking skills, interview data showed that both "scientifically oriented" students and those categorized as "equally disposed to using scientific and social scientific sources of data" displayed higher frequencies of skill use than "socially oriented" students, except in the use of the "multidisciplinary thinking" skill. Among the 11 skills assessed, the "scientifically oriented" students outperformed the "equally disposed" ones in the use of only 3 thinking skills: searching for or recalling scientific concepts/evidence, recognizing and evaluating alternatives, and drawing conclusions based on scientific intuition.

  13. Detecting Edges in Images by Use of Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2003-01-01

    A method of processing digital image data to detect edges includes the use of fuzzy reasoning. The method is completely adaptive and does not require any advance knowledge of an image. During initial processing of image data at a low level of abstraction, the nature of the data is indeterminate. Fuzzy reasoning is used in the present method because it affords an ability to construct useful abstractions from approximate, incomplete, and otherwise imperfect sets of data. Humans are able to make some sense of even unfamiliar objects that have imperfect high-level representations. It appears that, to perceive unfamiliar objects or to perceive familiar objects in imperfect images, humans apply heuristic algorithms to understand the images.
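
    A minimal illustration of the fuzzy-reasoning idea is to replace a hard edge threshold with a graded membership: each pixel's gradient magnitude maps to a degree of "edgeness" in [0, 1]. The membership function and constants below are illustrative assumptions, not the method of the paper.

```python
# Hedged sketch of fuzzy edge detection: compute a simple central-
# difference gradient and map its magnitude through a saturating
# fuzzy membership instead of a crisp threshold.

def edgeness(image):
    # image: 2D list of grey levels; returns a fuzzy edge map
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            g = (gx * gx + gy * gy) ** 0.5
            out[y][x] = g / (g + 50.0)  # membership in [0, 1); 50 is an assumed scale
    return out

flat = [[10] * 5 for _ in range(5)]                          # uniform region
step = [[10] * 5 if r < 2 else [200] * 5 for r in range(5)]  # sharp step edge
print(edgeness(flat)[2][2], round(edgeness(step)[2][2], 2))  # → 0.0 0.79
```

    Because the output is graded rather than binary, later high-level stages can weigh weak and strong edge evidence, which is the advantage the abstract attributes to fuzzy reasoning on imperfect data.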

  14. [Venous thromboembolic risk during repatriation for medical reasons].

    PubMed

    Stansal, A; Perrier, E; Coste, S; Bisconte, S; Manen, O; Lazareth, I; Conard, J; Priollet, P

    2015-12-01

    In France, approximately 3000 people are repatriated every year in civil situations organized by insurers. Repatriation also concerns French army soldiers. The literature is scarce on the topic of venous thromboembolic risk and its prevention during repatriation for medical reasons, a common situation. Most studies have focused on the association between venous thrombosis and travel, a relationship recognized more than 60 years ago but still subject to debate. Examining the degree of venous thromboembolic risk during repatriation for medical reasons must take into account several parameters related to the patient, to comorbid conditions and to repatriation modalities. Appropriate prevention must be determined on an individual basis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  15. Survey of HEPA filter applications and experience at Department of Energy sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carbaugh, E.H.

    1981-11-01

    Results indicated that approximately 58% of the filters surveyed were changed out in the 1977 to 1979 study period and some 18% of all filters were changed out more than once. Most changeouts (60%) were due to the existence of a high pressure drop across the filter, indicative of filter plugging. The next most recurrent reasons for changeout, and their percentage of changeouts, were leak test failure (15%) and preventive maintenance service life limit (12%). Average filter service life was calculated to be 3.0 years with a 2.0-year standard deviation. The labor required for filter changeout was calculated as 1.5 man-hours per filter changed. Filter failures occurred with approximately 12% of all installed filters. Most failures (60%) occurred for unknown reasons, and handling or installation damage accounted for an additional 20% of all failures. Media ruptures, filter frame failures and seal failures occurred with approximately equal frequency at 5 to 6% each. Subjective responses to the questionnaire indicate the main problems are: the need for improved acid- and moisture-resistant filters; filters more readily disposable as radioactive waste; improved personnel training in filter handling and installation; and the need for pretreatment of air prior to HEPA filtration.

  16. Patient views and correlates of radiotherapy omission in a population-based sample of older women with favorable-prognosis breast cancer.

    PubMed

    Shumway, Dean A; Griffith, Kent A; Hawley, Sarah T; Wallner, Lauren P; Ward, Kevin C; Hamilton, Ann S; Morrow, Monica; Katz, Steven J; Jagsi, Reshma

    2018-04-18

    The omission of radiotherapy (RT) after lumpectomy is a reasonable option for many older women with favorable-prognosis breast cancer. In the current study, we sought to evaluate patient perspectives regarding decision making about RT. Women aged 65 to 79 years with AJCC 7th edition stage I and II breast cancer who were reported to the Georgia and Los Angeles County Surveillance, Epidemiology, and End Results registries were surveyed (response rate, 70%) regarding RT decisions, the rationale for omitting RT, decision-making values, and understanding of disease recurrence risk. We also surveyed their corresponding surgeons (response rate, 77%). Patient characteristics associated with the omission of RT were evaluated using multilevel, multivariable logistic regression, accounting for patient clustering within surgeons. Of 999 patients, 135 omitted RT (14%). Older age, lower tumor grade, and having estrogen receptor-positive disease each were found to be strongly associated with omission of RT in multivariable analyses, whereas the number of comorbidities was not. Non-English speakers were more likely to omit RT (adjusted odds ratio, 5.9; 95% confidence interval, 1.4-24.5). The most commonly reported reasons for RT omission were that a physician advised the patient that it was not needed (54% of patients who omitted RT) and patient choice (41%). Risk of local disease recurrence was overestimated by all patients: by approximately 2-fold among those who omitted RT and by approximately 8-fold among those who received RT. The risk of distant disease recurrence was overestimated by approximately 3-fold on average. To some extent, decisions regarding RT omission are appropriately influenced by patient age, tumor grade, and estrogen receptor status, but do not appear to be optimally tailored according to competing comorbidities. Many women who are candidates for RT omission overestimate their risk of disease recurrence. Cancer 2018. © 2018 American Cancer Society. 

  17. Studying medicine – a cross-sectional questionnaire-based analysis of the motivational factors which influence graduate and undergraduate entrants in Ireland

    PubMed Central

    Sulong, Saadah; McGrath, Deirdre; Finucane, Paul; Horgan, Mary; O’Flynn, Siún

    2014-01-01

    Summary Objectives The number of places available in Ireland for graduate entry to medical school has steadily increased since 2006. Few studies have, however, characterized the motivational factors underlying the decision to study medicine via this route. We compared the factors motivating graduate entry (GE) versus undergraduate entry (UGE) students to choose medicine as a course of study. Design The present study was a quantitative cross-sectional questionnaire-based investigation. Setting The study was conducted in University College Cork and University of Limerick, Ireland. Participants It involved 185 GE and 120 UGE students. Outcome measures Questionnaires were distributed to students addressing the following areas: demographic/academic characteristics; factors influencing the selection of academic institution and motivation to study medicine; and the role of career guidance in choice of study. Results When asked to list reasons for selecting medicine, both groups listed a wish to help and work with people, and a desire to prevent and cure disease. UGE students were significantly more motivated by intellectual satisfaction, encouragement by family/friends, financial reasons, and professional independence. Approximately half of GE students selected their first degree with a view to potentially studying medicine in the future. GE and UGE students differed significantly with respect to sources consulted for career guidance and source of study information. Conclusions This study is the first systematic examination of study and career motivation in GE medical students since the programme was offered by Irish universities and provides insight into the reasons why graduate entrants in Ireland choose to study medicine via this route. PMID:25057383

  18. Equal Plate Charges on Series Capacitors?

    ERIC Educational Resources Information Center

    Illman, B. L.; Carlson, G. T.

    1994-01-01

    Provides a line of reasoning in support of the contention that the equal charge proposition is at best an approximation. Shows how the assumption of equal plate charge on capacitors in series contradicts the conservative nature of the electric field. (ZWH)

  19. An empirical relationship between mesoscale carbon monoxide concentrations and vehicular emission rates : final report.

    DOT National Transportation Integrated Search

    1979-01-01

    Presented is a relatively simple empirical equation that reasonably approximates the relationship between mesoscale carbon monoxide (CO) concentrations, areal vehicular CO emission rates, and the meteorological factors of wind speed and mixing height...

  20. Approximate Reasoning: Past, Present, Future

    DTIC Science & Technology

    1990-06-27

    This note presents a personal view of the state of the art in the representation and manipulation of imprecise and uncertain information by automated ... processing systems. To contrast their objectives and characteristics with the sound deductive procedures of classical logic, methodologies developed

  1. Advanced Concepts and Methods of Approximate Reasoning

    DTIC Science & Technology

    1989-12-01

    immeasurably by numerous conversations and discussions with Nadal Battle, Hamid Berenji, Piero Bonissone, Bernadette Bouchon-Meunier, Miguel Delgado, Di...comments of Claudi Alsina, Hamid Berenji, Piero Bonissone, Didier Dubois, Francesc Esteva, Oscar Firschein, Marty Fischler, Pascal Fua, Maria Angeles

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Wenkai; Ghosh, Priyarshini; Harrison, Mark

    The performance of traditional Hornyak buttons and two proposed variants for fast-neutron hodoscope applications was evaluated using Geant4. The Hornyak button is a ZnS(Ag)-based device previously deployed at the Idaho National Laboratory's TRansient REActor Test Facility (better known as TREAT) for monitoring fast neutrons emitted during pulsing of fissile fuel samples. Past use of these devices relied on pulse-shape discrimination to reduce the significant levels of background Cherenkov radiation. Proposed are two simple designs that reduce the overall light guide mass (here, polymethyl methacrylate or PMMA), employ silicon photomultipliers (SiPMs), and can be operated using pulse-height discrimination alone to eliminate background noise to acceptable levels. Geant4 was first used to model a traditional Hornyak button, and for assumed, hodoscope-like conditions, an intrinsic efficiency of 0.35% for mono-directional fission neutrons was predicted. The predicted efficiency is in reasonably good agreement with experimental data from the literature and, hence, served to validate the physics models and approximations employed. Geant4 models were then developed to optimize the materials and geometries of two alternatives to the Hornyak button, one based on a homogeneous mixture of ZnS(Ag) and PMMA, and one based on alternating layers of ZnS(Ag) and PMMA oriented perpendicular to the incident neutron beam. For the same radiation environment, optimized, 5-cm long (along the beam path) devices of the homogeneous and layered designs were predicted to have efficiencies of approximately 1.3% and 3.3%, respectively. For longer devices, i.e., lengths larger than 25 cm, these efficiencies were shown to peak at approximately 2.2% and 5.9%, respectively. Furthermore, both designs were shown to discriminate Cherenkov noise intrinsically by using an appropriate pulse-height discriminator level, i.e., pulse-shape discrimination is not needed for these devices.

  3. Experimental validation of a quasi-steady theory for the flow through the glottis

    NASA Astrophysics Data System (ADS)

    Vilain, C. E.; Pelorson, X.; Fraysse, C.; Deverge, M.; Hirschberg, A.; Willems, J.

    2004-09-01

    In this paper a theoretical description of the flow through the glottis based on a quasi-steady boundary layer theory is presented. The Thwaites method is used to solve the von Kármán equations within the boundary layers. In practice this makes the theory much easier to use compared to Pohlhausen's polynomial approximations. This theoretical description is evaluated on the basis of systematic comparison with experimental data obtained under steady flow or unsteady (oscillating) flow without and with moving vocal folds. Results tend to show that the theory reasonably explains the measured data except when unsteady or viscous terms become predominant. This happens particularly during the collision of the vocal folds.

  4. Gas Evolution Dynamics in Godunov-Type Schemes and Analysis of Numerical Shock Instability

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    1999-01-01

    In this paper we are going to study the gas evolution dynamics of the exact and approximate Riemann solvers, e.g., the Flux Vector Splitting (FVS) and the Flux Difference Splitting (FDS) schemes. Since the FVS scheme and the Kinetic Flux Vector Splitting (KFVS) scheme have the same physical mechanism and similar flux function, based on the analysis of the discretized KFVS scheme the weakness and advantage of the FVS scheme are closely observed. The subtle dissipative mechanism of the Godunov method in the 2D case is also analyzed, and the physical reason for shock instability, i.e., carbuncle phenomena and odd-even decoupling, is presented.

  5. Oxygenation level and hemoglobin concentration in experimental tumor estimated by diffuse optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Orlova, A. G.; Kirillin, M. Yu.; Volovetsky, A. B.; Shilyagina, N. Yu.; Sergeeva, E. A.; Golubiatnikov, G. Yu.; Turchin, I. V.

    2017-07-01

    Using diffuse optical spectroscopy, the oxygenation level and hemoglobin concentration in an experimental tumor were studied in comparison with normal muscle tissue in mice. Subcutaneously growing SKBR-3 was used as a tumor model. A continuous-wave fiber-probe diffuse optical spectroscopy system was employed. The optical-properties extraction approach was based on the diffusion approximation. Decreased blood oxygen saturation level and increased total hemoglobin content were demonstrated in the neoplasm. The main reason for these differences between tumor and normal tissue was a significant elevation of deoxyhemoglobin concentration in SKBR-3. The method can be useful for the diagnosis of tumors as well as for studying blood flow parameters of tumor models with different angiogenic properties.

  6. New device architecture of a thermoelectric energy conversion for recovering low-quality heat

    NASA Astrophysics Data System (ADS)

    Kim, Hoon; Park, Sung-Geun; Jung, Buyoung; Hwang, Junphil; Kim, Woochul

    2014-03-01

    Low-quality heat is generally discarded for economic reasons; a low-cost energy conversion device, judged by price per watt ($/W), is required to recover this waste heat. Thin-film based thermoelectric devices could be a superior alternative for this purpose because of their low material consumption; however, the power generated in conventional thermoelectric device architecture is negligible due to the small temperature drop across the thin film. To overcome this challenge, we propose a new device architecture and demonstrate an approximately 60 Kelvin temperature difference using a thick polymer nanocomposite. The temperature difference was achieved by separating the thermal path from the electrical path, whereas in conventional device architecture both electrical charges and thermal energy share the same path. We also applied this device to harvest body heat and confirmed its usability as an energy conversion device for recovering low-quality heat.

  7. Cultivation of an Interdisciplinary, Research-Based Neuroscience Minor at Hope College

    PubMed Central

    Chase, Leah A.; Stewart, Joanne; Barney, Christopher C.

    2006-01-01

    Hope College is an undergraduate liberal arts college with an enrollment of approximately 3,000 students. In the spring of 2005, we began to offer an interdisciplinary neuroscience minor program that is open to all students. The objective of this program is to introduce students to the field of neuroscience, and to do so in such a way as to broaden students’ disciplinary perspectives, enhance communication and quantitative skills, and increase higher-level reasoning skills by encouraging collaboration among students who have different disciplinary backgrounds. This is a research-based program that culminates in a one-year capstone research course. Here we present the story of the program development at Hope College, including a description of our newly developed curriculum, our initial assessment data, and the lessons we have learned in developing this program. PMID:23493857

  8. Prefrontal and medial temporal contributions to episodic memory-based reasoning.

    PubMed

    Suzuki, Chisato; Tsukiura, Takashi; Mochizuki-Kawai, Hiroko; Shigemune, Yayoi; Iijima, Toshio

    2009-03-01

    Episodic memory retrieval and reasoning are fundamental psychological components of our daily lives. Although previous studies have investigated the brain regions associated with these processes separately, the neural mechanisms of reasoning based on episodic memory retrieval are largely unknown. Here, we investigated the neural correlates underlying episodic memory-based reasoning using functional magnetic resonance imaging (fMRI). During fMRI scanning, subjects performed three tasks: reasoning, episodic memory retrieval, and episodic memory-based reasoning. We identified dissociable activations related to reasoning, episodic memory retrieval, and linking processes between the two. Regions related to reasoning were identified in the left ventral prefrontal cortices (PFC), and those related to episodic memory retrieval were found in the right medial temporal lobe (MTL) regions. In addition, activations predominant in the linking process between the two were found in the left dorsal and right ventral PFC. These findings suggest that episodic memory-based reasoning is composed of at least three processes, i.e., reasoning, episodic memory retrieval, and linking processes between the two, and that activation of both the PFC and MTL is crucial in episodic memory-based reasoning. These findings are the first to demonstrate that PFC and MTL regions contribute differentially to each process in episodic memory-based reasoning.

  9. Testing approximate theories of first-order phase transitions on the two-dimensional Potts model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, C.; Pandit, R.

    The two-dimensional, q-state (q > 4) Potts model is used as a testing ground for approximate theories of first-order phase transitions. In particular, the predictions of a theory analogous to the Ramakrishnan-Yussouff theory of freezing are compared with those of ordinary mean-field (Curie-Weiss) theory. It is found that the Curie-Weiss theory is a better approximation than the Ramakrishnan-Yussouff theory, even though the former neglects all fluctuations. It is shown that the Ramakrishnan-Yussouff theory overestimates the effects of fluctuations in this system. The reasons behind the failure of the Ramakrishnan-Yussouff approximation and the suitability of using the two-dimensional Potts model as a testing ground for these theories are discussed.

  10. Measuring Pilot Workload in a Moving-base Simulator. Part 2: Building Levels of Workload

    NASA Technical Reports Server (NTRS)

    Kantowitz, B. H.; Hart, S. G.; Bortolussi, M. R.; Shively, R. J.; Kantowitz, S. C.

    1984-01-01

    Studies of pilot behavior in flight simulators often use a secondary task as an index of workload. It is routine to regard flying as the primary task and some less complex task as the secondary task. While this assumption is quite reasonable for most secondary tasks used to study mental workload in aircraft, the treatment of flying a simulator through some carefully crafted flight scenario as a unitary task is less justified. The present research acknowledges that total mental workload depends upon the specific nature of the sub-tasks that a pilot must complete. As a first approximation, flight tasks were divided into three levels of complexity. The simplest level (called the Base Level) requires elementary maneuvers that do not utilize all the degrees of freedom of which an aircraft, or a moving-base simulator, is capable. The second level (called the Paired Level) requires the pilot to simultaneously execute two Base Level tasks. The third level (called the Complex Level) imposes three simultaneous constraints upon the pilot.

  11. Safety profile of platinum-based chemotherapy in the treatment of advanced non-small cell lung cancer in elderly patients.

    PubMed

    Rossi, Antonio; Maione, Paolo; Gridelli, Cesare

    2005-11-01

    Non-small cell lung cancer (NSCLC) may be considered typical of advanced age. More than 50% of NSCLC patients are diagnosed at > 65 years of age and approximately one-third of all patients are > 70 years of age. Elderly patients tolerate chemotherapy poorly compared with their younger counterparts because of the progressive reduction of organ function and comorbidities related to age. For this reason, these patients are often not considered eligible for aggressive platinum-based chemotherapy, the standard medical treatment for advanced NSCLC. In clinical practice, single-agent chemotherapy should remain the standard treatment. The feasibility of platinum-based chemotherapy remains an open issue and has to be proven prospectively. Moreover, a multidimensional geriatric assessment for individualised treatment choice in elderly NSCLC patients is mandatory. This review focuses on the currently available evidence for the treatment of elderly patients affected by advanced NSCLC with regard to the role and safety of platinum-based chemotherapy.

  12. Unanticipated benefits of automotive emission control: reduction in fatalities by motor vehicle exhaust gas.

    PubMed

    Shelef, M

    1994-05-23

    In 1970, before the implementation of strict controls on emissions in motor vehicle exhaust gas (MVEG), the annual USA incidence of fatal accidents by carbon monoxide in the MVEG was approximately 800 and that of suicides approximately 2000 (somewhat less than 10% of total suicides). In 1987, there were approximately 400 fatal accidents and approximately 2700 suicides by MVEG. Accounting for the growth in population and vehicle registration, the yearly lives saved in accidents by MVEG were approximately 1200 in 1987 and avoided suicides approximately 1400. The decrease in accidents continues unabated while the decrease in expected suicides by MVEG reached a plateau in 1981-1983. The reasons for this disparity are discussed. Juxtaposition of these results with the projected cancer risk avoidance of less than 500 annually in 2005 (as compared with 1986) plainly shows that, in terms of mortality, the unanticipated benefits of emission control far overshadow the intended benefits. With the spread of MVEG control these benefits will accrue worldwide.

  13. Convective Dynamics and Disequilibrium Chemistry in the Atmospheres of Giant Planets and Brown Dwarfs

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.

    2018-02-01

    Disequilibrium chemical processes significantly affect the spectra of substellar objects. To study these effects, dynamical disequilibrium has been parameterized using the quench and eddy diffusion approximations, but little work has been done to explore how these approximations perform under realistic planetary conditions in different dynamical regimes. As a first step toward addressing this problem, we study the localized, small-scale convective dynamics of planetary atmospheres by direct numerical simulation of fully compressible hydrodynamics with reactive tracers using the Dedalus code. Using polytropically stratified, plane-parallel atmospheres in 2D and 3D, we explore the quenching behavior of different abstract chemical species as a function of the dynamical conditions of the atmosphere as parameterized by the Rayleigh number. We find that in both 2D and 3D, chemical species quench deeper than would be predicted based on simple mixing-length arguments. Instead, it is necessary to employ length scales based on the chemical equilibrium profile of the reacting species in order to predict quench points and perform chemical kinetics modeling in 1D. Based on the results of our simulations, we provide a new length scale, derived from the chemical scale height, that can be used to perform these calculations. This length scale is simple to calculate from known chemical data and makes reasonable predictions for our dynamical simulations.
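    The quench approximation discussed above can be sketched in a few lines. The function and its timescale inputs are illustrative assumptions, not the authors' code: abundances freeze in ("quench") at the deepest level where the mixing timescale (often taken as t_mix = L**2 / K_zz for a length scale L and eddy diffusion coefficient K_zz) drops below the chemical timescale.

    ```python
    def quench_pressure(levels):
        """Return the pressure of the quench level: the deepest level at which
        vertical mixing outpaces the chemistry, so abundances freeze in.

        levels: iterable of (pressure, t_chem, t_mix) tuples ordered from
        deep to shallow in the atmosphere.
        """
        for pressure, t_chem, t_mix in levels:
            if t_mix < t_chem:
                return pressure
        return None  # never quenched over the sampled levels
    ```

    The choice of the length scale L entering t_mix is exactly what the paper revises: mixing-length arguments predict too shallow a quench point, while a scale built from the chemical equilibrium profile does better.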

  14. 77 FR 64367 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-19

    ... burden associated with money market funds' adoption of certain policies and procedures aimed at ensuring that these funds meet reasonably foreseeable shareholder redemptions (the ``general liquidity... complying with the general liquidity requirement. Approximately 10 money market funds were newly registered...

  15. A Summary of Research in Science Education--1984.

    ERIC Educational Resources Information Center

    Lawson, Anton E.; And Others

    This review covers approximately 300 studies, including journal articles, dissertations, and papers presented at conferences. The studies are organized under these major headings: status surveys; scientific reasoning; elementary school science (student achievement, student conceptions/misconceptions, student curiosity/attitudes, teaching methods,…

  16. Holocaust II?

    ERIC Educational Resources Information Center

    Wolfensberger, Wolf

    1984-01-01

    The author estimates that approximately 200,000 lives of devalued disabled people (including infants and older adults) are taken or abbreviated annually through euthanasia and termination of life-supporting measures. He cites possible reasons for limited public outcry against what he compares with the holocaust. (CL)

  17. SPARC GENERATED CHEMICAL PROPERTIES DATABASE FOR USE IN NATIONAL RISK ASSESSMENTS

    EPA Science Inventory

    The SPARC (Sparc Performs Automated Reasoning in Chemistry) Model was used to provide temperature dependent algorithms used to estimate chemical properties for approximately 200 chemicals of interest to the promulgation of the Hazardous Waste Identification Rule (HWIR) . Proper...

  18. Measuring the willingness to pay user fees for interpretive services at a national forest

    NASA Astrophysics Data System (ADS)

    Goldhor-Wilcock, Barbara Ashley

    An understanding of willingness to pay (WTP) for nonmarket environmental goods is useful for planning and policy, but difficult to determine. WTP for interpretive services was investigated using interviews with 361 participants in guided nature tours. Immediately after the tour, participants were asked to state their WTP for the tour. Responses were predominantly $5 (42%), $2 (14%) and $10 (13%). A predetermined amount was added to the open-ended (OE) WTP offer and respondents were asked if they were willing to pay a larger amount. Acceptance of the larger amount depended strongly on the relative increase over the initial WTP. If the increase was smaller than the initial offer, most respondents agreed, whereas if the increment was larger, most did not agree, suggesting that the initial offer was approximately half of the true WTP. The two WTP questions were used to define lower and upper bounds for each respondent's true WTP. A censored interval regression was used to estimate a WTP distribution with mean $11.30 and median $10.00. The median is twice that of the OE WTP, further suggesting that the OE response understated value by 50 percent. The estimated true WTP distribution and the OE WTP distribution have a weak, but statistically significant, dependence on some demographic, travel, and benefit variables, although these relations have negligible practical significance over the observed range of the variables. To evaluate whether the WTP amounts were based on a true economic tradeoff, respondents were asked to explain their WTP responses. For the initial OE question, 38% gave explanations that could be interpreted as an economic tradeoff, whereas 33% gave reasons that were clearly irrelevant. For the second, dichotomous choice (DC), question, 59% gave reasons suggesting a relevant economic judgement. A DC question may provoke apparently relevant answers, regardless of the underlying reasoning (a majority simply said "it was (not) worth it").
The DC reasoning may also be influenced by the preceding OE question, which provides a comparative base. Combining OE and DC questions in a single survey may encourage relevant reasoning, while also helping to identify the true WTP and consumer surplus.
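    How the paired OE/DC responses bound each respondent's true WTP (the bounds that feed the censored interval regression) can be sketched as follows; the function is an illustrative assumption, not the survey's actual coding scheme.

    ```python
    def wtp_interval(open_ended, increment, accepted_higher):
        """Bounds on a respondent's true willingness to pay (WTP).

        open_ended: the stated open-ended (OE) WTP amount
        increment: predetermined amount added for the dichotomous-choice follow-up
        accepted_higher: True if the respondent agreed to pay the larger amount

        Returns (lower, upper); upper is None when only a lower bound is known.
        """
        higher = open_ended + increment
        if accepted_higher:
            return (higher, None)    # true WTP is at least the larger amount
        return (open_ended, higher)  # true WTP lies between the two offers
    ```

    For example, a respondent who states $5 and refuses $10 is assigned the interval [$5, $10]; one who accepts $10 is right-censored at $10.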

  19. Overcoming rule-based rigidity and connectionist limitations through massively-parallel case-based reasoning

    NASA Technical Reports Server (NTRS)

    Barnden, John; Srinivas, Kankanahalli

    1990-01-01

    Symbol manipulation as used in traditional Artificial Intelligence has been criticized by neural net researchers for being excessively inflexible and sequential. On the other hand, the application of neural net techniques to the types of high-level cognitive processing studied in traditional artificial intelligence presents major problems as well. A promising way out of this impasse is to build neural net models that accomplish massively parallel case-based reasoning. Case-based reasoning, which has received much attention recently, is essentially the same as analogy-based reasoning, and avoids many of the problems leveled at traditional artificial intelligence. Further problems are avoided by doing many strands of case-based reasoning in parallel, and by implementing the whole system as a neural net. In addition, such a system provides an approach to some aspects of the problems of noise, uncertainty and novelty in reasoning systems. The current neural net system (Conposit), which performs standard rule-based reasoning, is being modified into a massively parallel case-based reasoning version.

  20. Reaction-Diffusion-Delay Model for EPO/TNF-α Interaction in articular cartilage lesion abatement

    PubMed Central

    2012-01-01

    Background Injuries to articular cartilage result in the development of lesions that form on the surface of the cartilage. Such lesions are associated with articular cartilage degeneration and osteoarthritis. The typical injury response often causes collateral damage, primarily an effect of inflammation, which results in the spread of lesions beyond the region where the initial injury occurs. Results and discussion We present a minimal mathematical model based on known mechanisms to investigate the spread and abatement of such lesions. In particular we represent the "balancing act" between pro-inflammatory and anti-inflammatory cytokines that is hypothesized to be a principal mechanism in the expansion properties of cartilage damage during the typical injury response. We present preliminary results of in vitro studies that confirm the anti-inflammatory activities of the cytokine erythropoietin (EPO). We assume that the diffusion of cytokines determines the spatial behavior of injury response and lesion expansion, so that a reaction-diffusion system involving chemical species and chondrocyte cell state population densities is a natural way to represent cartilage injury response. We present computational results using the mathematical model showing that our representation is successful in capturing much of the interesting spatial behavior of injury-associated lesion development and abatement in articular cartilage. Two cases are simulated: the first corresponds to the parameter values listed in Table 1, while the second has parameter values as in Table 2. Further, we discuss the use of this model to study the possibility of using EPO as a therapy for reducing the amount of inflammation-induced collateral damage to cartilage during the typical injury response.
    Table 1. Model parameter values corresponding to the simulations in Figure 5:

        Parameter  Value     Units                         Reason
        D_R        0.1       cm^2/day                      Determined from [13]
        D_M        0.05      cm^2/day                      Determined from [13]
        D_F        0.05      cm^2/day                      Determined from [13]
        D_P        0.005     cm^2/day                      Determined from [13]
        δ_R        0.01      1/day                         Approximated
        δ_M        0.6       1/day                         Approximated
        δ_F        0.6       1/day                         Approximated
        δ_P        0.0087    1/day                         Approximated
        δ_U        0.0001    1/day                         Approximated
        σ_R        0.0001    micromolar·cm^2/(day·cells)   Approximated
        σ_M        0.00001   micromolar·cm^2/(day·cells)   Approximated
        σ_F        0.0001    micromolar·cm^2/(day·cells)   Approximated
        σ_P        0         micromolar·cm^2/(day·cells)   Case with no anti-inflammatory response
        Λ          10        micromolar                    Approximated
        λ_R        10        micromolar                    Approximated
        λ_M        10        micromolar                    Approximated
        λ_F        10        micromolar                    Approximated
        λ_P        10        micromolar                    Approximated
        α          0         1/day                         Case with no anti-inflammatory response
        β_1        100       1/day                         Approximated
        β_2        50        1/day                         Approximated
        γ          10        1/day                         Approximated
        ν          0.5       1/day                         Approximated
        μ_SA       1         1/day                         Approximated
        μ_DN       0.5       1/day                         Approximated
        τ_1        0.5       days                          Taken from [5]
        τ_2        1         days                          Taken from [5]

    Table 2. Model parameter values corresponding to the simulations in Figure 6. All parameters are as in Table 1 except σ_P = 0.001 micromolar·cm^2/(day·cells) and α = 10 1/day (both approximated), giving the case with an anti-inflammatory response.

    Conclusions The mathematical model presented herein suggests that anti-inflammatory cytokines such as EPO are not only necessary to prevent chondrocytes signaled by pro-inflammatory cytokines from entering apoptosis; they may also influence how chondrocytes respond to signaling by pro-inflammatory cytokines. Reviewers This paper has been reviewed by Yang Kuang, James Faeder and Anna Marciniak-Czochra. PMID:22353555
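    One ingredient of such a reaction-diffusion system, a single diffusing cytokine with linear decay, can be sketched with an explicit finite-difference step. This is a minimal illustration of the numerical setup, not the paper's full multi-species delay model.

    ```python
    def diffuse_decay_step(u, D, delta, dx, dt):
        """One explicit-Euler step of du/dt = D * u_xx - delta * u on a 1D grid
        with no-flux boundaries. u is a list of concentrations per grid cell."""
        n = len(u)
        new = list(u)
        for i in range(n):
            # No-flux boundaries: the ghost cell mirrors the boundary cell itself
            left = u[i - 1] if i > 0 else u[i]
            right = u[i + 1] if i < n - 1 else u[i]
            new[i] = u[i] + dt * (D * (left - 2 * u[i] + right) / dx**2 - delta * u[i])
        return new
    ```

    With delta = 0, mass is conserved and an initial spike spreads out symmetrically; the decay term then models cytokine clearance at the δ rates listed in the tables.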

  1. The Mine Safety and Health Administration's criterion threshold value policy increases miners' risk of pneumoconiosis.

    PubMed

    Weeks, James L

    2006-06-01

    The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. This policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process, and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit, is contrary to the precautionary principle, is not a fair sharing of the burden of uncertainty, and employs an inappropriate standard of proof. MSHA's own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measured exposure that exceeds the exposure limit.
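    The statistical reasoning behind the CTV can be sketched as a one-tailed 95% criterion on the measurement error. Both the formula and the coefficient of variation below are illustrative assumptions, not MSHA's published method.

    ```python
    def criterion_threshold(exposure_limit, cv, z=1.645):
        """Illustrative criterion threshold value (CTV): a citation issues only
        when the measured exposure exceeds the limit by more than z one-sided
        standard errors of the sampling/analytical method, where cv is the
        method's coefficient of variation. Not MSHA's exact formula."""
        return exposure_limit * (1 + z * cv)
    ```

    The effect the article criticizes is visible directly: any measurement falling between the statutory limit and the CTV exceeds the limit yet draws no citation, which is how the policy raises the effective exposure limit.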

  2. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    NASA Technical Reports Server (NTRS)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds and therefore making control of 'reflex-type' motions feasible. The paper first discusses the use of these boards and an approach, based on the superposition of elemental sensor-based behaviors, for developing qualitative reasoning schemes that emulate human-like navigation in a-priori unknown environments. It then describes how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid, providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment.
Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and some human-like reasoning schemes which may include as few as six elemental behaviors embodied in fourteen qualitative rules.
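    The flavor of such behavior-level fuzzy rules can be sketched with a two-rule, Sugeno-style steering example. The membership function and rule outputs are invented for illustration; they are not the rule base running on the VLSI boards.

    ```python
    def near(distance, reach=2.0):
        """Fuzzy membership of 'obstacle is near': 1 at contact, 0 beyond reach."""
        return max(0.0, 1.0 - distance / reach)

    def steer(d_left, d_right):
        """Sugeno-style weighted average of two elemental behaviors:
             IF obstacle near on the left  THEN turn right (+1)
             IF obstacle near on the right THEN turn left  (-1)
        Returns a steering command in [-1, 1]; 0 means go straight."""
        w_right = near(d_left)   # left obstacle activates 'turn right'
        w_left = near(d_right)   # right obstacle activates 'turn left'
        total = w_right + w_left
        if total == 0.0:
            return 0.0  # no rule fires: keep heading
        return (w_right * 1.0 + w_left * -1.0) / total
    ```

    On the hardware described above, all such rules fire in parallel, so the rule base evaluates in a single pass rather than rule by rule.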

  3. A Survey of Runners' Attitudes Toward and Experiences With Minimally Shod Running.

    PubMed

    Cohler, Marissa H; Casey, Ellen

    2015-08-01

    To investigate the characteristics, perceptions, motivating factors, experiences, and injury rates of runners who practice minimally shod running. Survey: Web-based questionnaire. Five hundred sixty-six members of the Chicago Area Runner's Association. A link to a 31-question online survey was e-mailed to members of the Chicago Area Runner's Association. Questions covered demographic information, use of minimalist-style running shoes (MSRS), injury rates, and change in pain. Outcome measures were use of MSRS, occurrence or improvement of injury/pain, regions of injury/pain, and reasons for using or not using MSRS. One hundred seventy-five (31%) respondents had practiced minimally shod running, and the most common motivating factor was to decrease injuries and/or pain. Fifty-one respondents (29%) suffered an injury or pain while wearing MSRS, with the most common body part involved being the foot. Fifty-four respondents (31%) had an injury that improved after adopting minimally shod running; the most common area involved was the knee. One hundred twenty respondents (69%) were still using MSRS. Of those who stopped using MSRS, the main reason was development of an injury or pain. The most common reason that respondents have not tried minimally shod running is a fear of developing an injury. This survey-based study demonstrated that the use of MSRS is common, largely as the result of a perception that they may reduce injuries or pain. Reductions and occurrences of injury/pain with minimally shod running were reported in approximately equal numbers. The most common site of reported injury/pain reduction was the knee, whereas the most common reported site of injury/pain occurrence was the foot. Fear of developing pain or injury is the most common reason runners are reluctant to try minimally shod running. Copyright © 2015 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.

    Outdoor sensor-based operation of autonomous robots has proven to be an extremely challenging problem, mainly because of the difficulties encountered when attempting to represent the many uncertainties which are always present in the real world. These uncertainties are primarily due to sensor imprecisions and unpredictability of the environment, i.e., lack of full knowledge of the environment characteristics and dynamics. Two basic principles, or philosophies, and their associated methodologies are proposed in an attempt to remedy some of these difficulties. The first principle is based on the concept of a "minimal model" for accomplishing given tasks and proposes to utilize only the minimum level of information and precision necessary to accomplish elemental functions of complex tasks. This approach diverges completely from the direction taken by most artificial vision studies, which conventionally call for crisp and detailed analysis of every available component in the perception data. The paper first reviews the basic concepts of this approach and discusses its pragmatic feasibility when embodied in a behaviorist framework. The second principle deals with implicit representation of uncertainties using Fuzzy Set Theory-based approximations and approximate reasoning, rather than explicit (crisp) representation through calculation and conventional propagation techniques. A framework which merges these principles and approaches is presented, and its application to the problem of sensor-based outdoor navigation of a mobile robot is discussed. Results of navigation experiments with a real car in actual outdoor environments are also discussed to illustrate the feasibility of the overall concept.

  5. Overview of psychiatric ethics IV: the method of casuistry.

    PubMed

    Robertson, Michael; Ryan, Christopher; Walter, Garry

    2007-08-01

    The aim of this paper is to describe the method of ethical analysis known as casuistry and consider its merits as a basis of ethical deliberation in psychiatry. Casuistry approximates the legal arguments of common law. It examines ethical dilemmas by adopting a taxonomic approach to 'paradigm' cases, using a technique akin to that of normative analogical reasoning. Casuistry offers a useful method of ethical reasoning, providing a practical means of evaluating the merits of a particular course of action in a particular clinical situation. As a method of ethical reasoning in psychiatry, however, casuistry suffers from a paucity of paradigm cases and from its failure to fully contextualize ethical dilemmas, since it relies on common morality theory as its basis.

  6. A statistical test of the stability assumption inherent in empirical estimates of economic depreciation.

    PubMed

    Shriver, K A

    1986-01-01

    Realistic estimates of economic depreciation are required for analyses of tax policy, economic growth and production, and national income and wealth. The purpose of this paper is to examine the stability assumption underlying the econometric derivation of empirical estimates of economic depreciation for industrial machinery and equipment. The results suggest that economic depreciation rates may be reasonably stable over time. Thus, the assumption of a constant rate of economic depreciation may be a reasonable approximation for further empirical economic analyses.
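    Under the constant-rate assumption the paper examines, asset value declines geometrically with age; a one-line sketch (function and sample figures are illustrative, not from the study):

    ```python
    def book_value(acquisition_cost, annual_rate, years):
        """Remaining value under a constant (geometric) rate of economic
        depreciation: value falls by the same fraction every year."""
        return acquisition_cost * (1 - annual_rate) ** years
    ```

    For example, equipment bought for 1000 and depreciating at a constant 10% per year is worth 900 after one year and 810 after two; the empirical question is whether such a single rate holds across time periods.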

  7. High-order tracking differentiator based adaptive neural control of a flexible air-breathing hypersonic vehicle subject to actuators constraints.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen

    2015-09-01

    In this paper, an adaptive neural controller is proposed for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on a high-order tracking differentiator (HTD). By utilizing a functional decomposition methodology, the dynamic model is reasonably decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic-inversion-based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural altitude controller that is much simpler than those derived from back-stepping is addressed, based on the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning-parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. In particular, a novel auxiliary system is introduced to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
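    The idea behind a tracking differentiator can be illustrated with a linear second-order version: one state tracks the input signal while the other converges to an estimate of its derivative without explicit (noise-amplifying) differencing. The paper's HTD is higher-order and nonlinear; this sketch and its gains are assumptions for illustration only.

    ```python
    def tracking_differentiator(signal, dt=0.001, r=50.0):
        """Linear second-order tracking differentiator:
             x1' = x2
             x2' = -r^2 (x1 - v) - 2 r x2
        x1 tracks the input v, x2 estimates dv/dt; r sets the bandwidth.
        Returns the list of (x1, x2) pairs, integrated by explicit Euler."""
        x1, x2 = signal[0], 0.0
        estimates = []
        for v in signal:
            x1 += dt * x2
            x2 += dt * (-r * r * (x1 - v) - 2.0 * r * x2)
            estimates.append((x1, x2))
        return estimates
    ```

    Fed a unit-slope ramp, x2 settles near 1, the ramp's true derivative; larger r tracks faster but passes more measurement noise through to the estimate.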

  8. Gas-phase geometry optimization of biological molecules as a reasonable alternative to a continuum environment description: fact, myth, or fiction?

    PubMed

    Sousa, Sérgio Filipe; Fernandes, Pedro Alexandrino; Ramos, Maria João

    2009-12-31

    Gas-phase optimization of single biological molecules and of small active-site biological models has become a standard approach in first principles computational enzymology. The important role played by the surrounding environment (solvent, enzyme, both) is normally only accounted for through higher-level single point energy calculations performed using a polarizable continuum model (PCM) and an appropriate dielectric constant with the gas-phase-optimized geometries. In this study we analyze this widely used approximation, by comparing gas-phase-optimized geometries with geometries optimized with different PCM approaches (and considering different dielectric constants) for a representative data set of 20 very important biological molecules--the 20 natural amino acids. A total of 323 chemical bonds and 469 angles present in standard amino acid residues were evaluated. The results show that the use of gas-phase-optimized geometries can in fact be quite a reasonable alternative to the use of the more computationally intensive continuum optimizations, providing a good description of bond lengths and angles for typical biological molecules, even for charged amino acids, such as Asp, Glu, Lys, and Arg. This approximation is particularly successful if the protonation state of the biological molecule could be reasonably described in vacuum, a requirement that was already necessary in first principles computational enzymology.

  9. Hazard ratio estimation and inference in clinical trials with many tied event times.

    PubMed

    Mehrotra, Devan V; Zhang, Yiwei

    2018-06-13

    The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.
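    The difference between the two tie-handling approximations can be made concrete by writing out the log partial-likelihood contribution at a single event time with d tied events. This is a from-scratch illustration of the standard formulas, not code from the paper.

    ```python
    import math

    def tied_loglik(risk_scores, event_scores):
        """Log partial-likelihood contribution at one event time with d tied
        events, under the Breslow and Efron tie-handling approximations.

        risk_scores:  exp(beta'x) for every subject at risk at this time
        event_scores: exp(beta'x) for the d subjects who had the event
        """
        d = len(event_scores)
        numerator = sum(math.log(s) for s in event_scores)
        R = sum(risk_scores)   # total score over the risk set
        D = sum(event_scores)  # total score over the tied events
        # Breslow: the full risk set appears in all d denominator factors
        breslow = numerator - d * math.log(R)
        # Efron: a fraction j/d of the tied-event mass is removed at step j
        efron = numerator - sum(math.log(R - (j / d) * D) for j in range(d))
        return breslow, efron
    ```

    Because Efron progressively discounts the tied events from the denominator, it reduces to the exact partial likelihood when ties are sparse, whereas Breslow keeps the full risk set in every factor and biases the hazard-ratio estimate toward 1 as ties accumulate.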

  10. Invariant patterns in crystal lattices: Implications for protein folding algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.; ISTRAIL,SORIN

    2000-06-01

    Crystal lattices are infinite periodic graphs that occur naturally in a variety of geometries and which are of fundamental importance in polymer science. Discrete models of protein folding use crystal lattices to define the space of protein conformations. Because various crystal lattices provide discretizations of the same physical phenomenon, it is reasonable to expect that there will exist invariants across lattices related to fundamental properties of the protein folding process. This paper considers whether performance-guaranteed approximability is such an invariant for HP lattice models. The authors define a master approximation algorithm that has provable performance guarantees provided that a specific sublattice exists within a given lattice. They describe a broad class of crystal lattices that are approximable, which further suggests that approximability is a general property of HP lattice models.

  11. Modeling and analysis of solar distributed generation

    NASA Astrophysics Data System (ADS)

    Ortiz Rivera, Eduardo Ivan

    Recent changes in the global economy are having a large impact on our daily life. The price of oil is increasing and reserves are shrinking every day. Dramatic demographic changes are also affecting the viability of the electric infrastructure and, ultimately, the economic future of the industry. These are some of the reasons that many countries are looking to alternative energy sources for producing electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracking (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions, and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and prior knowledge of the load or load-matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work.
The main reasons to develop these algorithms are monitoring climate conditions, eliminating temperature and solar irradiance sensors, reducing cost for a photovoltaic inverter system, and developing new algorithms to be integrated with maximum power point tracking algorithms. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.
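    The maximum-power-point problem itself can be sketched with a toy single-diode-style I-V curve and a brute-force scan of the voltage range. The curve shape and all constants below are invented for illustration; they are not the author's PVM model or MPPT algorithms.

    ```python
    import math

    def pv_current(v, isc=5.0, voc=21.0, a=1.6):
        """Toy single-diode-style PV curve: current falls off exponentially
        as voltage approaches open-circuit voltage voc (illustrative only)."""
        i0 = isc / (math.exp(voc / a) - 1.0)  # chosen so current is 0 at voc
        return isc - i0 * (math.exp(v / a) - 1.0)

    def mpp_scan(step=0.01, voc=21.0):
        """Locate the maximum power point by scanning the voltage range.
        Real MPPT algorithms avoid this exhaustive scan, but the target
        (the voltage maximizing p = v * i) is the same."""
        best_v, best_p = 0.0, 0.0
        v = 0.0
        while v <= voc:
            p = v * pv_current(v)
            if p > best_p:
                best_v, best_p = v, p
            v += step
        return best_v, best_p
    ```

    The knee of the curve, where the v * i product peaks, sits a few volts below open circuit; MPPT algorithms like the ones described above aim to hold the operating point there as irradiance and temperature shift the curve.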

  12. "Adiabatic-hindered-rotor" treatment of the parahydrogen-water complex.

    PubMed

    Zeng, Tao; Li, Hui; Le Roy, Robert J; Roy, Pierre-Nicholas

    2011-09-07

    Inspired by a recent successful adiabatic-hindered-rotor treatment for parahydrogen pH(2) in CO(2)-H(2) complexes [H. Li, P.-N. Roy, and R. J. Le Roy, J. Chem. Phys. 133, 104305 (2010); H. Li, R. J. Le Roy, P.-N. Roy, and A. R. W. McKellar, Phys. Rev. Lett. 105, 133401 (2010)], we apply the same approximation to the more challenging H(2)O-H(2) system. This approximation reduces the dimension of the H(2)O-H(2) potential from 5D to 3D and greatly enhances the computational efficiency. The global minimum of the original 5D potential is missing from the adiabatic 3D potential, for reasons that follow from the solution of the hindered-rotor Schrödinger equation of the pH(2). Energies and wave functions of the discrete rovibrational levels of H(2)O-pH(2) complexes obtained from the adiabatic 3D potential are in good agreement with the results from calculations with the full 5D potential. This comparison validates our approximation, although the treatment is cruder for pH(2)-H(2)O than for pH(2)-CO(2). This adiabatic approximation makes large-scale simulations of H(2)O-pH(2) systems possible via a pairwise additive interaction model in which pH(2) is treated as a point-like particle. The poor performance of the diabatically spherical treatment of pH(2) rotation excludes the possibility of approximating pH(2) as a simple sphere in its interaction with H(2)O. © 2011 American Institute of Physics.

  13. Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices

    PubMed Central

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2015-01-01

    We explore the connection between two problems that have arisen independently in signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). There is currently considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms simultaneously offers fast convergence, low computational complexity per iteration, and guaranteed convergence. For this reason, other definitions of the geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have recently been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data are generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and satisfies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low, and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
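For the two-matrix case, the closed-form Fisher-metric geometric mean mentioned above can be sketched in a few lines (a generic numpy/scipy illustration with random test matrices, not the paper's AJD algorithm):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def spd_geomean(a, b):
    """Closed-form Fisher-metric geometric mean of two SPD matrices:
    A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    rt = sqrtm(a)
    irt = inv(rt)
    return rt @ sqrtm(irt @ b @ irt) @ rt

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
y = rng.standard_normal((4, 4))
a = x @ x.T + 4.0 * np.eye(4)   # two random SPD test matrices
b = y @ y.T + 4.0 * np.eye(4)
g = np.real(spd_geomean(a, b))
# The geometric mean is the unique SPD solution of G A^{-1} G = B.
print(np.allclose(g @ inv(a) @ g, b))
```

The final check verifies the defining Riccati property of the mean, which is the source of its invariance under congruence transformations.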

  14. Tunneling effects in electromagnetic wave scattering by nonspherical particles: A comparison of the Debye series and physical-geometric optics approximations

    NASA Astrophysics Data System (ADS)

    Bi, Lei; Yang, Ping

    2016-07-01

    The accuracy of the physical-geometric optics (PG-O) approximation is examined for the simulation of electromagnetic scattering by nonspherical dielectric particles. This study seeks a better understanding of the tunneling effect on the phase matrix by employing the invariant imbedding method to rigorously compute the zeroth-order Debye series, from which the tunneling efficiency and the phase matrix corresponding to the diffraction and external reflection are obtained. The tunneling efficiency is shown to be a factor quantifying the relative importance of the tunneling effect over the Fraunhofer diffraction near the forward scattering direction. Due to the tunneling effect, different geometries with the same projected cross section might have different diffraction patterns, which are traditionally assumed to be identical according to the Babinet principle. For particles with a fixed orientation, the PG-O approximation yields the external reflection pattern with reasonable accuracy, but ordinarily fails to predict the locations of peaks and minima in the diffraction pattern. The larger the tunneling efficiency, the worse the PG-O accuracy is at scattering angles less than 90°. If the particles are assumed to be randomly oriented, the PG-O approximation yields the phase matrix close to the rigorous counterpart, primarily due to error cancellations in the orientation-average process. Furthermore, the PG-O approximation based on an electric field volume-integral equation is shown to usually be much more accurate than the Kirchhoff surface integral equation at side-scattering angles, particularly when the modulus of the complex refractive index is close to unity. Finally, tunneling efficiencies are tabulated for representative faceted particles.

  15. Using Computer Simulations for Promoting Model-based Reasoning. Epistemological and Educational Dimensions

    NASA Astrophysics Data System (ADS)

    Develaki, Maria

    2017-11-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and evaluate in a scientific way. This paper aims (a) to contribute to an extended understanding of the nature and pedagogical importance of model-based reasoning and (b) to exemplify how using computer simulations can support students' model-based reasoning. We provide first a background for both scientific reasoning and computer simulations, based on the relevant philosophical views and the related educational discussion. This background suggests that the model-based framework provides an epistemologically valid and pedagogically appropriate basis for teaching scientific reasoning and for helping students develop sounder reasoning and decision-taking abilities and explains how using computer simulations can foster these abilities. We then provide some examples illustrating the use of computer simulations to support model-based reasoning and evaluation activities in the classroom. The examples reflect the procedure and criteria for evaluating models in science and demonstrate the educational advantages of their application in classroom reasoning activities.

  16. Triviality of Quantum Electrodynamics Revisited

    NASA Astrophysics Data System (ADS)

    Djukanovic, D.; Gegelia, J.; Meißner, Ulf-G.

    2018-03-01

    Quantum electrodynamics is often considered to be a trivial theory. This view rests on several pieces of evidence, both numerical and analytical. One of the strong indications of the triviality of QED is the existence of the Landau pole for the running coupling. We show that by treating QED as the leading order approximation of an effective field theory and including the next-to-leading order corrections, the Landau pole is removed. We also analyze the cutoff dependence of the bare coupling at two-loop order and conclude that the conjecture that QED needs to be trivial for reasons of self-consistency is a mere artefact of the leading order approximation to the corresponding effective field theory. Supported in part by DFG and NSFC through funds provided to the Sino-German CRC 110 “Symmetries and the Emergence of Structure in QCD”, the National Natural Science Foundation of China under Grant No. 11621131001, DFG under Grant No. TRR110, the Georgian Shota Rustaveli National Science Foundation (Grant FR/417/6-100/14), and the Chinese Academy of Sciences President’s International Fellowship Initiative (PIFI) under Grant No. 2017VMA0025

  17. Two copies of the Einstein-Podolsky-Rosen state of light lead to refutation of EPR ideas.

    PubMed

    Rosołek, Krzysztof; Stobińska, Magdalena; Wieśniak, Marcin; Żukowski, Marek

    2015-03-13

    Bell's theorem applies to normalizable approximations of the original Einstein-Podolsky-Rosen (EPR) state. Existing constructions of the proof require measurements that are difficult to perform, and dichotomic observables. By noting that the four-mode squeezed vacuum state produced in type II down-conversion can be seen both as two copies of approximate EPR states and as a kind of polarization supersinglet, we show a straightforward way to test violations of the EPR concepts with direct use of their state. The observables involved are simply photon numbers at outputs of polarizing beam splitters. Suitable chained Bell inequalities are based on the geometric concept of distance. For a few settings they are potentially a new tool for quantum information applications, involving observables of a nondichotomic nature, and thus of higher informational capacity. In the limit of infinitely many settings we get a Greenberger-Horne-Zeilinger-type contradiction: EPR reasoning points to a correlation, while the quantum prediction is an anticorrelation. Violations of the inequalities are fully resistant to multipair emissions in Bell experiments using parametric down-conversion sources.

  18. Design of fuzzy systems using neurofuzzy networks.

    PubMed

    Figueiredo, M; Gomide, F

    1999-01-01

    This paper introduces a systematic approach for fuzzy system design based on a class of neural fuzzy networks built upon a general neuron model. The network structure is such that it encodes the knowledge learned in the form of if-then fuzzy rules and processes data following fuzzy reasoning principles. The technique provides a mechanism to obtain rules covering the whole input/output space as well as the membership functions (including their shapes) for each input variable. Such characteristics are of utmost importance in fuzzy systems design and application. In addition, after learning, it is very simple to extract fuzzy rules in linguistic form. The network has universal approximation capability, a property very useful in, e.g., modeling and control applications. Here we focus on function approximation problems as a vehicle to illustrate its usefulness and to evaluate its performance. Comparisons with alternative approaches are also included. Both noise-free and noisy data were considered in the computational experiments. The neural fuzzy network developed here, and consequently the underlying approach, has been shown to provide good results from the accuracy, complexity, and system design points of view.

  19. An approximate methods approach to probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.

    1989-01-01

    A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
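The brute-force baseline that fast probability integration is designed to accelerate can be sketched as a plain Monte Carlo estimate over a simplified closed-form response. The response function, distributions, and all numbers below are illustrative assumptions, not PSAM's models:

```python
import numpy as np

# Toy stand-in for the "simplified representation": a closed-form
# stress response of a random load and a random cross-sectional area.
rng = np.random.default_rng(42)
n = 200_000
load = rng.normal(100.0, 15.0, n)     # assumed load distribution, kN
area = rng.normal(10.0, 0.5, n)       # assumed area distribution, cm^2
stress = load / area                  # simplified response function
limit = 13.0                          # assumed performance limit
p_exceed = np.mean(stress > limit)    # probability of exceeding the limit
print(f"P(stress > {limit}) ~ {p_exceed:.4f}")
```

FPI-style methods aim to reach a comparable probability estimate with far fewer response evaluations than the raw sampling shown here.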

  20. GLASS VISCOSITY AS A FUNCTION OF TEMPERATURE AND COMPOSITION: A MODEL BASED ON ADAM-GIBBS EQUATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrma, Pavel R.

    2008-07-01

    Within the temperature range and composition region of processing and product forming, the viscosity of commercial and waste glasses spans over 12 orders of magnitude. This paper shows that a generalized Adam-Gibbs relationship reasonably approximates the real behavior of glasses with four temperature-independent parameters of which two are linear functions of the composition vector. The equation is subjected to two constraints, one requiring that the viscosity-temperature relationship approaches the Arrhenius function at high temperatures with a composition-independent pre-exponential factor and the other that the viscosity value is independent of composition at the glass-transition temperature. Several sets of constant coefficients were obtained by fitting the generalized Adam-Gibbs equation to data of two glass families: float glass and Hanford waste glass. Other equations (the Vogel-Fulcher-Tammann equation, original and modified, the Avramov equation, and the Douglass-Doremus equation) were fitted to float glass data series and compared with the Adam-Gibbs equation, showing that Adam-Gibbs glass appears an excellent approximation of real glasses even as compared with other candidate constitutive relations.
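The canonical Adam-Gibbs relation on which such models are built can be written as follows (the paper's generalized, composition-dependent parameterization is not reproduced here):

```latex
\log_{10}\eta(T) \;=\; A \;+\; \frac{B}{T\,S_{\mathrm{conf}}(T)}
```

where η is the viscosity, S_conf(T) the configurational entropy, and A and B are fitting parameters.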

  1. Problem-based learning: effects on student’s scientific reasoning skills in science

    NASA Astrophysics Data System (ADS)

    Wulandari, F. E.; Shofiyah, N.

    2018-04-01

    This research aimed to develop an instructional package for problem-based learning to advance students' scientific reasoning from the concrete to the formal reasoning level. The instructional package was developed using the Dick and Carey model. The subject of this study was an instructional package for problem-based learning consisting of a lesson plan, handout, student worksheet, and scientific reasoning test. The instructional package was tried out on 4th semester science education students of Universitas Muhammadiyah Sidoarjo using a one-group pre-test post-test design. Data on scientific reasoning skills were collected using the test. The findings showed that the developed instructional package reflecting problem-based learning was feasible to implement in the classroom. Furthermore, through applying problem-based learning, students mastered formal scientific reasoning skills in terms of functional and proportional reasoning, control of variables, and theoretical reasoning.

  2. Can the Equivalent Sphere Model Approximate Organ Doses in Space?

    NASA Technical Reports Server (NTRS)

    Lin, Zi-Wei

    2007-01-01

    For space radiation protection it is often useful to calculate dose or dose equivalent in blood-forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes, and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for the BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin.
The ranges of the radius parameters have also been investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study [1]. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eye, and 3.5 to 5.6 cm for the skin, while the radius parameters are between 10 and 13 cm for the BFO dose.

  3. Unification of Gauge Couplings in the E{sub 6}SSM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athron, P.; King, S. F.; Luo, R.

    2010-02-10

    We argue that in the two-loop approximation gauge coupling unification in the exceptional supersymmetric standard model (E{sub 6}SSM) can be achieved for any phenomenologically reasonable value of alpha{sub 3}(M{sub Z}) consistent with the experimentally measured central value.

  4. Using Order of Magnitude Calculations to Extend Student Comprehension of Laboratory Data

    ERIC Educational Resources Information Center

    Dean, Rob L.

    2015-01-01

    Author Rob Dean previously published an Illuminations article concerning "challenge" questions that encourage students to think imaginatively with approximate quantities, reasonable assumptions, and uncertain information. That article generated some interesting discussion, prompting him to present further examples here. Examples…

  5. Mathematically Talented Males and Females and Achievement in the High School Sciences.

    ERIC Educational Resources Information Center

    Benbow, Camilla Persson; Minor, Lola L.

    1986-01-01

    Using data on approximately 2,000 students drawn from three talent searches conducted by the Study of Mathematically Precocious Youth, this study investigated the relationship of possible sex differences in science achievement to sex differences in mathematical reasoning ability. (BS)

  6. 10 CFR 431.17 - Determination of efficiency.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... different horsepowers without duplication; (C) The basic models should be of different frame number series... be produced over a reasonable period of time (approximately 180 days), then each unit shall be tested... design may be substituted without requiring additional testing if the represented measures of energy...

  7. Students' Moral Reasoning as Related to Cultural Background and Educational Experience.

    ERIC Educational Resources Information Center

    Bar-Yam, Miriam; And Others

    The relationship between moral development and cultural and educational background is examined. Approximately 120 Israeli youth representing different social classes, sex, religious affiliation, and educational experience were interviewed. The youth interviewed included urban middle and lower class students, Kibbutz-born, Youth Aliyah…

  8. Employer Sponsored Child Care: Issues and Options.

    ERIC Educational Resources Information Center

    Conroyd, S. Danielle

    This presentation describes the child care center at Detroit's Mount Carmel Hospital, a division of the Sisters of Mercy Health Corporation employing approximately 1,550 women. Discussion focuses on reasons for establishing the center, facility acquisition, program details, program management, developmental philosophy, parent involvement, policy…

  9. Computer program analyzes Buckling Of Shells Of Revolution with various wall construction, BOSOR

    NASA Technical Reports Server (NTRS)

    Almroth, B. O.; Bushnell, D.; Sobel, L. H.

    1968-01-01

    Computer program performs stability analyses for a wide class of shells without unduly restrictive approximations. The program uses numerical integration, finite difference, or finite element techniques to solve with reasonable accuracy almost any buckling problem for shells exhibiting orthotropic behavior.

  10. Quantitative Assessment of Factors Related to Customer Satisfaction with MoDOT in the Kansas City Area.

    DOT National Transportation Integrated Search

    2008-01-01

    A mailed survey was sent to approximately twenty thousand residents of District Four (Kansas City area) in order to gather statistical evidence for supporting or eliminating reasons for the satisfaction discrepancy between Kansas City Ar...

  11. Program for Institutionalized Children, 1974-75.

    ERIC Educational Resources Information Center

    Ramsay, James G.

    This program for institutionalized children, funded under the Elementary Secondary Education Act of 1965, involved approximately 2181 children in 35 institutions in the New York City metropolitan area. Children were institutionalized for a variety of reasons: they were orphaned, neglected, dependent, in need of supervision, or emotionally…

  12. In defense of compilation: A response to Davis' form and content in model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard

    1990-01-01

    In a recent paper entitled 'Form and Content in Model Based Reasoning', Randy Davis argues that model-based reasoning research aimed at compiling task-specific rules from underlying device models is mislabeled, misguided, and diversionary. Some of Davis' claims are examined, and his conclusions about the value of compilation research to the model-based reasoning community are challenged. In particular, Davis' claim that model-based reasoning is exempt from the efficiency benefits provided by knowledge compilation techniques is refuted. In addition, several misconceptions about the role of representational form in compilation are clarified. It is concluded that compilation techniques have the potential to make a substantial contribution to solving tractability problems in model-based reasoning.

  13. PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.

    PubMed

    Xia, Jing; Wang, Michelle Yongmei

    Analyzing the blood oxygenation level dependent (BOLD) effect in the functional magnetic resonance imaging (fMRI) is typically based on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to fully take advantage of the dynamic information of the BOLD signals. Third, during the unknown static parameter learning, we employ the low-dimensional sufficient statistics for efficiency and avoiding potential degeneration of the parameters. The performance of the proposed method is validated using both the simulated data and real BOLD fMRI data.
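The bootstrap flavor of the sequential Monte Carlo method can be sketched on a toy nonlinear state-space model. The dynamics, noise levels, and the omission of sequential parameter learning are all simplifying assumptions; this is not the hemodynamic model:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Toy nonlinear state transition (an illustrative stand-in for
    the hemodynamic dynamics, not the actual balloon model)."""
    return 0.5 * x + 2.0 * np.tanh(x)

T, N = 50, 1000
q, r = 0.5, 0.3                       # process / observation noise (assumed)
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):                 # simulate hidden state and observations
    x_true[t] = f(x_true[t - 1]) + rng.normal(0.0, q)
    y[t] = x_true[t] + rng.normal(0.0, r)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0.0, 1.0, N)
est = np.zeros(T)
for t in range(1, T):
    particles = f(particles) + rng.normal(0.0, q, N)
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    est[t] = np.sum(w * particles)                    # posterior mean estimate
    particles = rng.choice(particles, size=N, p=w)    # multinomial resampling

print("RMSE:", np.sqrt(np.mean((est[10:] - x_true[10:]) ** 2)))
```

The paper's approach additionally augments each particle with unknown static parameters and updates them via low-dimensional sufficient statistics, which this sketch omits.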

  14. Emulation and design of terahertz reflection-mode confocal scanning microscopy based on virtual pinhole

    NASA Astrophysics Data System (ADS)

    Yang, Yong-fa; Li, Qi

    2014-12-01

    In the practical application of terahertz reflection-mode confocal scanning microscopy, the size of the detector pinhole is an important factor that determines the spatial resolution of the microscopic system. However, the use of a physical pinhole brings some inconvenience to the experiment, and the adjustment error has a great influence on the experimental result. By suitably selecting the parameters of a matrix-detector virtual pinhole (VPH), the VPH can efficiently approximate a physical pinhole. With this approach, the difficulty of experimental calibration is reduced significantly. In this article, an imaging scheme for terahertz reflection-mode confocal scanning microscopy based on the matrix-detector VPH is put forward. The influence of detector pinhole size on the axial resolution of confocal scanning microscopy is simulated and analyzed. The VPH parameters that achieve the best axial imaging performance are then determined by simulation.

  15. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
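A symmetric Gauss-Seidel iteration of the kind used to solve the global error problem approximately can be sketched as follows (a generic dense-matrix illustration on a toy SPD system, not the paper's finite element implementation):

```python
import numpy as np

def sym_gauss_seidel(a, b, x, sweeps):
    """Symmetric Gauss-Seidel: a forward sweep followed by a backward
    sweep per iteration, updating each unknown in place."""
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):                        # forward sweep
            x[i] = (b[i] - a[i, :i] @ x[:i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
        for i in range(n - 1, -1, -1):            # backward sweep
            x[i] = (b[i] - a[i, :i] @ x[:i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
    return x

# Diagonally dominant SPD test system (illustrative, not a FEM matrix).
n = 20
a = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
exact = np.linalg.solve(a, b)
x = sym_gauss_seidel(a, b, np.zeros(n), sweeps=5)
print("relative error:", np.linalg.norm(x - exact) / np.linalg.norm(exact))
```

On such well-conditioned systems a handful of sweeps already yields a small relative error, mirroring the paper's observation that a few GS iterations suffice for a usable error approximation.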

  16. Learning and tuning fuzzy logic controllers through reinforcements.

    PubMed

    Berenji, H R; Khedkar, P

    1992-01-01

    A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous schemes for cart-pole balancing in terms of the speed of learning and robustness to changes in the dynamic system's parameters.

  17. Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Castellano, Timothy

    1991-01-01

    The nonlinear behavior of many practical systems and the unavailability of quantitative data regarding the input-output relations make the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers, which do not require analytical models, have demonstrated a number of successful applications, such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy logic control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set. Crisp sets allow only full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.
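Partial membership can be illustrated with a triangular membership function (a generic textbook example; GARIC's actual membership shapes are not assumed here):

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], rising linearly
    to full membership 1 at the peak b, then falling back to 0."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# "Warm" as a fuzzy set over temperature in degrees C:
# 22 degrees partially belongs to the set with membership 0.7.
print(tri_membership(22.0, 15.0, 25.0, 35.0))   # 0.7
```

A crisp set would instead return only 0 or 1, losing the graded notion of "somewhat warm" that fuzzy control rules exploit.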

  18. Medicare Part D Claims Rejections for Nursing Home Residents, 2006 to 2010

    PubMed Central

    Stevenson, David G.; Keohane, Laura M.; Mitchell, Susan L.; Zarowitz, Barbara J.; Huskamp, Haiden A.

    2013-01-01

    Objectives Much has been written about trends in Medicare Part D formulary design and consumers’ choice of plans, but little is known about the magnitude of claims rejections or their clinical and administrative implications. Our objective was to study the overall rate at which Part D claims are rejected, whether these rates differ across plans, drugs, and medication classes, and how these rejection rates and reasons have evolved over time. Study Design and Methods We performed descriptive analyses of data on paid and rejected Part D claims submitted by 1 large national long-term care pharmacy from 2006 to 2010. In each of the 5 study years, data included approximately 450,000 Medicare beneficiaries living in long-term care settings with approximately 4 million Part D drug claims. Claims rejection rates and reasons for rejection are tabulated for each study year at the plan, drug, and class levels. Results Nearly 1 in 6 drug claims was rejected during the first 5 years of the Medicare Part D program, and this rate has increased over time. Rejection rates and reasons for rejection varied substantially across drug products and Part D plans. Moreover, the reasons for denials evolved over our study period. Coverage has become less of a factor in claims rejections than it was initially and other formulary tools such as drug utilization review, quantity-related coverage limits, and prior authorization are increasingly used to deny claims. Conclusions Examining claims rejection rates can provide important supplemental information to assess plans’ generosity of coverage and to identify potential areas of concern. PMID:23145808

  19. Architectures and economics for pervasive broadband satellite networks

    NASA Technical Reports Server (NTRS)

    Staelin, D. H.; Harvey, R. L.

    1979-01-01

    The size of a satellite network necessary to provide pervasive high-data-rate business communications is estimated, and one possible configuration is described which could interconnect most organizations in the United States. Within an order of magnitude, such a network might reasonably have a capacity equivalent to 10,000 simultaneous 3-Mbps channels, and rely primarily upon a cluster of approximately 3-5 satellites in a single orbital slot. Nominal prices for 3-6 Mbps video conference services might then be approximately $2000 monthly lease charge plus perhaps 70 cents per minute one way.

  20. Simple heuristic for the viscosity of polydisperse hard spheres

    NASA Astrophysics Data System (ADS)

    Farr, Robert S.

    2014-12-01

    We build on the work of Mooney [Colloids Sci. 6, 162 (1951)] to obtain a heuristic analytic approximation to the viscosity of a suspension of any size distribution of hard spheres in a Newtonian solvent. The result agrees reasonably well with rheological data on monodisperse and bidisperse hard spheres, and also provides an approximation to the random close packing fraction of polydisperse spheres. The implied packing fraction is less accurate than that obtained by Farr and Groot [J. Chem. Phys. 131(24), 244104 (2009)], but has the advantage of being quick and simple to evaluate.

  1. Asymptotic response of observables from divergent weak-coupling expansions: a fractional-calculus-assisted Padé technique.

    PubMed

    Dhatt, Sharmistha; Bhattacharyya, Kamal

    2012-08-01

    Appropriate constructions of Padé approximants are believed to provide reasonable estimates of the asymptotic (large-coupling) amplitude and exponent of an observable, given its weak-coupling expansion to some desired order. In many instances, however, sequences of such approximants are seen to converge very poorly. We outline here a strategy that exploits the idea of fractional calculus to considerably improve the convergence behavior. Pilot calculations on the ground-state perturbative energy series of quartic, sextic, and octic anharmonic oscillators reveal clearly the worth of our endeavor.
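
    The Padé construction that the convergence discussion above relies on can be sketched numerically. The example below builds an ordinary [2/2] approximant from a truncated exponential series using SciPy; the series, the approximant order, and the evaluation point are illustrative choices, and the paper's fractional-calculus transformation is not reproduced.

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to order 4: a stand-in for a
# weak-coupling perturbation series truncated at some order.
an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]

# Build the [2/2] Pade approximant p(x)/q(x) from the series coefficients.
p, q = pade(an, 2)

# Evaluate away from the small-x regime, where the bare truncated series
# degrades but the rational approximant often remains usable.
approx = p(1.0) / q(1.0)
print(approx, math.exp(1.0))
```

    The truncated series itself and the approximant agree at small x by construction; the point of the rational form is its better behavior as the coupling grows.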

  2. Interaction function of oscillating coupled neurons

    PubMed Central

    Dodla, Ramana; Wilson, Charles J.

    2013-01-01

    Large scale simulations of electrically coupled neuronal oscillators often employ the phase coupled oscillator paradigm to understand and predict network behavior. We study the nature of the interaction between such coupled oscillators using weakly coupled oscillator theory. By employing piecewise linear approximations for phase response curves and voltage time courses, and parameterizing their shapes, we compute the interaction function for all such possible shapes and express it in terms of discrete Fourier modes. We find that a reasonably good approximation is achieved with four Fourier modes comprising both sine and cosine terms. PMID:24229210
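
    The interaction-function computation summarized above can be sketched by direct discretization: pick piecewise-linear stand-ins for the phase response curve and voltage trace, form the weak-coupling average for diffusive (electrical) coupling, and inspect its Fourier modes. The triangular shapes and grid size below are assumptions for illustration, not the parameterization used in the paper.

```python
import numpy as np

N = 256
t = np.linspace(0, 1, N, endpoint=False)  # one oscillation period, T = 1

# Illustrative piecewise-linear phase response curve and voltage trace.
Z = np.interp(t, [0, 0.3, 1.0], [0.0, 1.0, 0.0])    # triangular PRC
v = np.interp(t, [0, 0.5, 1.0], [-1.0, 1.0, -1.0])  # triangular voltage

# Weak-coupling interaction function for electrical (diffusive) coupling:
# H(phi) = (1/T) * integral of Z(t) * (v(t + phi) - v(t)) dt
H = np.array([np.mean(Z * (np.roll(v, -k) - v)) for k in range(N)])

# Fourier modes of H: a handful of low modes carry nearly all the power.
modes = np.fft.rfft(H) / N
power = np.abs(modes) ** 2
print(power[:5] / power.sum())
```

    Because the piecewise-linear shapes have rapidly decaying Fourier coefficients, the interaction function is dominated by its first few modes, consistent with the four-mode observation above.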

  3. Fusion Propulsion System Requirements for an Interstellar Probe

    NASA Technical Reports Server (NTRS)

    Spencer, D. F.

    1963-01-01

    An examination of the engine constraints for a fusion-propelled vehicle indicates that minimum flight times for a probe to a 5 light-year star will be approximately 50 years. The principal restraint on the vehicle is the radiator weight and size necessary to dissipate the heat which enters the chamber walls from the fusion plasma. However, it is interesting, at least theoretically, that the confining magnetic field strength is of reasonable magnitude, 2 to 3 x 10^5 gauss, and the confinement time is approximately 0.1 sec.

  4. Research of Litchi Diseases Diagnosis Expertsystem Based on Rbr and Cbr

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Liu, Liqun

    To overcome the bottleneck problems of traditional rule-based reasoning disease diagnosis systems, such as low reasoning efficiency and lack of flexibility, this work investigated integrated case-based reasoning (CBR) and rule-based reasoning (RBR) technology and put forward a litchi diseases diagnosis expert system (LDDES) with an integrated reasoning method. The method uses data mining and knowledge acquisition technology to establish the knowledge base and case library. It adopts rules to guide retrieval and matching for CBR, and uses association rule and decision tree algorithms to calculate case similarity. The experiment shows that the method can increase the system's flexibility and reasoning ability, and improve the accuracy of litchi disease diagnosis.

  5. In defence of model-based inference in phylogeography

    PubMed Central

    Beaumont, Mark A.; Nielsen, Rasmus; Robert, Christian; Hey, Jody; Gaggiotti, Oscar; Knowles, Lacey; Estoup, Arnaud; Panchal, Mahesh; Corander, Jukka; Hickerson, Mike; Sisson, Scott A.; Fagundes, Nelson; Chikhi, Lounès; Beerli, Peter; Vitalis, Renaud; Cornuet, Jean-Marie; Huelsenbeck, John; Foll, Matthieu; Yang, Ziheng; Rousset, Francois; Balding, David; Excoffier, Laurent

    2017-01-01

    Recent papers have promoted the view that model-based methods in general, and those based on Approximate Bayesian Computation (ABC) in particular, are flawed in a number of ways, and are therefore inappropriate for the analysis of phylogeographic data. These papers further argue that Nested Clade Phylogeographic Analysis (NCPA) offers the best approach in statistical phylogeography. In order to remove the confusion and misconceptions introduced by these papers, we justify and explain the reasoning behind model-based inference. We argue that ABC is a statistically valid approach, alongside other computational statistical techniques that have been successfully used to infer parameters and compare models in population genetics. We also examine the NCPA method and highlight numerous deficiencies, whether used with single or multiple loci. We further show that the ages of clades are carelessly used to infer ages of demographic events, that these ages are estimated under a simple model of panmixia and population stationarity but are then used under different and unspecified models to test hypotheses, a usage that invalidates these testing procedures. We conclude by encouraging researchers to study and use model-based inference in population genetics. PMID:29284924

  6. Quantitative scanning thermal microscopy of ErAs/GaAs superlattice structures grown by molecular beam epitaxy

    NASA Astrophysics Data System (ADS)

    Park, K. W.; Nair, H. P.; Crook, A. M.; Bank, S. R.; Yu, E. T.

    2013-02-01

    A proximal probe-based quantitative measurement of thermal conductivity with ~100-150 nm lateral and vertical spatial resolution has been implemented. Measurements on an ErAs/GaAs superlattice structure grown by molecular beam epitaxy with 3% volumetric ErAs content yielded thermal conductivity at room temperature of 9 ± 2 W/m K, approximately five times lower than that for GaAs. Numerical modeling of phonon scattering by ErAs nanoparticles yielded thermal conductivities in reasonable agreement with those measured experimentally and provides insight into the potential influence of nanoparticle shape on phonon scattering. Measurements of wedge-shaped samples created by focused ion beam milling provide direct confirmation of depth resolution achieved.

  7. Metabolic control analysis using transient metabolite concentrations. Determination of metabolite concentration control coefficients.

    PubMed Central

    Delgado, J; Liao, J C

    1992-01-01

    The methodology previously developed for determining the Flux Control Coefficients [Delgado & Liao (1992) Biochem. J. 282, 919-927] is extended to the calculation of metabolite Concentration Control Coefficients. It is shown that the transient metabolite concentrations are related by a few algebraic equations, attributed to mass balance, stoichiometric constraints, quasi-equilibrium or quasi-steady states, and kinetic regulations. The coefficients in these relations can be estimated using linear regression, and can be used to calculate the Control Coefficients. The theoretical basis and two examples are discussed. Although the methodology is derived based on the linear approximation of enzyme kinetics, it yields reasonably good estimates of the Control Coefficients for systems with non-linear kinetics. PMID:1497632

  8. Comparison of the Chebyshev Method and the Generalized Crank-Nicholson Method for time Propagation in Quantum Mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Formanek, Martin; Vana, Martin; Houfek, Karel

    2010-09-30

    We compare efficiency of two methods for numerical solution of the time-dependent Schroedinger equation, namely the Chebyshev method and the recently introduced generalized Crank-Nicholson method. As a testing system the free propagation of a particle in one dimension is used. The space discretization is based on high-order finite differences to approximate accurately the kinetic energy operator in the Hamiltonian. We show that the choice of the more effective method depends on how many wave functions must be calculated during the given time interval to obtain relevant and reasonably accurate information about the system, i.e. on the choice of the time step.
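
    A minimal sketch of the Crank-Nicholson idea for the free-particle test case is given below, using a second-order finite-difference kinetic operator rather than the high-order stencils of the paper; the grid size, time step, and wave-packet parameters are illustrative. The Cayley form of the propagator keeps the norm of the wave function constant.

```python
import numpy as np

# One-dimensional free-particle Schroedinger equation, hbar = m = 1.
N, L, dt = 200, 40.0, 0.05
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Hamiltonian H = -(1/2) d^2/dx^2 via a second-order stencil.
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Crank-Nicholson step: (1 + i dt H / 2) psi_new = (1 - i dt H / 2) psi
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H

# Gaussian wave packet with momentum k0 = 2, normalized on the grid.
psi = np.exp(-x**2) * np.exp(1j * 2.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

for _ in range(20):
    psi = np.linalg.solve(A, B @ psi)

norm = np.sum(np.abs(psi) ** 2) * dx
print(norm)  # the Cayley propagator is unitary, so the norm stays 1
```

    In practice the linear system would be solved with a banded solver rather than a dense one; the dense form is kept here only for brevity.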

  9. LES-ODT Simulations of Turbulent Reacting Shear Layers

    NASA Astrophysics Data System (ADS)

    Hoffie, Andreas; Echekki, Tarek

    2012-11-01

    Large-eddy simulations (LES) combined with the one-dimensional turbulence (ODT) simulations of a spatially developing turbulent reacting shear layer with heat release and high Reynolds numbers were conducted and compared to results from direct numerical simulations (DNS) of the same configuration. The LES-ODT approach is based on LES solutions for momentum on a coarse grid and solutions for momentum and reactive scalars on a fine ODT grid, which is embedded in the LES computational domain. The shear layer is simulated with a single-step, second-order reaction with an Arrhenius reaction rate. The transport equations are solved using a low Mach number approximation. The LES-ODT simulations yield reasonably accurate predictions of turbulence and passive/reactive scalars' statistics compared to DNS results.

  10. Icon arrays help younger children's proportional reasoning.

    PubMed

    Ruggeri, Azzurra; Vagharchakian, Laurianne; Xu, Fei

    2018-06-01

    We investigated the effects of two context variables, presentation format (icon arrays or numerical frequencies) and time limitation (limited or unlimited time), on the proportional reasoning abilities of children aged 7 and 10 years, as well as adults. Participants had to select, between two sets of tokens, the one that offered the highest likelihood of drawing a gold token, that is, the set of elements with the greater proportion of gold tokens. Results show that participants performed better in the unlimited time condition. Moreover, besides a general developmental improvement in accuracy, our results show that younger children performed better when proportions were presented as icon arrays, whereas older children and adults were similarly accurate in the two presentation format conditions. Statement of contribution What is already known on this subject? There is a developmental improvement in proportional reasoning accuracy. Icon arrays facilitate reasoning in adults with low numeracy. What does this study add? Participants were more accurate when they were given more time to make the proportional judgement. Younger children's proportional reasoning was more accurate when they were presented with icon arrays. Proportional reasoning abilities correlate with working memory, approximate number system, and subitizing skills. © 2018 The British Psychological Society.

  11. Reasons for low influenza vaccination coverage – a cross-sectional survey in Poland

    PubMed Central

    Kardas, Przemyslaw; Zasowska, Anna; Dec, Joanna; Stachurska, Magdalena

    2011-01-01

    Aim: To assess the reasons for low influenza vaccination coverage in Poland, including knowledge of influenza and attitudes toward influenza vaccination. Methods: This was a cross-sectional, anonymous, self-administered survey in primary care patients in Lodzkie voivodship (central Poland). The study participants were adults who visited their primary care physicians for various reasons from January 1 to April 30, 2007. Results: Six hundred and forty participants completed the survey. In the 12 months before the study, 20.8% of participants had received influenza vaccination. The most common reasons listed by those who had not been vaccinated were good health (27.6%), lack of trust in vaccination effectiveness (16.8%), and the cost of vaccination (9.7%). The most common source of information about influenza vaccination was primary care physicians (46.6%). Despite reasonably good knowledge of influenza, as many as approximately 20% of participants could not point out any differences between influenza and other viral respiratory tract infections. Conclusions: The main reasons for low influenza vaccination coverage in Poland were patients’ misconceptions and the cost of vaccination. Therefore, free-of-charge vaccination and more effective informational campaigns are needed, with special focus on high-risk groups. PMID:21495194

  12. Soils of Walker Branch Watershed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lietzke, D.A.

    1994-03-01

    The soil survey of Walker Branch Watershed (WBW) utilized the most up-to-date knowledge of soils, geology, and geohydrology in building the soils data base needed to reinterpret past research and to begin new research in the watershed. The soils of WBW were also compared with soils mapped elsewhere along Chestnut Ridge on the Oak Ridge Reservation to (1) establish whether knowledge obtained elsewhere could be used within the watershed, (2) determine whether there were any soils restricted to the watershed, and (3) evaluate geologic formation lateral variability. Soils, surficial geology, and geomorphology were mapped at a scale of 1:1,200 using a paper base map having 2-ft contour intervals. Most of the contours seemed to reasonably represent actual landform configurations, except for dense wooded areas. For example, the very large dolines or sinkholes were shown on the contour base map, but numerous smaller ones were not. In addition, small drainageways and gullies were often not shown. These often small but important features were located approximately as soil mapping progressed.

  13. Assignment of channels and polarisations in a broadcasting satellite service environment

    NASA Astrophysics Data System (ADS)

    Fortes, J. M. P.

    1986-07-01

    In the process of synthesizing a satellite communications plan, a large number of possible configurations has to be analyzed in a short amount of time. An important part of the process concerns the allocation of channels and polarizations to the various systems. It is, of course, desirable to make these allocations based on the aggregate carrier/interference ratios, but this needs a considerable amount of time, and for this reason the single-entry carrier/interference criterion is usually employed. The paper presents an integer programming model based on an approximate evaluation of the aggregate carrier/interference ratios, which is fast enough to justify its application in the synthesis process. It was developed to help the elaboration of a downlink plan for the broadcasting satellite service (BSS) of North, Central, and South America. The official software package of the 1983 Administrative Radio Conference (RARC 83), responsible for the planning of the BSS in region 2, contains a routine based on this model.

  14. Tunable d-Limonene Permeability in Starch-Based Nanocomposite Films Reinforced by Cellulose Nanocrystals.

    PubMed

    Liu, Siyuan; Li, Xiaoxi; Chen, Ling; Li, Lin; Li, Bing; Zhu, Jie

    2018-01-31

    In order to control d-limonene permeability, cellulose nanocrystals (CNC) were used to regulate starch-based film multiscale structures. The effect of sphere-like cellulose nanocrystal (CS) and rod-like cellulose nanocrystal (CR) on starch molecular interaction, short-range molecular conformation, crystalline structure, and micro-ordered aggregated region structure were systematically discussed. CNC aspect ratio and content were proved to be independent variables to control d-limonene permeability via film-structure regulation. New hydrogen bonding formation and increased hydroxypropyl starch (HPS) relative crystallinity could be the reason for the lower d-limonene permeability compared with tortuous path model approximation. More hydrogen bonding formation, higher HPS relative crystallinity and larger size of micro-ordered aggregated region in CS0.5 and CR2 could explain the lower d-limonene permeability than CS2 and CR0.5, respectively. This study provided new insight for the control of the flavor release from starch-based films, which favored its application in biodegradable food packaging and flavor encapsulation.

  15. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    PubMed

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France), for mapping flow depths or velocities in the street network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At the crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipe junctions, and has been modified here to fit X-shaped crossroads. The results were compared with the results of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreements can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given and some research possibilities are proposed.
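
    The kinematic wave routing used in such a one-dimensional model can be sketched in a few lines: mass conservation dA/dt + dQ/dx = 0 is closed with a rating relation Q = alpha * A^m and advanced with a first-order upwind scheme. The geometry, rating coefficients, and inflow hydrograph below are illustrative assumptions, not values from the Nîmes study.

```python
import numpy as np

# Explicit kinematic-wave routing along one street segment.
nx, dx, dt, nt = 50, 10.0, 0.5, 400   # 500 m reach, 200 s of simulation
alpha, m = 1.5, 5.0 / 3.0             # Manning-type rating Q = alpha * A**m

A = np.full(nx, 1e-3)  # wetted cross-sectional area along the street (m^2)
for n in range(nt):
    t = n * dt
    # Triangular inflow hydrograph at the upstream end (m^3/s), peak at t = 60 s.
    Q_in = max(0.0, 2.0 * (1.0 - abs(t - 60.0) / 60.0))
    Q = alpha * A**m
    # First-order upwind update (flow is downstream, so backward difference).
    A[1:] -= dt / dx * (Q[1:] - Q[:-1])
    A[0] = (Q_in / alpha) ** (1.0 / m)  # upstream boundary from the rating
    A = np.maximum(A, 1e-6)             # keep a thin film to avoid 0**negative

print(A.max(), alpha * A.max() ** m)
```

    The time step is chosen so that the kinematic wave speed alpha * m * A^(m-1) satisfies the CFL condition on this grid; a production model would check this adaptively.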

  16. The spatial distribution of fixed mutations within genes coding for proteins

    NASA Technical Reports Server (NTRS)

    Holmquist, R.; Goodman, M.; Conroy, T.; Czelusniak, J.

    1983-01-01

    An examination has been conducted of the extensive amino acid sequence data now available for five protein families - the alpha crystallin A chain, myoglobin, alpha and beta hemoglobin, and the cytochromes c - with the goal of estimating the true spatial distribution of base substitutions within genes that code for proteins. In every case the commonly used Poisson density failed to even approximate the experimental pattern of base substitution. For the 87 species of beta hemoglobin examined, for example, the probability that the observed results were from a Poisson process was a minuscule 10^(-44). Analogous results were obtained for the other functional families. All the data were reasonably, but not perfectly, described by the negative binomial density. In particular, most of the data were described by one of the very simple limiting forms of this density, the geometric density. The implications of this for evolutionary inference are discussed. It is evident that most estimates of total base substitutions between genes are badly in need of revision.
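
    The Poisson-versus-geometric comparison above can be illustrated with synthetic counts: overdispersed data (variance well above the mean) are fit poorly by a Poisson density but well by a geometric one. The sample below is simulated, not the hemoglobin data, and simple moment matching stands in for the fitting procedure of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-site substitution counts on {0, 1, 2, ...} with
# variance well above the mean, i.e. overdispersed relative to Poisson.
counts = rng.geometric(p=0.25, size=5000) - 1

mean, var = counts.mean(), counts.var()
print(mean, var)  # a Poisson process would require var close to mean

# Moment-matched fits, compared by log-likelihood.
ll_pois = stats.poisson.logpmf(counts, mean).sum()
ll_geom = stats.geom.logpmf(counts + 1, 1.0 / (1.0 + mean)).sum()  # shifted support
print(ll_pois, ll_geom)
```

    The geometric density, the limiting form of the negative binomial singled out in the abstract, wins the likelihood comparison decisively on data like these.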

  17. An improved method for predicting the effects of flight on jet mixing noise

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1979-01-01

    The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at high jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature included were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. It is shown that the new method agrees better with the data base than a recently proposed SAE method.

  18. On-Orbit Collision Hazard Analysis in Low Earth Orbit Using the Poisson Probability Distribution (Version 1.0)

    DOT National Transportation Integrated Search

    1992-08-26

    This document provides the basic information needed to estimate a general probability of collision in Low Earth Orbit (LEO). Although the method described in this primer is a first order approximation, its results are reasonable. Furthermore, t...
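
    A first-order Poisson collision estimate of the kind the primer describes reduces to one line: if debris encounters follow a Poisson process, the probability of at least one impact is 1 - exp(-lambda), with lambda the expected number of impacts over the mission. The flux, cross-section, and mission duration below are illustrative assumptions, not values from the document.

```python
import math

# lambda = debris flux [impacts / m^2 / yr] * cross-section [m^2] * time [yr]
flux = 1e-5    # impacts per square meter per year (assumed)
area = 10.0    # spacecraft cross-sectional area in m^2 (assumed)
years = 5.0    # mission duration (assumed)

lam = flux * area * years            # expected number of impacts
p_collision = 1.0 - math.exp(-lam)   # P(at least one impact)
print(p_collision)
```

    For small lambda the result is close to lambda itself, which is why the linear "flux times area times time" figure is often quoted directly.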

  19. Genome-wide association with delayed puberty in swine

    USDA-ARS?s Scientific Manuscript database

    An improvement in the proportion of gilts entering the herd that farrow a litter would increase overall herd performance and profitability. A significant proportion (10-30%) of gilts that enter the herd never farrow a litter; reproductive reasons account for approximately a third of gilt removals, w...

  20. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities

    PubMed Central

    Libertus, Melissa E.; Feigenson, Lisa; Halberda, Justin

    2013-01-01

    Previous research has found a relationship between individual differences in children’s precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the present study we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of two years. Additionally, at the last time point, we tested children’s informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3; Ginsburg & Baroody, 2003). We found that children’s numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned, non-symbolic system of quantity representation and the system of mathematical reasoning that children come to master through instruction. PMID:24076381

  1. Numerical approximation abilities correlate with and predict informal but not formal mathematics abilities.

    PubMed

    Libertus, Melissa E; Feigenson, Lisa; Halberda, Justin

    2013-12-01

    Previous research has found a relationship between individual differences in children's precision when nonverbally approximating quantities and their school mathematics performance. School mathematics performance emerges from both informal (e.g., counting) and formal (e.g., knowledge of mathematics facts) abilities. It remains unknown whether approximation precision relates to both of these types of mathematics abilities. In the current study, we assessed the precision of numerical approximation in 85 3- to 7-year-old children four times over a span of 2years. In addition, at the final time point, we tested children's informal and formal mathematics abilities using the Test of Early Mathematics Ability (TEMA-3). We found that children's numerical approximation precision correlated with and predicted their informal, but not formal, mathematics abilities when controlling for age and IQ. These results add to our growing understanding of the relationship between an unlearned nonsymbolic system of quantity representation and the system of mathematics reasoning that children come to master through instruction. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szczurek, Antoni; University of Rzeszów; Cisek, Anna

    We discuss production of four jets pp → jjjjX with at least two jets with large rapidity separation in proton-proton collisions at the LHC through the mechanism of double-parton scattering (DPS). The cross section is calculated in a factorized approximation. Each hard subprocess is calculated in LO collinear approximation. The LO pQCD calculations are shown to give a reasonably good description of CMS and ATLAS data on inclusive jet production. It is shown that the relative contribution of DPS grows with increasing rapidity distance between the most remote jets, center-of-mass energy, and with decreasing (mini)jet transverse momenta. We also show results for angular azimuthal dijet correlations calculated in the framework of the k_t-factorization approximation.

  3. Electronic properties of excess Cr at Fe site in FeCr0.02Se alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Sandeep, E-mail: sandeepk.iitb@gmail.com; Singh, Prabhakar P.

    2015-06-24

    We have studied the effect of substitution of excess transition-metal chromium (Cr) on the Fe sub-lattice on the electronic structure of the iron-selenide alloy FeCr0.02Se. In our calculations, we used the Korringa-Kohn-Rostoker coherent potential approximation method in the atomic sphere approximation (KKR-ASA-CPA). We obtained a different band structure for this alloy with respect to the parent FeSe, and this may be the reason for the change in their superconducting properties. We performed spin-unpolarized calculations for the FeCr0.02Se alloy in terms of the density of states (DOS) and Fermi surfaces. The local density approximation (LDA) is used for the exchange-correlation potential.

  4. Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng Jing; Zhou Jianying

    2003-04-01

    The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even when the transition frequency between the ground state and the third level is far from the spectrum of the pulse, this additional transition can make the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When decreasing the pulse width or increasing the pulse area, the TLA will give rise to non-negligible errors compared with the precise results.

  5. Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms

    NASA Astrophysics Data System (ADS)

    Cheng, Jing; Zhou, Jianying

    2003-04-01

    The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even when the transition frequency between the ground state and the third level is far from the spectrum of the pulse, this additional transition can make the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When decreasing the pulse width or increasing the pulse area, the TLA will give rise to non-negligible errors compared with the precise results.

  6. Assessment of discrepancies between bottom-up and regional emission inventories in Norwegian urban areas

    NASA Astrophysics Data System (ADS)

    López-Aparicio, Susana; Guevara, Marc; Thunis, Philippe; Cuvelier, Kees; Tarrasón, Leonor

    2017-04-01

    This study shows the capabilities of a benchmarking system to identify inconsistencies in emission inventories, and to evaluate the reasons behind discrepancies as a means to improve both bottom-up and downscaled emission inventories. Fine scale bottom-up emission inventories for seven urban areas in Norway are compared with three regional emission inventories, EC4MACS, TNO_MACC-II and TNO_MACC-III, downscaled to the same areas. The comparison shows discrepancies in nitrogen oxides (NOx) and particulate matter (PM2.5 and PM10) when evaluating both total and sectorial emissions. The three regional emission inventories underestimate NOx and PM10 traffic emissions by approximately 20-80% and 50-90%, respectively. The main reasons for the underestimation of PM10 emissions from traffic in the regional inventories are related to non-exhaust emissions due to resuspension, which are included in the bottom-up emission inventories but are missing in the official national emissions, and therefore in the downscaled regional inventories. The benchmarking indicates that the most probable reason behind the underestimation of NOx traffic emissions by the regional inventories is the activity data. The fine scale NOx traffic emissions from bottom-up inventories are based on the actual traffic volume at the road link and are much higher than the NOx emissions downscaled from national estimates based on fuel sales and based on population for the urban areas. We have identified important discrepancies in PM2.5 emissions from wood burning for residential heating among all the inventories. These discrepancies are associated with the assumptions made for the allocation of emissions. In the EC4MACS inventory, such assumptions imply a high underestimation of PM2.5 emissions from the residential combustion sector in urban areas, which ranges from 40 to 90% compared with the bottom-up inventories. The study shows that in three of the seven Norwegian cities there is a need for further improvement of the emission inventories.

  7. Rule-based reasoning is fast and belief-based reasoning can be slow: Challenging current explanations of belief-bias and base-rate neglect.

    PubMed

    Newman, Ian R; Gibb, Maia; Thompson, Valerie A

    2017-07-01

    It is commonly assumed that belief-based reasoning is fast and automatic, whereas rule-based reasoning is slower and more effortful. Dual-Process theories of reasoning rely on this speed-asymmetry explanation to account for a number of reasoning phenomena, such as base-rate neglect and belief-bias. The goal of the current study was to test this hypothesis about the relative speed of belief-based and rule-based processes. Participants solved base-rate problems (Experiment 1) and conditional inferences (Experiment 2) under a challenging deadline; they then gave a second response in free time. We found that fast responses were informed by rules of probability and logical validity, and that slow responses incorporated belief-based information. Implications for Dual-Process theories and future research options for dissociating Type I and Type II processes are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Is there a role for assent or dissent in animal research?

    PubMed

    Kantin, Holly; Wendler, David

    2015-10-01

    Current regulations and widely accepted principles for animal research focus on minimizing the burdens and harms of research on animals. However, these regulations and principles do not consider a possible role for assent or dissent in animal research. Should investigators solicit the assent or respect the dissent of animals who are used in research, and, if so, under what circumstances? In this article we pursue this question and outline the relevant issues that bear on the answer. We distinguish two general reasons for respecting the preferences of research participants regarding whether they participate in research: welfare-based reasons and agency-based reasons. We argue that there are welfare-based reasons for researchers to consider, and in some cases respect, the dissent of all animals used in research. After providing a brief account of the nature of agency-based reasons, we argue that there is good reason to think that these reasons apply to at least chimpanzees. We argue that there is an additional reason for researchers to respect the dissent (and, when possible, solicit the assent) of any animal to whom agency-based reasons apply.

  9. Feasibility of Self-Reflection as a Tool to Balance Clinical Reasoning Strategies

    ERIC Educational Resources Information Center

    Sibbald, Matthew; de Bruin, Anique B. H.

    2012-01-01

    Clinicians are believed to use two predominant reasoning strategies: System 1-based pattern recognition and System 2-based analytical reasoning. Balancing these cognitive reasoning strategies is widely believed to reduce diagnostic error. However, clinicians approach different problems with different reasoning strategies. This study explores…

  10. Improving Reasoning Skills in Secondary History Education by Working Memory Training

    ERIC Educational Resources Information Center

    Ariës, Roel Jacobus; Groot, Wim; van den Brink, Henriette Maassen

    2015-01-01

    Secondary school pupils underachieve in tests in which reasoning abilities are required. Brain-based training of working memory (WM) may improve reasoning abilities. In this study, we use a brain-based training programme based on historical content to enhance reasoning abilities in history courses. In the first experiment, a combined intervention…

  11. Caught on Video! Using Handheld Digital Video Cameras to Support Evidence-Based Reasoning

    ERIC Educational Resources Information Center

    Lottero-Perdue, Pamela S.; Nealy, Jennifer; Roland, Christine; Ryan, Amy

    2011-01-01

    Engaging elementary students in evidence-based reasoning is an essential aspect of science and engineering education. Evidence-based reasoning involves students making claims (i.e., answers to questions, or solutions to problems), providing evidence to support those claims, and articulating their reasoning to connect the evidence to the claim. In…

  12. Analytical Phase Equilibrium Function for Mixtures Obeying Raoult's and Henry's Laws

    NASA Astrophysics Data System (ADS)

    Hayes, Robert

    When a mixture of two substances exists in both the liquid and gas phases at equilibrium, Raoult's and Henry's laws (the ideal-solution and ideal-dilute-solution approximations) can be used to estimate the gas and liquid mole fractions at the extremes of either very little solute or very little solvent. By assuming that a cubic polynomial can reasonably approximate the values intermediate between these extremes as a function of mole fraction, the cubic polynomial is solved and presented. A closed-form equation approximating the pressure dependence on the mole fractions of the constituents is thereby obtained. As a first approximation, this is a very simple and potentially useful means of estimating gas and liquid mole fractions of equilibrium mixtures. Mixtures with an azeotrope require additional attention if this type of approach is to be utilized. This work supported in part by federal Grant NRC-HQ-84-14-G-0059.
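    The interpolation described in this abstract can be made concrete. Assuming the partial pressure of one component satisfies p(0) = 0 with Henry's-law slope k_H at infinite dilution, and p(1) = p_sat with Raoult's-law slope p_sat near the pure limit (a plausible reading of the abstract; the paper's exact boundary conditions are not reproduced here), these four conditions fix a unique cubic:

```python
def cubic_partial_pressure(x, p_sat, k_henry):
    """Cubic interpolation between Henry's law (x -> 0) and Raoult's law (x -> 1).

    Assumed boundary conditions: p(0) = 0, p'(0) = k_henry (Henry's law),
    p(1) = p_sat, p'(1) = p_sat (Raoult's law). Solving for the cubic
    coefficients gives p(x) = k*x + 2*(P - k)*x^2 + (k - P)*x^3,
    with k = k_henry and P = p_sat.
    """
    return (k_henry * x
            + 2.0 * (p_sat - k_henry) * x ** 2
            + (k_henry - p_sat) * x ** 3)
```

    Summing this expression over both constituents (with x and 1 - x as their mole fractions) gives a closed-form total-pressure curve; the function name and boundary conditions are illustrative, not taken verbatim from the paper.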

  13. Angular momentum projection for a Nilsson mean-field plus pairing model

    NASA Astrophysics Data System (ADS)

    Wang, Yin; Pan, Feng; Launey, Kristina D.; Luo, Yan-An; Draayer, J. P.

    2016-06-01

    The angular momentum projection for the axially deformed Nilsson mean-field plus a modified standard pairing (MSP) or nearest-level pairing (NLP) model is proposed. Both the exact projection, in which all intrinsic states are taken into consideration, and the approximate projection, in which only intrinsic states with K = 0 are included, are considered. The analysis shows that the approximate projection with only K = 0 intrinsic states seems reasonable, while greatly reducing the configuration subspace considered. As simple examples of the model's application, low-lying spectra and electromagnetic properties of 18O and 18Ne are described using both the exact and approximate angular momentum projection of the MSP or the NLP, while those of 20Ne and 24Mg are described using the approximate angular momentum projection of the MSP or NLP.

  14. Correlational and thermodynamic properties of finite-temperature electron liquids in the hypernetted-chain approximation.

    PubMed

    Tanaka, Shigenori

    2016-12-07

    Correlational and thermodynamic properties of homogeneous electron liquids at finite temperatures are theoretically analyzed in terms of the dielectric response formalism with the hypernetted-chain (HNC) approximation and its modified version. The static structure factor and the local-field correction describing the strong Coulomb-coupling effects beyond the random-phase approximation are self-consistently calculated through solution of integral equations in the paramagnetic (spin-unpolarized) and ferromagnetic (spin-polarized) states. In the ground state with the normalized temperature θ = 0, the present HNC scheme well reproduces the exchange-correlation energies obtained by quantum Monte Carlo (QMC) simulations over the whole fluid phase (the coupling constant r_s ≤ 100), i.e., within 1% and 2% deviations from putative best QMC values in the paramagnetic and ferromagnetic states, respectively. As compared with earlier studies based on the Singwi-Tosi-Land-Sjölander and modified convolution approximations, some improvements in the correlation energies and the correlation functions, including the compressibility sum rule, are found in the intermediate- to strong-coupling regimes. When applied to the electron fluids at intermediate Fermi degeneracies (θ ≈ 1), the static structure factors calculated in the HNC scheme show good agreement with the results obtained by path-integral Monte Carlo (PIMC) simulation, while a small negative region in the radial distribution function is observed near the origin, which may be associated with a slight overestimation of the exchange-correlation hole in the HNC approximation. The interaction energies are calculated for various combinations of density and temperature parameters ranging from strong to weak degeneracy and from weak to strong coupling, and the HNC values are then parametrized as functions of r_s and θ. The HNC exchange-correlation free energies obtained through the coupling-constant integration show reasonable agreement with earlier results, including the PIMC-based fitting, over the whole fluid region at finite degeneracies in the paramagnetic state. In contrast, a systematic difference between the HNC and PIMC results is observed in the ferromagnetic state, which suggests the necessity of further studies of the exchange-correlation free energies from both aspects of analytical theory and simulation.

  15. Exchange potential from the common energy denominator approximation for the Kohn-Sham Green's function: Application to (hyper)polarizabilities of molecular chains

    NASA Astrophysics Data System (ADS)

    Grüning, M.; Gritsenko, O. V.; Baerends, E. J.

    2002-04-01

    An approximate Kohn-Sham (KS) exchange potential vxσCEDA is developed, based on the common energy denominator approximation (CEDA) for the static orbital Green's function, which preserves the essential structure of the density response function. vxσCEDA is an explicit functional of the occupied KS orbitals, which has the Slater vSσ and response vrespσCEDA potentials as its components. The latter exhibits the characteristic step structure with "diagonal" contributions from the orbital densities |ψiσ|2, as well as "off-diagonal" ones from the occupied-occupied orbital products ψiσψj(≠i)σ*. Comparison of the results of atomic and molecular ground-state CEDA calculations with those of the Krieger-Li-Iafrate (KLI), exact exchange (EXX), and Hartree-Fock (HF) methods shows that both the KLI and CEDA potentials can be considered very good analytical "closure approximations" to the exact KS exchange potential. The total CEDA and KLI energies nearly coincide with the EXX ones, and the corresponding orbital energies ɛiσ are rather close to each other for the light atoms and small molecules considered. The CEDA, KLI, and EXX ɛiσ values provide the qualitatively correct order of ionizations, and they give an estimate of vertical ionization potentials (VIPs) comparable to that of the HF Koopmans' theorem. However, the additional off-diagonal orbital structure of vxσCEDA appears to be essential for the calculated response properties of molecular chains. KLI already considerably improves the calculated (hyper)polarizabilities of the prototype hydrogen chains Hn over the local density approximation (LDA) and standard generalized gradient approximations (GGAs), while the CEDA results are a definite improvement over the KLI ones. The reasons for this success are the specific orbital structures of the CEDA and KLI response potentials, which produce in an external field an ultranonlocal field-counteracting exchange potential.

  16. Analyzing the errors of DFT approximations for compressed water systems

    NASA Astrophysics Data System (ADS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-07-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.

  17. Analyzing the errors of DFT approximations for compressed water systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfè, D. (London Centre for Nanotechnology and Thomas Young Centre, UCL, London WC1H 0AH)

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm3 where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mEh ≃ 15 meV/monomer for the liquid and the clusters.

  18. Nurses' behaviour regarding CPR and the theories of reasoned action and planned behaviour.

    PubMed

    Dwyer, Trudy; Mosel Williams, Leonie

    2002-01-01

    Cardiopulmonary resuscitation (CPR) has been used in hospitals for approximately 40 years. Nurses are generally the first responders to a cardiac arrest and initiate basic life support while waiting for the advanced cardiac life support team to arrive. Speed and competence of the first responder are factors contributing to the initial survival of a person following a cardiac arrest. Attitudes of individual nurses may influence the speed and level of involvement in true emergency situations. This paper uses the theories of reasoned action and planned behaviour to examine some behavioural issues with CPR involvement.

  19. Air pollution dispersion models for human exposure predictions in London.

    PubMed

    Beevers, Sean D; Kitwiroon, Nutthida; Williams, Martin L; Kelly, Frank J; Ross Anderson, H; Carslaw, David C

    2013-01-01

    The London household survey has shown that people travel and are exposed to air pollutants differently. This argues for human exposure to be based upon space-time-activity data and spatio-temporal air quality predictions. For the latter, we have demonstrated the role that dispersion models can play by using two complementary models: KCLurban, which gives source apportionment information, and the Community Multi-scale Air Quality Model (CMAQ)-urban, which predicts hourly air quality. The KCLurban model is in close agreement with observations of NO(X), NO(2) and particulate matter (PM)(10/2.5), having a small normalised mean bias (-6% to 4%) and a large Index of Agreement (0.71-0.88). The temporal trends of NO(X) from the CMAQ-urban model are also in reasonable agreement with observations. Spatially, NO(2) predictions show that within tens of metres of major roads, concentrations can range from approximately 10-20 p.p.b. up to 70 p.p.b., and that for PM(10/2.5) central London roadside concentrations are approximately double the suburban background concentrations. Exposure to different PM sources is important, and we predict that brake wear-related PM(10) concentrations are approximately eight times greater near major roads than at suburban background locations. Temporally, we have shown that average NO(X) concentrations close to roads can range by a factor of approximately six between the early morning minimum and morning rush hour maximum periods. These results present strong arguments for the hybrid exposure model under development at King's and, in future, for in-building models and a model for the London Underground.
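    The two summary statistics quoted for the KCLurban model have standard definitions: the normalised mean bias, and (assuming the common Willmott formulation) the index of agreement. A minimal sketch, taking paired model and observation series as input:

```python
import numpy as np

def normalised_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs); 0 means no overall bias."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.sum(model - obs) / np.sum(obs)

def index_of_agreement(model, obs):
    """Willmott's index of agreement d: 1 = perfect agreement, 0 = none."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    obar = obs.mean()
    denom = np.sum((np.abs(model - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - np.sum((model - obs) ** 2) / denom
```

    The paper does not state which variant of the index of agreement was used; Willmott's original form is assumed here.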

  20. Intracellular singlet oxygen photosensitizers: on the road to solving the problems of sensitizer degradation, bleaching and relocalization.

    PubMed

    da Silva, Elsa F F; Pimenta, Frederico M; Pedersen, Brian W; Blaikie, Frances H; Bosio, Gabriela N; Breitenbach, Thomas; Westberg, Michael; Bregnhøj, Mikkel; Etzerodt, Michael; Arnaut, Luis G; Ogilby, Peter R

    2016-02-01

    Selected singlet oxygen photosensitizers have been examined from the perspective of obtaining a molecule that is sufficiently stable under conditions currently employed to study singlet oxygen behavior in single mammalian cells. Reasonable predictions about intracellular sensitizer stability can be made based on solution phase experiments that approximate the intracellular environment (e.g., solutions containing proteins). Nevertheless, attempts to construct a stable sensitizer based solely on the expected reactivity of a given functional group with singlet oxygen are generally not sufficient for experiments in cells; it is difficult to construct a suitable chromophore that is impervious to all of the secondary and/or competing degradative processes that are present in the intracellular environment. On the other hand, prospects are reasonably positive when one considers the use of a sensitizer encapsulated in a specific protein; the local environment of the chromophore is controlled, degradation as a consequence of bimolecular reactions can be mitigated, and genetic engineering can be used to localize the encapsulated sensitizer in a given cellular domain. Also, the option of directly exciting oxygen in sensitizer-free experiments provides a useful complementary tool. These latter systems bode well with respect to obtaining more accurate control of the "dose" of singlet oxygen used to perturb a cell; a parameter that currently limits mechanistic studies of singlet-oxygen-mediated cell signaling.

  1. Reasons for and costs of hospitalization for pediatric asthma: a prospective 1-year follow-up in a population-based setting.

    PubMed

    Korhonen, K; Reijonen, T M; Remes, K; Malmström, K; Klaukka, T; Korppi, M

    2001-12-01

    The aims of this study were to examine the frequency of, and the reasons for, emergency hospitalization for asthma among children. In addition, the costs of hospital treatment, preventive medication, and productivity losses of the caregivers were evaluated in a population-based setting during 1 year. Data on purchases of regular asthma medication were obtained from the Social Insurance Institution. In total, 106 (2.3/1000) children aged up to 15 years were admitted 136 times for asthma exacerbation to the Kuopio University Hospital in 1998. This represented approximately 5% of all children with asthma in the area. The trigger for the exacerbation was respiratory infection in 63% of the episodes, allergen exposure in 24%, and unknown in 13%. The age-adjusted risk for admittance was 5.3% in children on inhaled steroids, 5.8% in those on cromones, and 7.9% in those with no regular medication for asthma. The mean direct cost for an admission was $1,209 (median $908; range $454-6,812) and the indirect cost was $358 ($316; $253-1,139). The cost of regular medication for asthma was, on average, $272 per admitted child on maintenance. The annual total cost as a result of asthma rose eight-fold if a child on regular medication was admitted for asthma.

  2. Optimization of an AMBER Force Field for the Artificial Nucleic Acid, LNA, and Benchmarking with NMR of L(CAAU)

    PubMed Central

    2013-01-01

    Locked Nucleic Acids (LNAs) are RNA analogues with an O2′-C4′ methylene bridge which locks the sugar into a C3′-endo conformation. This enhances hybridization to DNA and RNA, making LNAs useful in microarrays and potential therapeutics. Here, the LNA, L(CAAU), provides a simplified benchmark for testing the ability of molecular dynamics (MD) to approximate nucleic acid properties. LNA χ torsions and partial charges were parametrized to create AMBER parm99_LNA. The revisions were tested by comparing MD predictions with AMBER parm99 and parm99_LNA against a 200 ms NOESY NMR spectrum of L(CAAU). NMR indicates an A-Form equilibrium ensemble. In 3000 ns simulations starting with an A-form structure, parm99_LNA and parm99 provide 66% and 35% agreement, respectively, with NMR NOE volumes and 3J-couplings. In simulations of L(CAAU) starting with all χ torsions in a syn conformation, only parm99_LNA is able to repair the structure. This implies methods for parametrizing force fields for nucleic acid mimics can reasonably approximate key interactions and that parm99_LNA will improve reliability of MD studies for systems with LNA. A method for approximating χ population distribution on the basis of base to sugar NOEs is also introduced. PMID:24377321

  3. Waves and rays in plano-concave laser cavities: I. Geometric modes in the paraxial approximation

    NASA Astrophysics Data System (ADS)

    Barré, N.; Romanelli, M.; Lebental, M.; Brunel, M.

    2017-05-01

    Eigenmodes of laser cavities are studied theoretically and experimentally in two companion papers, with the aim of making connections between undulatory and geometric properties of light. In this first paper, we focus on macroscopic open-cavity lasers with localized gain. The model is based on the wave equation in the paraxial approximation; experiments are conducted with a simple diode-pumped Nd:YAG laser with a variable cavity length. After recalling the fundamentals of laser beam optics, we consider plano-concave cavities with on-axis or off-axis pumping, with emphasis on degenerate cavity lengths, where modes of different order resonate at the same frequency and combine to form surprising transverse beam profiles. Degeneracy leads to the oscillation of so-called geometric modes whose properties can be understood, to a certain extent, also within a ray optics picture. We first provide a heuristic description of these modes, based on geometric reasoning, and then show more rigorously how to derive them analytically by building wave superpositions within the framework of paraxial wave optics. The numerical methods, based on the Fox-Li approach, are described in detail. The experimental setup, including the imaging system, is also detailed and relatively simple to reproduce. The aim is to facilitate implementation of both the numerics and the experiments, and to show that one can have access not only to the common higher-order modes but also to more exotic patterns.

  4. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Wang, J

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
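    The selection step can be illustrated with a simplified stand-in for the evidential-reasoning utility: score each Pareto solution by a weighted combination of sensitivity and specificity and keep the best. The actual SMOLER selection rules are not given in the abstract, so the weighted sum and the weights below are hypothetical:

```python
def select_optimal(pareto, weights=(0.5, 0.5)):
    """Pick the highest-utility solution from a Pareto set.

    pareto:  list of (sensitivity, specificity) tuples from the
             multi-objective optimization.
    weights: hypothetical preference weights standing in for the full
             evidential-reasoning utility calculation.
    """
    w_sens, w_spec = weights
    # Utility here is a simple weighted sum; SMOLER derives utility
    # from pre-set evidential-reasoning rules instead.
    return max(pareto, key=lambda s: w_sens * s[0] + w_spec * s[1])
```

    With sensitivity-heavy weights such as (0.7, 0.3), a high-sensitivity solution wins even at some cost in specificity; shifting the weights shifts the chosen operating point along the Pareto front.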

  5. Alfven wave cyclotron resonance heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, R.B.; Yosikawa, S.; Oberman, C.

    1981-02-01

    The resonance absorption of fast Alfven waves at the proton cyclotron resonance of a predominantly deuterium plasma is investigated. An approximate dispersion relation is derived, valid in the vicinity of the resonance, which permits an exact calculation of transmission and reflection coefficients. For reasonable plasma parameters, significant linear resonance absorption is found.

  6. 75 FR 35919 - Investment Company Advertising: Target Date Retirement Fund Names and Marketing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-23

    ... be misleading. The amendments are intended to provide enhanced information to investors concerning... intended as the approximate year of an investor's retirement, and an investor may use the date contained in... manner reasonably calculated to draw investor attention to the information is the same presentation...

  7. Streamlining Your Emissions Inventory Updates

    ERIC Educational Resources Information Center

    Stokes, John

    2011-01-01

    Of the 677 school presidents that have signed on to the American College and University Presidents Climate Commitment (ACUPCC), approximately 200 of them are presidents of community colleges. This measure of involvement at the community college level is promising for two reasons: (1) these schools have emerged as a major provider of public higher…

  8. Clinical Assessment Using the Clinical Rating Scale: Thomas and Olson Revisited.

    ERIC Educational Resources Information Center

    Lee, Robert E.; Jager, Kathleen Burns; Whiting, Jason B.; Kwantes, Catherine T.

    2000-01-01

    Examines whether the Clinical Rating Scale retains its validity when used by psychotherapists in their clinical practice. Confirmatory factor analysis reveals that data provides a reasonable approximation of the underlying factor structure. Concludes that although primarily considered a research instrument, the scale may have a role in clinical…

  9. Why Adolescent Problem Gamblers Do Not Seek Treatment

    ERIC Educational Resources Information Center

    Ladouceur, Robert; Blaszczynski, Alexander; Pelletier, Amelie

    2004-01-01

    Prevalence studies indicate that approximately 40% of adolescents participate in regular gambling with rates of problem gambling up to four times greater than that found in adult populations. However, it appears that few adolescents actually seek treatment for such problems. The purpose of this study was to explore potential reasons why…

  10. A case-based assistant for clinical psychiatry expertise.

    PubMed

    Bichindaritz, I

    1994-01-01

    Case-based reasoning is an artificial intelligence methodology for the processing of empirical knowledge. Recent case-based reasoning systems also use theoretical knowledge about the domain to constrain the case-based reasoning. The organization of memory is the key issue in case-based reasoning. The case-based assistant presented here has two structures in memory: cases and concepts. These memory structures permit it to be as skilled in problem-solving tasks, such as diagnosis and treatment planning, as in interpretive tasks, such as clinical research. A prototype applied to clinical work on eating disorders in psychiatry, reasoning from the alimentary questionnaires of these patients, is presented as an example of the system's abilities.

  11. Effects of Inquiry-Based Agriscience Instruction on Student Scientific Reasoning

    ERIC Educational Resources Information Center

    Thoron, Andrew C.; Myers, Brian E.

    2012-01-01

    The purpose of this study was to determine the effect of inquiry-based agriscience instruction on student scientific reasoning. Scientific reasoning is defined as the use of the scientific method, inductive, and deductive reasoning to develop and test hypothesis. Developing scientific reasoning skills can provide learners with a connection to the…

  12. Integration of Optimal Scheduling with Case-Based Planning.

    DTIC Science & Technology

    1995-08-01

    integrates Case-Based Reasoning (CBR) and Rule-Based Reasoning (RBR) systems. 'Tachyon: A Constraint-Based Temporal Reasoning Model and Its...Implementation' provides an overview of the Tachyon temporal reasoning system and discusses its possible applications. 'Dual-Use Applications of Tachyon: From...Force Structure Modeling to Manufacturing Scheduling' discusses the application of Tachyon to real-world problems, specifically military force deployment and manufacturing scheduling.

  13. Nut crop yield records show that budbreak-based chilling requirements may not reflect yield decline chill thresholds

    NASA Astrophysics Data System (ADS)

    Pope, Katherine S.; Dose, Volker; Da Silva, David; Brown, Patrick H.; DeJong, Theodore M.

    2015-06-01

    Warming winters due to climate change may critically affect temperate tree species. Insufficiently cold winters are thought to result in fewer viable flower buds and the subsequent development of fewer fruits or nuts, decreasing the yield of an orchard or fecundity of a species. The best existing approximation for a threshold of sufficient cold accumulation, the "chilling requirement" of a species or variety, has been quantified by manipulating or modeling the conditions that result in dormant bud breaking. However, the physiological processes that affect budbreak are not the same as those that determine yield. This study sought to test whether budbreak-based chilling thresholds can reasonably approximate the thresholds that affect yield, particularly regarding the potential impacts of climate change on temperate tree crop yields. County-wide yield records for almond (Prunus dulcis), pistachio (Pistacia vera), and walnut (Juglans regia) in the Central Valley of California were compared with 50 years of weather records. Bayesian nonparametric function estimation was used to model yield potentials at varying amounts of chill accumulation. In almonds, average yields occurred when chill accumulation was close to the budbreak-based chilling requirement. However, in the other two crops, pistachios and walnuts, the best previous estimate of the budbreak-based chilling requirements was 19-32% higher than the chilling accumulations associated with average or above-average yields. This research indicates that physiological processes beyond requirements for budbreak should be considered when estimating chill accumulation thresholds of yield decline and potential impacts of climate change.

  14. Emotional Reasoning and Parent-Based Reasoning in Non-Clinical Children, and Their Prospective Relationships with Anxiety Symptoms

    ERIC Educational Resources Information Center

    Morren, Mattijn; Muris, Peter; Kindt, Merel; Schouten, Erik; van den Hout, Marcel

    2008-01-01

    Emotional and parent-based reasoning refer to the tendency to rely on personal or parental anxiety response information rather than on objective danger information when estimating the dangerousness of a situation. This study investigated the prospective relationships of emotional and parent-based reasoning with anxiety symptoms in a sample of…

  15. Patterns of informal reasoning in the context of socioscientific decision making

    NASA Astrophysics Data System (ADS)

    Sadler, Troy D.; Zeidler, Dana L.

    2005-01-01

    The purpose of this study is to contribute to a theoretical knowledge base through research by examining factors salient to science education reform and practice in the context of socioscientific issues. The study explores how individuals negotiate and resolve genetic engineering dilemmas. A qualitative approach was used to examine patterns of informal reasoning and the role of morality in these processes. Thirty college students participated individually in two semistructured interviews designed to explore their informal reasoning in response to six genetic engineering scenarios. Students demonstrated evidence of rationalistic, emotive, and intuitive forms of informal reasoning. Rationalistic informal reasoning described reason-based considerations; emotive informal reasoning described care-based considerations; and intuitive reasoning described considerations based on immediate reactions to the context of a scenario. Participants frequently relied on combinations of these reasoning patterns as they worked to resolve individual socioscientific scenarios. Most of the participants appreciated at least some of the moral implications of their decisions, and these considerations were typically interwoven within an overall pattern of informal reasoning. These results highlight the need to ensure that science classrooms are environments in which intuition and emotion in addition to reason are valued. Implications and recommendations for future research are discussed.

  16. A new hybrid case-based reasoning approach for medical diagnosis systems.

    PubMed

    Sharaf-El-Deen, Dina A; Moawad, Ibrahim F; Khalifa, M E

    2014-02-01

    Case-Based Reasoning (CBR) has been applied in many different medical applications. Due to the complexities and the diversities of this domain, most medical CBR systems become hybrid. Moreover, the case adaptation process in CBR is often a challenging issue, as it is traditionally carried out manually by domain experts. In this paper, a new hybrid case-based reasoning approach for medical diagnosis systems is proposed to improve the accuracy of retrieval-only CBR systems. The approach integrates case-based reasoning and rule-based reasoning, and also applies the adaptation process automatically by exploiting adaptation rules. Both adaptation rules and reasoning rules are generated from the case-base. After solving a new case, the case-base is expanded, and both adaptation and reasoning rules are updated. To evaluate the proposed approach, a prototype was implemented and used in experiments to diagnose breast cancer and thyroid diseases. The final results show that the proposed approach increases the diagnostic accuracy of retrieval-only CBR systems and achieves accuracy comparable to that of current breast cancer and thyroid diagnosis systems.
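
    The retrieve-adapt-retain cycle described above can be sketched in a few lines. This is an illustrative toy, assuming a flat attribute-value case representation and condition-to-revision adaptation rules; the paper's actual feature set and rule format are not reproduced here.

```python
# Minimal sketch of the retrieve-adapt-retain cycle. Feature names, the
# similarity measure, and the rule format are illustrative assumptions.
def similarity(a, b):
    """Fraction of matching attribute values over the shared attributes."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / len(keys)

def solve(new_case, case_base, adaptation_rules):
    # 1. Retrieve: most similar stored case
    best = max(case_base, key=lambda c: similarity(new_case, c["features"]))
    solution = best["diagnosis"]
    # 2. Adapt: apply any adaptation rule whose condition matches the new case
    for condition, revised in adaptation_rules:
        if all(new_case.get(k) == v for k, v in condition.items()):
            solution = revised
    # 3. Retain: expand the case base with the newly solved case
    case_base.append({"features": dict(new_case), "diagnosis": solution})
    return solution

case_base = [
    {"features": {"lump": True, "pain": False}, "diagnosis": "benign"},
    {"features": {"lump": True, "pain": True},  "diagnosis": "malignant"},
]
adaptation_rules = [({"history": True}, "malignant")]

print(solve({"lump": True, "pain": False, "history": True},
            case_base, adaptation_rules))  # → malignant
```

    Retaining the solved case in step 3 is what lets the case-base, and the rules mined from it, grow with experience, as the abstract describes.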

  17. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    NASA Astrophysics Data System (ADS)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, showing reasonable agreement with both. The code system has been applied to calculate the residual dose of the target station for the Mu2e experiment, and the results have been compared to approximate dosimetric approaches.

  18. Insights into Gulf of Mexico Gas Hydrate Study Sites GC955 and WR313 from New Multicomponent and High-Resolution 2D Seismic Data

    NASA Astrophysics Data System (ADS)

    Haines, S. S.; Hart, P. E.; Collett, T. S.; Shedd, W. W.; Frye, M.

    2014-12-01

    In 2013, the U.S. Geological Survey led a seismic acquisition expedition in the Gulf of Mexico, acquiring multicomponent data and high-resolution 2D multichannel seismic (MCS) data at Green Canyon 955 (GC955) and Walker Ridge 313 (WR313). Based on previously collected logging-while-drilling (LWD) borehole data, these gas hydrate study sites are known to include high concentrations of gas hydrate within sand layers. At GC955 our new 2D data reveal at least three features that appear to be fluid-flow pathways (chimneys) responsible for gas migration and thus account for some aspects of the gas hydrate distribution observed in the LWD data. Our new data also show that the main gas hydrate target, a Pleistocene channel/levee complex, has an areal extent of approximately 5.5 square kilometers and that a volume of approximately 3 × 10⁷ cubic meters of this body lies within the gas hydrate stability zone. Based on LWD-inferred values and reasonable assumptions for net sand, sand porosity, and gas hydrate saturation, we estimate a total equivalent gas-in-place volume of approximately 8 × 10⁸ cubic meters for the inferred gas hydrate within the channel/levee deposits. At WR313 we are able to map the thin hydrate-bearing sand layers in considerably greater detail than that provided by previous data. We also can map the evolving and migrating channel feature that persists in this area. Together these data and the emerging results provide valuable new insights into the gas hydrate systems at these two sites.
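
    The gas-in-place figure quoted above follows from simple volumetrics. The net-sand fraction, porosity, and saturation below are illustrative placeholders (the study's values are LWD-inferred and not reproduced here); the roughly 164:1 expansion of methane hydrate into gas at standard conditions is a commonly used factor.

```python
# Back-of-envelope version of the gas-in-place estimate quoted above.
bulk_volume = 3e7    # m^3 of channel/levee body within the hydrate stability zone
net_sand    = 0.6    # fraction that is reservoir-quality sand (assumed)
porosity    = 0.38   # sand porosity (assumed)
hydrate_sat = 0.7    # pore-space hydrate saturation (assumed)
expansion   = 164    # m^3 of gas at standard conditions per m^3 of hydrate

hydrate_volume = bulk_volume * net_sand * porosity * hydrate_sat
gas_in_place   = hydrate_volume * expansion
print(f"{gas_in_place:.1e} m^3")  # → 7.9e+08 m^3, on the order of the quoted ~8 × 10⁸
```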

  19. Can we use the equivalent sphere model to approximate organ doses in space radiation environments?

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei

    For space radiation protection one often calculates the dose or dose equivalent in blood-forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have concluded that a 5 cm sphere gives a very different dose from the exact BFO dose, and one study concludes that a 9 cm sphere is a reasonable approximation for the BFO dose in solar particle event (SPE) environments. In this study we investigate the reason behind these observations and extend earlier work by studying whether the BFO, the eyes, or the skin can be approximated by the equivalent sphere model in different space radiation environments such as solar particle events and galactic cosmic ray (GCR) environments. We take the thickness distribution functions of the organs from the CAM (Computerized Anatomical Man) model, then use a deterministic radiation transport code to calculate organ doses in different space radiation environments. The organ doses have been evaluated with water or aluminum shielding from 0 to 20 g/cm². We then compare these exact doses with results from the equivalent sphere model and determine in which cases, and at what radius parameters, the equivalent sphere model is a reasonable approximation. Furthermore, we propose a modified equivalent sphere model with two radius parameters to represent the skin or eyes. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with shielding thickness, and the model works marginally for the BFO but is unacceptable for the eyes or the skin. For galactic cosmic ray environments, the equivalent sphere model with one organ-specific radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eyes or the skin, but is unacceptable for the dose of the eyes or the skin. The BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study. The radius parameters for the dose equivalent in GCR environments are approximately 10 to 11 cm for the BFO, 3.7 to 4.8 cm for the eyes, and 3.5 to 5.6 cm for the skin, while the radius parameters for the BFO dose are between 10 and 13 cm. In the proposed modified equivalent sphere model, the range of each of the two radius parameters for the skin (or eyes) is much tighter than in the single-parameter model. Our results thus show that the equivalent sphere model works better in galactic cosmic ray environments than in solar particle events: it works well or marginally well for the BFO but usually does not work for the eyes or the skin. A modified model with two radius parameters approximates the dose and dose equivalent in the eyes or the skin much better.
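
    Why a single radius parameter shifts between radiation environments can be seen with a toy depth-dose model: average an exponential depth-dose curve over an organ's self-shielding thickness distribution, then solve for the single sphere radius that reproduces that average. The thickness distribution and the falloff scale λ below are invented for illustration (small λ mimics a steep SPE falloff, large λ a flat GCR falloff); none of these numbers come from the CAM model or the paper.

```python
import math

# Illustrative self-shielding thickness distribution (depths in cm, weights sum to 1).
thickness = [1.0, 3.0, 6.0, 10.0, 15.0]
weight    = [0.30, 0.25, 0.20, 0.15, 0.10]

def organ_dose(lam):
    """'Exact' dose: the exponential depth-dose curve averaged over the distribution."""
    return sum(w * math.exp(-t / lam) for t, w in zip(thickness, weight))

def equivalent_radius(lam):
    """Radius of the single sphere whose center dose matches the organ dose."""
    return -lam * math.log(organ_dose(lam))

for lam in (2.0, 20.0):                    # steep (SPE-like) vs flat (GCR-like) falloff
    print(lam, round(equivalent_radius(lam), 2))
# the best-fit radius grows from about 2.8 cm to about 4.8 cm as the falloff flattens,
# illustrating why one fixed radius cannot serve both environments
```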

  20. Clinical validation of the General Ability Index--Estimate (GAI-E): estimating premorbid GAI.

    PubMed

    Schoenberg, Mike R; Lange, Rael T; Iverson, Grant L; Chelune, Gordon J; Scott, James G; Adams, Russell L

    2006-09-01

    The clinical utility of the General Ability Index--Estimate (GAI-E; Lange, Schoenberg, Chelune, Scott, & Adams, 2005) for estimating premorbid GAI scores was investigated using the WAIS-III standardization clinical trials sample (The Psychological Corporation, 1997). The GAI-E algorithms combine Vocabulary, Information, Matrix Reasoning, and Picture Completion subtest raw scores with demographic variables to predict GAI. Ten GAI-E algorithms were developed combining demographic variables with single subtest scaled scores and with two subtests. Estimated GAI are presented for participants diagnosed with dementia (n = 50), traumatic brain injury (n = 20), Huntington's disease (n = 15), Korsakoff's disease (n = 12), chronic alcohol abuse (n = 32), temporal lobectomy (n = 17), and schizophrenia (n = 44). In addition, a small sample of participants without dementia and diagnosed with depression (n = 32) was used as a clinical comparison group. The GAI-E algorithms provided estimates of GAI that closely approximated scores expected for a healthy adult population. The greatest differences between estimated GAI and obtained GAI were observed for the single subtest GAI-E algorithms using the Vocabulary, Information, and Matrix Reasoning subtests. Based on these data, recommendations for the use of the GAI-E algorithms are presented.
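
    A GAI-E-style estimator is, in essence, a linear regression of GAI on a subtest raw score plus demographic variables. The coefficients below are invented for illustration only; the published algorithms use weights fit to the WAIS-III standardization sample.

```python
# Hypothetical illustration of combining one subtest raw score with demographics.
def estimate_premorbid_gai(vocab_raw, years_education, age):
    b0, b_vocab, b_edu, b_age = 62.0, 0.55, 1.2, -0.05  # assumed weights
    return b0 + b_vocab * vocab_raw + b_edu * years_education + b_age * age

# e.g. a raw Vocabulary score of 45, 14 years of education, age 50
print(round(estimate_premorbid_gai(45, 14, 50), 1))
```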

  1. The Relation between Frequency of E-Cigarette Use and Frequency and Intensity of Cigarette Smoking among South Korean Adolescents

    PubMed Central

    Lee, Jung Ah; Lee, Sungkyu; Cho, Hong-Jun

    2017-01-01

    Introduction: The prevalence of adolescent electronic cigarette (e-cigarette) use has increased in most countries. This study aims to determine the relation between the frequency of e-cigarette use and the frequency and intensity of cigarette smoking. Additionally, the study evaluates the association between the reasons for e-cigarette use and the frequency of its use. Materials and Methods: Using the 2015 Korean Youth Risk Behavior Web-Based Survey, we included 6655 adolescents with experience of e-cigarette use who were middle and high school students aged 13–18 years. We compared smoking experience, the frequency and intensity of cigarette smoking, and the relation between the reasons for e-cigarette use and the frequency of e-cigarette use. Results: The prevalence of e-cigarette ever and current (past 30 days) use was 10.1% and 3.9%, respectively. Of the ever users, approximately 60% had not used e-cigarettes within the past month; on the other hand, 8.1% used e-cigarettes daily. Frequent and intensive cigarette smoking was associated with frequent e-cigarette use. The percentage of frequent e-cigarette users (≥10 days/month) was 3.5% among adolescents who did not smoke within a month, but 28.7% among daily smokers. Additionally, it was 9.1% among smokers who smoked less than 1 cigarette/month, but 55.1% among smokers who smoked ≥20 cigarettes/day. The most common reason for e-cigarette use was curiosity (22.9%), followed by the belief that they are less harmful than conventional cigarettes (18.9%), the desire to quit smoking (13.1%), and the capacity for indoor use (10.7%). Curiosity was the most common reason among less frequent e-cigarette users; however, the desire to quit smoking and the capacity for indoor use were the most common reasons among more frequent users.
    Conclusions: Results showed a positive relation between the frequency or intensity of conventional cigarette smoking and the frequency of e-cigarette use among Korean adolescents, and the frequency of e-cigarette use differed according to the reason for use. PMID:28335449

  2. The Relation between Frequency of E-Cigarette Use and Frequency and Intensity of Cigarette Smoking among South Korean Adolescents.

    PubMed

    Lee, Jung Ah; Lee, Sungkyu; Cho, Hong-Jun

    2017-03-14

    The prevalence of adolescent electronic cigarette (e-cigarette) use has increased in most countries. This study aims to determine the relation between the frequency of e-cigarette use and the frequency and intensity of cigarette smoking. Additionally, the study evaluates the association between the reasons for e-cigarette use and the frequency of its use. Using the 2015 Korean Youth Risk Behavior Web-Based Survey, we included 6655 adolescents with experience of e-cigarette use who were middle and high school students aged 13-18 years. We compared smoking experience, the frequency and intensity of cigarette smoking, and the relation between the reasons for e-cigarette use and the frequency of e-cigarette use. The prevalence of e-cigarette ever and current (past 30 days) use was 10.1% and 3.9%, respectively. Of the ever users, approximately 60% had not used e-cigarettes within the past month; on the other hand, 8.1% used e-cigarettes daily. Frequent and intensive cigarette smoking was associated with frequent e-cigarette use. The percentage of frequent e-cigarette users (≥10 days/month) was 3.5% among adolescents who did not smoke within a month, but 28.7% among daily smokers. Additionally, it was 9.1% among smokers who smoked less than 1 cigarette/month, but 55.1% among smokers who smoked ≥20 cigarettes/day. The most common reason for e-cigarette use was curiosity (22.9%), followed by the belief that they are less harmful than conventional cigarettes (18.9%), the desire to quit smoking (13.1%), and the capacity for indoor use (10.7%). Curiosity was the most common reason among less frequent e-cigarette users; however, the desire to quit smoking and the capacity for indoor use were the most common reasons among more frequent users.
    Results showed a positive relation between the frequency or intensity of conventional cigarette smoking and the frequency of e-cigarette use among Korean adolescents, and the frequency of e-cigarette use differed according to the reason for use.

  3. Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2

    NASA Technical Reports Server (NTRS)

    Culbert, Christopher J. (Editor)

    1993-01-01

    Papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake, held 1-3 Jun. 1992 at the Lyndon B. Johnson Space Center in Houston, Texas, are included. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and applications; control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.

  4. Description of waste pretreatment and interfacing systems dynamic simulation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garbrick, D.J.; Zimmerman, B.D.

    1995-05-01

    The Waste Pretreatment and Interfacing Systems Dynamic Simulation Model was created to investigate the required pretreatment facility processing rates for both high level and low level waste so that the vitrification of tank waste can be completed according to the milestones defined in the Tri-Party Agreement (TPA). In order to achieve this objective, the processes upstream and downstream of the pretreatment facilities must also be included. The simulation model starts with retrieval of tank waste and ends with vitrification for both low level and high level wastes. This report describes the results of three simulation cases: one based on suggested average facility processing rates, one with facility rates determined so that approximately 6 new DSTs are required, and one with facility rates determined so that essentially no new DSTs are required. It appears, based on the simulation results, that reasonable facility processing rates can be selected so that no new DSTs are required by the TWRS program. However, this conclusion must be viewed with respect to the modeling assumptions, described in detail in the report. Also included in the report, in an appendix, are the results of two sensitivity cases: one with glass plant water recycle streams recycled versus not recycled, and one employing the TPA SST retrieval schedule versus a more uniform SST retrieval schedule. Both recycling and retrieval schedule appear to have a significant impact on overall tank usage.

  5. Utilization of screening mammography in New Hampshire: a population-based assessment.

    PubMed

    Carney, Patricia A; Goodrich, Martha E; Mackenzie, Todd; Weiss, Julia E; Poplack, Steven P; Wells, Wendy S; Titus-Ernstoff, Linda

    2005-10-15

    The objective of screening mammography is to identify breast carcinoma early, which requires routine screening. Although self-report data indicate that screening utilization is high, the results of this population-based assessment indicated that utilization is lower than reported previously. The authors compared New Hampshire population data from the 2000 Census with clinical encounter data for the corresponding time obtained from the New Hampshire Mammography Network, a mammography registry that captures approximately 90% of the mammograms performed in participating New Hampshire facilities. The results showed that approximately 36% of New Hampshire women either never had a mammogram or had not had a mammogram in > 27 months (irregular screenees), and older women (aged 80 years and older) were less likely to be screened (79% unscreened/underscreened) compared with younger women (ages 40-69 yrs; 28-32% unscreened/underscreened). Of the screened women, 44% adhered to a screening interval of 14 months or less, and 21% adhered to an interval of between 15 and 26 months. The remaining 35% of the women had 1 or 2 mammograms and did not return within 27 months. Routine mammography screening may be occurring less often than believed when survey data alone are used. An important, compelling concern is the reason women had one or two mammograms only and then did not return for additional screening. This area deserves additional research. Copyright 2005 American Cancer Society.

  6. Prevalence and incidence of epilepsy in the Nordic countries.

    PubMed

    Syvertsen, Marte; Koht, Jeanette; Nakken, Karl O

    2015-10-06

    Updated knowledge on the prevalence of epilepsy is valuable for planning of health services to this large and complex patient group. Comprehensive epidemiological research on epilepsy has been undertaken, but because of variations in methodology, the results are difficult to compare. The objective of this article is to present evidence-based estimates of the prevalence and incidence of epilepsy in the Nordic countries. The article is based on a search in PubMed with the search terms epilepsy and epidemiology, combined with each of the Nordic countries separately. Altogether 38 original articles reported incidence and/or prevalence rates of epilepsy in a Nordic country. Four studies had investigated the prevalence of active epilepsy in all age groups, with results ranging from 3.4 to 7.6 per 1,000 inhabitants. Only two studies had investigated the incidence of epilepsy prospectively in populations that included all age groups. The reported incidences were 33 and 34 per 100,000 person-years, respectively. A prospective study that only included adults reported an incidence of 56 per 100,000 person-years. We estimate that approximately 0.6% of the population of the Nordic countries have active epilepsy, i.e. approximately 30,000 persons in Norway. Epilepsy is thus one of the most common neurological disorders. The incidence data are more uncertain, but we may reasonably assume that 30-60 new cases occur per 100,000 person-years.
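
    The 30,000 figure is the stated prevalence applied to Norway's population, taken here as roughly 5.2 million for the period (an assumption, not a number from the article):

```python
# Sanity check of the figure quoted above.
prevalence = 0.006           # active epilepsy, 0.6% of the population
population_norway = 5.2e6    # assumed population size for the period
cases = round(prevalence * population_norway)
print(cases)  # → 31200, i.e. approximately 30,000
```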

  7. Conservative Analytical Collision Probabilities for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.
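
    For intuition only, and not the approximation proposed in the abstract: in a simplified 2-D Gaussian relative-position model, assuming zero miss distance and shrinking the covariance to an isotropic one at the tighter in-plane sigma can only increase the probability mass over the combined hard-body circle, so the closed form below is a conservative upper bound for that model.

```python
import math

# Crude conservative bound for a 2-D Gaussian relative-position model:
# zero miss distance + isotropic covariance at the tighter sigma.
def collision_prob_upper_bound(hard_body_radius, sigma_x, sigma_y):
    sigma = min(sigma_x, sigma_y)
    return 1.0 - math.exp(-hard_body_radius**2 / (2.0 * sigma**2))

# 10 m combined hard-body radius, 200 m x 500 m position uncertainty (illustrative)
print(collision_prob_upper_bound(10.0, 200.0, 500.0))
```

    Like the bound discussed in the abstract, this is deliberately loose: it trades tightness for a closed form that can be evaluated across a large trade space quickly.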

  8. Coherent Anomaly Method Calculation on the Cluster Variation Method. II.

    NASA Astrophysics Data System (ADS)

    Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya

    The critical exponents of the bond percolation model are calculated on the D(= 2,3,…)-dimensional simple cubic lattice on the basis of Suzuki's coherent anomaly method (CAM) by making use of a series of the pair, the square-cactus and the square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of the critical exponents α, β, γ and ν in comparison with those estimated by other methods. It is also shown that the results of the pair and the square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and the square-cactus lattice, respectively, in the presence of a ghost field, without recourse to the s→1 limit of the s-state Potts model.

  9. Elastic scattering of low-energy electrons by nitromethane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, A. R.; D'A Sanchez, S.; Bettega, M. H. F.

    2011-06-15

    In this work, we present integral, differential, and momentum transfer cross sections for elastic scattering of low-energy electrons by nitromethane, for energies up to 10 eV. We calculated the cross sections using the Schwinger multichannel method with pseudopotentials, in the static-exchange and in the static-exchange plus polarization approximations. The computed integral cross sections show a π* shape resonance at 0.70 eV in the static-exchange plus polarization approximation, which is in reasonable agreement with experimental data. We also found a σ* shape resonance at 4.8 eV in the static-exchange plus polarization approximation, which has not been previously characterized experimentally. We also discuss how these resonances may play a role in the dissociation process of this molecule.

  10. Conservative Analytical Collision Probability for Design of Orbital Formations

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    The literature offers a number of approximations for analytically and/or efficiently computing the probability of collision between two space objects. However, only one of these techniques is a completely analytical approximation that is suitable for use in the preliminary design phase, when it is more important to quickly analyze a large segment of the trade space than it is to precisely compute collision probabilities. Unfortunately, among the types of formations that one might consider, some combine a range of conditions for which this analytical method is less suitable. This work proposes a simple, conservative approximation that produces reasonable upper bounds on the collision probability in such conditions. Although its estimates are much too conservative under other conditions, such conditions are typically well suited for use of the existing method.

  11. Investigating Students' Reasoning about Acid-Base Reactions

    ERIC Educational Resources Information Center

    Cooper, Melanie M.; Kouyoumdjian, Hovig; Underwood, Sonia M.

    2016-01-01

    Acid-base chemistry is central to a wide range of reactions. If students are able to understand how and why acid-base reactions occur, it should provide a basis for reasoning about a host of other reactions. Here, we report the development of a method to characterize student reasoning about acid-base reactions based on their description of…

  12. Case-based reasoning in design: An apologia

    NASA Technical Reports Server (NTRS)

    Pulaski, Kirt

    1990-01-01

    Three positions are presented and defended: the process of generating solutions in problem solving is viewable as a design task; case-based reasoning is a strong method of problem solving; and a synergism exists between case-based reasoning and design problem solving.

  13. A Framework for Assessing High School Students' Statistical Reasoning.

    PubMed

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.
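
    The framework's two axes can be captured directly as data. The level and construct names below come from the abstract; the rating lookup is an illustrative assumption about how an assessment record might be scored.

```python
# Five levels of statistical reasoning crossed with four key constructs.
LEVELS = ["idiosyncratic", "verbal", "transitional", "procedural",
          "integrated process"]
CONSTRUCTS = ["describing data", "organizing and reducing data",
              "representing data", "analyzing and interpreting data"]

def profile(ratings):
    """ratings: construct -> level name; returns a numeric level per construct."""
    return {c: LEVELS.index(ratings[c]) + 1 for c in CONSTRUCTS}

# A student rated at the same level on every construct, as the abstract reports
student = {c: "procedural" for c in CONSTRUCTS}
print(profile(student))  # level 4 (procedural) on all four constructs
```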

  14. A Framework for Assessing High School Students' Statistical Reasoning

    PubMed Central

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students’ statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students’ statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework’s cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments. PMID:27812091

  15. A Geant4 evaluation of the Hornyak button and two candidate detectors for the TREAT hodoscope

    NASA Astrophysics Data System (ADS)

    Fu, Wenkai; Ghosh, Priyarshini; Harrison, Mark J.; McGregor, Douglas S.; Roberts, Jeremy A.

    2018-05-01

    The performance of traditional Hornyak buttons and two proposed variants for fast-neutron hodoscope applications was evaluated using Geant4. The Hornyak button is a ZnS(Ag)-based device previously deployed at the Idaho National Laboratory's TRansient REActor Test Facility (better known as TREAT) for monitoring fast neutrons emitted during pulsing of fissile fuel samples. Past use of these devices relied on pulse-shape discrimination to reduce the significant levels of background Cherenkov radiation. Proposed are two simple designs that reduce the overall light guide mass (here, polymethyl methacrylate or PMMA), employ silicon photomultipliers (SiPMs), and can be operated using pulse-height discrimination alone to eliminate background noise to acceptable levels. Geant4 was first used to model a traditional Hornyak button, and for assumed, hodoscope-like conditions, an intrinsic efficiency of 0.35% for mono-directional fission neutrons was predicted. The predicted efficiency is in reasonably good agreement with experimental data from the literature and, hence, served to validate the physics models and approximations employed. Geant4 models were then developed to optimize the materials and geometries of two alternatives to the Hornyak button, one based on a homogeneous mixture of ZnS(Ag) and PMMA, and one based on alternating layers of ZnS(Ag) and PMMA oriented perpendicular to the incident neutron beam. For the same radiation environment, optimized, 5-cm long (along the beam path) devices of the homogeneous and layered designs were predicted to have efficiencies of approximately 1.3% and 3.3%, respectively. For longer devices, i.e., lengths larger than 25 cm, these efficiencies were shown to peak at approximately 2.2% and 5.9%, respectively. Moreover, both designs were shown to discriminate Cherenkov noise intrinsically by using an appropriate pulse-height discriminator level, i.e., pulse-shape discrimination is not needed for these devices.

  16. A Geant4 evaluation of the Hornyak button and two candidate detectors for the TREAT hodoscope

    DOE PAGES

    Fu, Wenkai; Ghosh, Priyarshini; Harrison, Mark; ...

    2018-02-05

    The performance of traditional Hornyak buttons and two proposed variants for fast-neutron hodoscope applications was evaluated using Geant4. The Hornyak button is a ZnS(Ag)-based device previously deployed at the Idaho National Laboratory's TRansient REActor Test Facility (better known as TREAT) for monitoring fast neutrons emitted during pulsing of fissile fuel samples. Past use of these devices relied on pulse-shape discrimination to reduce the significant levels of background Cherenkov radiation. Proposed are two simple designs that reduce the overall light guide mass (here, polymethyl methacrylate or PMMA), employ silicon photomultipliers (SiPMs), and can be operated using pulse-height discrimination alone to eliminate background noise to acceptable levels. Geant4 was first used to model a traditional Hornyak button, and for assumed, hodoscope-like conditions, an intrinsic efficiency of 0.35% for mono-directional fission neutrons was predicted. The predicted efficiency is in reasonably good agreement with experimental data from the literature and, hence, served to validate the physics models and approximations employed. Geant4 models were then developed to optimize the materials and geometries of two alternatives to the Hornyak button, one based on a homogeneous mixture of ZnS(Ag) and PMMA, and one based on alternating layers of ZnS(Ag) and PMMA oriented perpendicular to the incident neutron beam. For the same radiation environment, optimized, 5-cm long (along the beam path) devices of the homogeneous and layered designs were predicted to have efficiencies of approximately 1.3% and 3.3%, respectively. For longer devices, i.e., lengths larger than 25 cm, these efficiencies were shown to peak at approximately 2.2% and 5.9%, respectively. Furthermore, both designs were shown to discriminate Cherenkov noise intrinsically by using an appropriate pulse-height discriminator level, i.e., pulse-shape discrimination is not needed for these devices.

  17. Recommendations for the clinical management of children with refractory epilepsy receiving the ketogenic diet.

    PubMed

    Alberti, María J; Agustinho, Ariela; Argumedo, Laura; Armeno, Marisa; Blanco, Virginia; Bouquet, Cecilia; Cabrera, Analía; Caraballo, Roberto; Caramuta, Luciana; Cresta, Araceli; de Grandis, Elizabeth S; De Martini, Martha G; Diez, Cecilia; Dlugoszewski, Corina; Escobal, Nidia; Ferrero, Hilario; Galicchio, Santiago; Gambarini, Victoria; Gamboni, Beatriz; Guisande, Silvina; Hassan, Amal; Matarrese, Pablo; Mestre, Graciela; Pesce, Laura; Ríos, Viviana; Sosa, Patricia; Vaccarezza, María; Viollaz, Rocío; Panico, Luis

    2016-02-01

    The ketogenic diet, a non-drug treatment with proven effectiveness, has been the most commonly used therapy in the past decade for the management of refractory epilepsy in the pediatric population. Compared to adding a new drug to a pre-existing treatment, the ketogenic diet is highly effective and reduces the number of seizures by 50-90% in approximately 45-60% of children after six months of treatment. For this reason, the Argentine Society of Pediatric Neurology established the Ketogenic Diet Working Group, made up of pediatric dietitians, pediatricians, pediatric neurologists, and licensed nutritionists, who developed recommendations for the optimal management of patients receiving the classical ketogenic diet based on expert consensus and scientific publications in this field. Sociedad Argentina de Pediatría.

  18. Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.

    2017-03-01

    We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way the error originating from the approximations in the theory is minimised. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  19. Topology-based description of the NCA cathode configurational space and an approach of its effective reduction

    NASA Astrophysics Data System (ADS)

    Zolotarev, Pavel; Eremin, Roman

    2018-04-01

    Modification of existing solid electrolyte and cathode materials is a topic of interest for theoreticians and experimentalists. In particular, it requires elucidation of the influence of dopants on the characteristics of the materials under study. Because of the high complexity of the configurational space of doped/deintercalated systems, application of computer modeling approaches is hindered, despite significant advances in computational facilities in recent decades. In this study, we propose a scheme that reduces the set of structures of a modeled configurational space for subsequent study by means of time-consuming quantum chemistry methods. Application of the proposed approach is exemplified through the study of the configurational space of an approximant of the commercial LiNi0.8Co0.15Al0.05O2 (NCA) cathode material.
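
A generic way to shrink such a configurational space is to enumerate dopant placements and keep one canonical representative per symmetry orbit. The sketch below illustrates that idea on a toy 4-site ring; it is illustrative only (the paper's topology-based descriptors are a more sophisticated filter), and the `rotations` group is a hypothetical stand-in for a crystal's symmetry operations.

```python
from itertools import combinations

def distinct_configs(n_sites, n_dopants, symmetries):
    """Enumerate dopant placements on n_sites and keep one canonical
    representative per symmetry orbit (generic sketch only)."""
    reps = set()
    for combo in combinations(range(n_sites), n_dopants):
        # canonical form: lexicographically smallest image under the group
        canon = min(tuple(sorted(g[i] for i in combo)) for g in symmetries)
        reps.add(canon)
    return reps

# toy example: 4 sites on a ring, cyclic rotations as the symmetry group
rotations = [[(i + k) % 4 for i in range(4)] for k in range(4)]
raw = len(list(combinations(range(4), 2)))       # 6 raw placements
print(raw, len(distinct_configs(4, 2, rotations)))  # 6 2 (adjacent vs opposite)
```

Only the two genuinely distinct orbits (adjacent-site and opposite-site placements) would then be passed on to expensive quantum-chemistry calculations.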

  20. Evaluation of a vortex-based subgrid stress model using DNS databases

    NASA Technical Reports Server (NTRS)

    Misra, Ashish; Lund, Thomas S.

    1996-01-01

    The performance of a SubGrid Stress (SGS) model for Large-Eddy Simulation (LES) developed by Misra & Pullin (1996) is studied for forced and decaying isotropic turbulence on a 32(exp 3) grid. The physical viability of the model assumptions is tested using DNS databases. The results from LES of forced turbulence at Taylor Reynolds number R(sub (lambda)) approximately equals 90 are compared with filtered DNS fields. Probability density functions (pdfs) of the subgrid energy transfer, total dissipation, and the stretch of the subgrid vorticity by the resolved velocity-gradient tensor show reasonable agreement with the DNS data. The model is also tested in LES of decaying isotropic turbulence where it correctly predicts the decay rate and energy spectra measured by Comte-Bellot & Corrsin (1971).

  1. Vapor-liquid equilibrium and equation of state of two-dimensional fluids from a discrete perturbation theory

    NASA Astrophysics Data System (ADS)

    Trejos, Víctor M.; Santos, Andrés; Gámez, Francisco

    2018-05-01

    The interest in the description of the properties of fluids of restricted dimensionality is growing for theoretical and practical reasons. In this work, we first developed an analytical expression for the Helmholtz free energy of the two-dimensional square-well fluid in the Barker-Henderson framework. This equation of state is based on an approximate analytical radial distribution function for d-dimensional hard-sphere fluids (1 ≤ d ≤ 3) and is validated against existing and new simulation results. The so-obtained equation of state is implemented in a discrete perturbation theory able to account for general potential shapes. The prototypical Lennard-Jones and Yukawa fluids are tested in their two-dimensional versions against available and new simulation data with semiquantitative agreement.
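
For context, the Barker-Henderson route expands the free energy about a hard-sphere reference; a standard first-order form (a generic textbook expression, not the paper's specific two-dimensional result) is

```latex
\frac{A}{N k_B T} \;\approx\; \frac{A_{\mathrm{HS}}}{N k_B T}
  \;+\; \frac{\beta \rho}{2} \int u_{\mathrm{pert}}(r)\, g_{\mathrm{HS}}(r)\, \mathrm{d}\mathbf{r},
```

where g_HS(r) is the hard-sphere radial distribution function, precisely the d-dimensional analytical ingredient the abstract says the equation of state is built on.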

  2. Vector processing efficiency of plasma MHD codes by use of the FACOM 230-75 APU

    NASA Astrophysics Data System (ADS)

    Matsuura, T.; Tanaka, Y.; Naraoka, K.; Takizuka, T.; Tsunematsu, T.; Tokuda, S.; Azumi, M.; Kurita, G.; Takeda, T.

    1982-06-01

    In the framework of pipelined vector architecture, the efficiency of vector processing is assessed with respect to plasma MHD codes in nuclear fusion research. By using a vector processor, the FACOM 230-75 APU, the limit of the enhancement factor due to parallelism of current vector machines is examined for three numerical codes based on a fluid model. Reasonable speed-up factors of approximately 6, 6, and 4 times faster than the highly optimized scalar version are obtained for ERATO (linear stability code), AEOLUS-R1 (nonlinear stability code) and APOLLO (1-1/2D transport code), respectively. Problems of the pipelined vector processors are discussed from the viewpoint of restructuring, optimization and choice of algorithms. In conclusion, the important concept of "concurrency within pipelined parallelism" is emphasized.
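
The enhancement-factor limit mentioned above is the classic Amdahl bound: only the vectorizable fraction of a code is accelerated, so overall speed-up saturates well below the pipeline's raw gain. A minimal sketch with illustrative numbers (not measurements from the FACOM 230-75 APU):

```python
def vector_speedup(f_vec: float, v: float) -> float:
    """Overall speed-up when a fraction f_vec of the runtime is vectorized
    and runs v times faster (Amdahl's law); the rest stays scalar."""
    return 1.0 / ((1.0 - f_vec) + f_vec / v)

# even with 90% of the work vectorized at 10x, the overall gain is ~5.3x
print(round(vector_speedup(0.9, 10.0), 1))  # 5.3
```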

  3. Hybrids of Nucleic Acids and Carbon Nanotubes for Nanobiotechnology.

    PubMed

    Umemura, Kazuo

    2015-03-12

    Recent progress in the combination of nucleic acids and carbon nanotubes (CNTs) has been briefly reviewed here. Since discovering the hybridization phenomenon of DNA molecules and CNTs in 2003, a large amount of fundamental and applied research has been carried out. Among thousands of papers published since 2003, approximately 240 papers focused on biological applications were selected and categorized based on the types of nucleic acids used, but not the types of CNTs. This survey revealed that the hybridization phenomenon is strongly affected by various factors, such as DNA sequences, and for this reason, fundamental studies on the hybridization phenomenon are important. Additionally, many research groups have proposed numerous practical applications, such as nanobiosensors. The goal of this review is to provide perspective on biological applications using hybrids of nucleic acids and CNTs.

  4. 9 CFR 94.8 - Pork and pork products from regions where African swine fever exists or is reasonably believed to...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... maintained on the APHIS Web site at http://www.aphis.usda.gov/import_export/animals/animal_disease_status... approximately 210 minutes after which they must be cooked in hot oil (deep-fried) at a minimum of 104 °C for an...

  5. Learning to Leverage Student Thinking: What Novice Approximations Teach Us about Ambitious Practice

    ERIC Educational Resources Information Center

    Singer-Gabella, Marcy; Stengel, Barbara; Shahan, Emily; Kim, Min-Joung

    2016-01-01

    Central to ambitious teaching is a constellation of practices we have come to call "leveraging student thinking." In leveraging, teachers position students' understanding and reasoning as a central means to drive learning forward. While leveraging typically is described as a feature of mature practice, in this article we examine…

  6. 75 FR 52375 - Dominion Energy Kewaunee, Inc. Kewaunee Power Station; Notice of Availability of the Final...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-25

    ..., approximately 27 miles east-southeast of Green Bay, WI. Possible alternatives to the proposed action (license renewal) include no action and reasonable alternative energy sources. As discussed in Section 9.4 of the... NUCLEAR REGULATORY COMMISSION [Docket No. 50-305; NRC-2010-0041] Dominion Energy Kewaunee, Inc...

  7. A Proposed Template for an Emergency Online School Professional Training Curriculum

    ERIC Educational Resources Information Center

    Rush, S. Craig; Wheeler, Joanna; Partridge, Ashley

    2014-01-01

    On average, natural disasters directly impact approximately 160 million individuals and cause 90,000 deaths each year. As natural disasters are becoming more familiar, it stands to reason that school personnel, particularly mental health professionals, need to know how to prepare for natural disasters. Current disaster preparation and response…

  8. Predictors of Graduation of Readmitted "At Risk" College Students

    ERIC Educational Resources Information Center

    Berkovitz, Roslyn A.; O'Quin, Karen

    2007-01-01

    We conducted an archival study of at-risk students who had "stopped out" of college for many reasons (academic dismissal, financial problems, personal problems, etc.) and who later were accepted to return to school. Approximately 27% of the accepted students chose not to return. Those who returned had higher grade point averages, had completed…

  9. A Compulsory Bioethics Module for a Large Final Year Undergraduate Class

    ERIC Educational Resources Information Center

    Pearce, Roger S.

    2009-01-01

    The article describes a compulsory bioethics module delivered to [approximately] 120 biology students in their final year. The main intended learning outcome is that students should be able to analyse and reason about bioethical issues. Interactive lectures explain and illustrate bioethics. Underlying principles and example issues are used to…

  10. Beyond "Export Education": Aspiring to Put Students at the Heart of a University's Internationalisation Strategy

    ERIC Educational Resources Information Center

    Healey, Nigel M.

    2017-01-01

    For many universities around the world, internationalisation means the recruitment of fee-paying international students (so-called export education) for primarily commercial reasons. For UK universities, international (non-European Union) students account for approximately 13% of their annual revenues, making them highly dependent on international…

  11. Metals Emissions from the Open Detonation Treatment of Energetic Wastes

    DTIC Science & Technology

    2004-10-01

    CPIA Publication 477, Vol. I, March 1988. p. 139. 12. Naval Air Warfare Center Weapons Division. "Fragment Breakup Testing of BLU-97 Bomblets with PBXN ...volume at the time the particulate sample was collected was approximately 106 m3. For unknown reasons, the Army did not convert the detonation plume

  12. Free Fall and the Equivalence Principle Revisited

    ERIC Educational Resources Information Center

    Pendrill, Ann-Marie

    2017-01-01

    Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field.…

  13. An Annotated Bibliography of Some Recent Articles That Correlate with the Sewall Early Education Developmental Program (SEED).

    ERIC Educational Resources Information Center

    Jackson, Janice; Flamboe, Thomas C.

    The annotated bibliography contains approximately 110 references (1969-1976) of articles related to the Sewall Early Education Developmental Program. Entries are arranged alphabetically by author within the following seven topic areas: social emotional, gross motor, fine motor, adaptive reasoning, speech and language, feeding and dressing and…

  14. Teachers See What Ability Scores Cannot: Predicting Student Performance with Challenging Mathematics

    ERIC Educational Resources Information Center

    Foreman, Jennifer L.; Gubbins, E. Jean

    2015-01-01

    Teacher nominations of students are commonly used in gifted and talented identification systems to supplement psychometric measures of reasoning ability. In this study, second grade teachers were requested to nominate approximately one fourth of their students as having high learning potential in the year prior to the students' participation in a…

  15. A Study of Vocational Education Programs in the Michigan Department of Corrections.

    ERIC Educational Resources Information Center

    Dirkx, John M.; Kielbaso, Gloria; Corley, Charles

    Rapid expansion of the prison population in Michigan has created concern for consistency, continuity, and articulation within the Michigan Department of Corrections vocational programs, which serve approximately 1,800 prisoners at a time. For this reason, a study was undertaken to determine how vocational education within Michigan's prisons might…

  16. Females' Reasons for Their Physical Aggression in Dating Relationships

    ERIC Educational Resources Information Center

    Hettrich, Emma L.; O'Leary, K. Daniel

    2007-01-01

    Approximately 32% of dating college females reported that they engaged in physical aggression against their partners and that they engaged in acts of physical aggression more often than their male partners engaged in aggression against them. However, the females also reported that their male partners attempted to force them to engage in oral sex…

  17. Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.

    PubMed

    Campos, Daniel G

    2018-02-01

    This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing: the theorem that founded, historically, the field of mathematical probability. It aims to contribute a perspective into the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique to make experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.
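
Archimedes' approximation referenced above brackets π between the perimeters of inscribed and circumscribed regular polygons, doubling the number of sides from a hexagon up to the 96-gon. A sketch of that side-doubling recurrence, with modern floating-point arithmetic standing in for Archimedes' rational bounds:

```python
from math import sqrt

def archimedes_bounds(doublings: int = 4):
    """Bound pi between half-perimeters of inscribed and circumscribed
    regular polygons on the unit circle, doubling sides from a hexagon."""
    n, s = 6, 1.0                            # inscribed hexagon side = 1
    for _ in range(doublings):
        s = sqrt(2.0 - sqrt(4.0 - s * s))    # inscribed side-doubling
        n *= 2
    lower = n * s / 2.0                      # inscribed half-perimeter
    upper = n * s / sqrt(4.0 - s * s)        # circumscribed half-perimeter
    return n, lower, upper

n, lo, hi = archimedes_bounds(4)             # 96-gon, as Archimedes used
print(n, round(lo, 4), round(hi, 4))         # bounds bracket 3.1416
```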

  18. Organizational, Cultural, and Psychological Determinants of Smart Infusion Pump Work Arounds: A Study of 3 U.S. Health Systems.

    PubMed

    Dunford, Benjamin B; Perrigino, Matthew; Tucker, Sharon J; Gaston, Cynthia L; Young, Jim; Vermace, Beverly J; Walroth, Todd A; Buening, Natalie R; Skillman, Katherine L; Berndt, Dawn

    2017-09-01

    We investigated nurse perceptions of smart infusion medication pumps to provide evidence-based insights on how to help reduce work arounds and improve compliance with patient safety policies. Specifically, we investigated the following 3 research questions: (1) What are nurses' current attitudes about smart infusion pumps? (2) What do nurses think are the causes of smart infusion pump work arounds? and (3) To whom do nurses turn for smart infusion pump training and troubleshooting? We surveyed a large number of nurses (N = 818) in 3 U.S.-based health care systems to address the research questions above. We assessed nurses' opinions about smart infusion pumps, organizational perceptions, and the reasons for work arounds using a voluntary and anonymous Web-based survey. Using qualitative research methods, we coded open-ended responses to questions about the reasons for work arounds to organize responses into useful categories. The nurses reported widespread satisfaction with smart infusion pumps. However, they reported numerous organizational, cultural, and psychological causes of smart pump work arounds. Of 1029 open-ended responses to the question "why do smart pump work arounds occur?" approximately 44% of the causes were technology related, 47% were organization related, and 9% were related to individual factors. Finally, an overwhelming majority of nurses reported seeking solutions to smart pump problems from coworkers and being trained primarily on the job. Hospitals may significantly improve adherence to smart pump safety features by addressing the nontechnical causes of work arounds and by providing more leadership and formalized training for resolving smart pump-related problems.

  19. Case-based reasoning: The marriage of knowledge base and data base

    NASA Technical Reports Server (NTRS)

    Pulaski, Kirt; Casadaban, Cyprian

    1988-01-01

    The coupling of data and knowledge has a synergistic effect when building an intelligent data base. The goal is to integrate the data and knowledge almost to the point of indistinguishability, permitting them to be used interchangeably. Examples given in this paper suggest that Case-Based Reasoning is a more integrated way to link data and knowledge than pure rule-based reasoning.

  20. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

    A method of using knowledge based and case based reasoning to assist designers during conceptual design tasks of composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process of composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues into the conceptual design phase reduces potential of defective, and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A Knowledge Based Reasoning system was developed using the CLIPS (C Language Interpretive Procedural System) environment and a Case Based Reasoning System was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge based and case based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs and warnings of potential problems and pitfalls.
The case base complemented the knowledge base and extended the problem solving capability beyond the existence of limited well defined rules. The findings indicated that the technique is most effective when used as a design aid and not as a tool to totally automate the composites design process. Other areas of application and implications for future research are discussed.

  1. Rates and Reasons for Early Change of First HAART in HIV-1-Infected Patients in 7 Sites throughout the Caribbean and Latin America

    PubMed Central

    Cesar, Carina; Shepherd, Bryan E.; Krolewiecki, Alejandro J.; Fink, Valeria I.; Schechter, Mauro; Tuboi, Suely H.; Wolff, Marcelo; Pape, Jean W.; Leger, Paul; Padgett, Denis; Madero, Juan Sierra; Gotuzzo, Eduardo; Sued, Omar; McGowan, Catherine C.; Masys, Daniel R.; Cahn, Pedro E.

    2010-01-01

    Background HAART rollout in Latin America and the Caribbean has increased from approximately 210,000 in 2003 to 390,000 patients in 2007, covering 62% (51%–70%) of eligible patients, with considerable variation among countries. No multi-cohort study has examined rates of and reasons for change of initial HAART in this region. Methodology Antiretroviral-naïve patients > = 18 years who started HAART between 1996 and 2007 and had at least one follow-up visit from sites in Argentina, Brazil, Chile, Haiti, Honduras, Mexico and Peru were included. Time from HAART initiation to change (stopping or switching any antiretrovirals) was estimated using Kaplan-Meier techniques. Cox proportional hazards modeled the associations between change and demographics, initial regimen, baseline CD4 count, and clinical stage. Principal Findings Of 5026 HIV-infected patients, 35% were female, median age at HAART initiation was 37 years (interquartile range [IQR], 31–44), and median CD4 count was 105 cells/uL (IQR, 38–200). Estimated probabilities of changing within 3 months and one year of HAART initiation were 16% (95% confidence interval (CI) 15–17%) and 28% (95% CI 27–29%), respectively. Efavirenz-based regimens and no clinical AIDS at HAART initiation were associated with lower risk of change (hazard ratio (HR) = 1.7 (95% CI 1.1–2.6) and 2.1 (95% CI 1.7–2.5) comparing nevirapine-based regimens and other regimens to efavirenz, respectively; HR = 1.3 (95% CI 1.1–1.5) for clinical AIDS at HAART initiation). The primary reasons for change among HAART initiators were adverse events (14%), death (5.7%), and failure (1.3%), with specific toxicities varying among sites. After change, most patients remained in first line regimens. Conclusions Adverse events were the leading cause for changing initial HAART. Predictors for change due to any reason were AIDS at baseline and the use of a non-efavirenz containing regimen. 
Differences between participant sites were observed and require further investigation. PMID:20531956
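
The time-to-change probabilities above come from Kaplan-Meier estimation, which handles patients whose follow-up ends before any regimen change (censoring). A minimal sketch of the product-limit estimator on toy data (invented for illustration, not the study's 5026-patient cohort):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. times: follow-up in months;
    events: 1 if the event (regimen change) occurred, 0 if censored.
    Ties are processed events-first, the usual convention."""
    at_risk = len(times)
    surv, curve = 1.0, []
    for t, d in sorted(zip(times, events), key=lambda p: (p[0], -p[1])):
        if d:                           # event observed at time t
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1                    # event or censoring leaves the risk set
    return curve

# toy data: months to regimen change (0 = censored, still on first regimen)
curve = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print([(t, round(s, 2)) for t, s in curve])  # [(2, 0.8), (3, 0.6), (5, 0.3)]
```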

  2. The effects of reasons given for ineligibility on perceived gender discrimination and feelings of injustice.

    PubMed

    Kappen, D M; Branscombe, N R

    2001-06-01

    We examine whether the reason given for a negative outcome influences the likelihood of making gender discrimination attributions. Men and women were given one of four reasons for their ineligibility to attend an event: an explicit gender reason, a reason based on an attribute correlated with gender, that same gender-related reason with explanatory information attached, or they were given no reason. Providing participants with a reason based on a gender-related attribute deflected them from making attributions to gender discrimination, indicating that discrimination attributions can easily be averted. Adding explanatory information to the gender-related reason decreased feelings of injustice, illegitimacy and anger while increasing acceptance of the outcome.

  3. The Evidence-Based Reasoning Framework: Assessing Scientific Reasoning

    ERIC Educational Resources Information Center

    Brown, Nathaniel J. S.; Furtak, Erin Marie; Timms, Michael; Nagashima, Sam O.; Wilson, Mark

    2010-01-01

    Recent science education reforms have emphasized the importance of students engaging with and reasoning from evidence to develop scientific explanations. A number of studies have created frameworks based on Toulmin's (1958/2003) argument pattern, whereas others have developed systems for assessing the quality of students' reasoning to support…

  4. Reasons for low aerodynamic performance of 13.5-centimeter-tip-diameter aircraft engine starter turbine

    NASA Technical Reports Server (NTRS)

    Haas, J. E.; Roelke, R. J.; Hermann, P.

    1981-01-01

    The reasons for the low aerodynamic performance of a 13.5 cm tip diameter aircraft engine starter turbine were investigated. Both the stator and the stage were evaluated. Approximately 10 percent improvement in turbine efficiency was obtained when the honeycomb shroud over the rotor blade tips was filled to obtain a solid shroud surface. Efficiency improvements were obtained for three rotor configurations when the shroud was filled. It is suggested that the large loss associated with the open honeycomb shroud is due primarily to energy loss associated with gas transportation as a result of the blade-to-blade pressure differential at the tip section.

  5. Student Interpretations of Phylogenetic Trees in an Introductory Biology Course

    PubMed Central

    Dees, Jonathan; Niemi, Jarad; Montplaisir, Lisa

    2014-01-01

    Phylogenetic trees are widely used visual representations in the biological sciences and the most important visual representations in evolutionary biology. Therefore, phylogenetic trees have also become an important component of biology education. We sought to characterize reasoning used by introductory biology students in interpreting taxa relatedness on phylogenetic trees, to measure the prevalence of correct taxa-relatedness interpretations, and to determine how student reasoning and correctness change in response to instruction and over time. Counting synapomorphies and nodes between taxa were the most common forms of incorrect reasoning, which presents a pedagogical dilemma concerning labeled synapomorphies on phylogenetic trees. Students also independently generated an alternative form of correct reasoning using monophyletic groups, the use of which decreased in popularity over time. Approximately half of all students were able to correctly interpret taxa relatedness on phylogenetic trees, and many memorized correct reasoning without understanding its application. Broad initial instruction that allowed students to generate inferences on their own contributed very little to phylogenetic tree understanding, while targeted instruction on evolutionary relationships improved understanding to some extent. Phylogenetic trees, which can directly affect student understanding of evolution, appear to offer introductory biology instructors a formidable pedagogical challenge. PMID:25452489
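
The correct reading the study describes rests on most recent common ancestors: two taxa are more closely related the more recent their shared ancestor, regardless of how many nodes or labeled synapomorphies lie between them. A sketch with a tree stored as a child-to-parent map (taxon and node names are illustrative):

```python
def ancestors(parent, taxon):
    """Chain of ancestors from a taxon up to the root."""
    chain = []
    while taxon in parent:
        taxon = parent[taxon]
        chain.append(taxon)
    return chain

def mrca(parent, a, b):
    """Most recent common ancestor: relatedness is read from shared
    ancestry, not from counting intervening nodes between tip labels."""
    seen = set(ancestors(parent, a))
    for anc in ancestors(parent, b):
        if anc in seen:
            return anc
    return None

# toy tree ((human, chimp)P1, gorilla)P2 as a child -> parent map
parent = {"human": "P1", "chimp": "P1", "P1": "P2", "gorilla": "P2"}
print(mrca(parent, "human", "chimp"))    # P1: human is closer to chimp...
print(mrca(parent, "human", "gorilla"))  # P2: ...than to gorilla
```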

  6. Meta-regression approximations to reduce publication selection bias.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2014-03-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
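
PEESE, as described, regresses the reported effects on the square of their standard errors (a quadratic with no linear term), weighting by inverse variance; the intercept estimates the effect corrected for publication selection. A stdlib-only sketch on toy data (numbers invented for illustration):

```python
def peese(effects, ses):
    """PEESE: weighted least squares of effect on SE^2, weights 1/SE^2.
    Returns (b0, b1); b0 is the selection-corrected mean effect."""
    w = [1.0 / s**2 for s in ses]
    x = [s**2 for s in ses]
    Sw  = sum(w)
    Swx = sum(wi * xi for wi, xi in zip(w, x))
    Swy = sum(wi * yi for wi, yi in zip(w, effects))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, effects))
    b1 = (Sw * Sxy - Swx * Swy) / (Sw * Sxx - Swx**2)
    b0 = (Swy - b1 * Swx) / Sw
    return b0, b1

# toy meta-analysis: precise studies cluster near 0.2; imprecise ones inflate
effects = [0.21, 0.22, 0.35, 0.50]
ses     = [0.05, 0.08, 0.20, 0.30]
b0, b1 = peese(effects, ses)
print(round(b0, 2))  # ~0.2, close to the precise-study cluster
```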

  7. Macroscopic and microscopic components of exchange-correlation interactions

    NASA Astrophysics Data System (ADS)

    Sottile, F.; Karlsson, K.; Reining, L.; Aryasetiawan, F.

    2003-11-01

    We consider two commonly used approaches for the ab initio calculation of optical-absorption spectra, namely, many-body perturbation theory based on Green’s functions and time-dependent density-functional theory (TDDFT). The former leads to the two-particle Bethe-Salpeter equation that contains a screened electron-hole interaction. We approximate this interaction in various ways, and discuss in particular the results obtained for a local contact potential. This, in fact, allows us to straightforwardly make the link to the TDDFT approach, and to discuss the exchange-correlation kernel fxc that corresponds to the contact exciton. Our main results, illustrated in the examples of bulk silicon, GaAs, argon, and LiF, are the following. (i) The simple contact exciton model, used on top of an ab initio calculated band structure, yields reasonable absorption spectra. (ii) Qualitatively extremely different fxc can be derived approximatively from the same Bethe-Salpeter equation. These kernels can however yield very similar spectra. (iii) A static fxc, both with or without a long-range component, can create transitions in the quasiparticle gap. To the best of our knowledge, this is the first time that TDDFT has been shown to be able to reproduce bound excitons.

  8. Data fitting and image fine-tuning approach to solve the inverse problem in fluorescence molecular imaging

    NASA Astrophysics Data System (ADS)

    Gorpas, Dimitris; Politopoulos, Kostas; Yova, Dido; Andersson-Engels, Stefan

    2008-02-01

    One of the most challenging problems in medical imaging is to "see" a tumour embedded in tissue, which is a turbid medium, by using fluorescent probes for tumour labeling. Despite the efforts made in recent years, this problem has not yet been fully solved, due to the non-linear nature of the inverse problem and the convergence failures of many optimization techniques. This paper describes a robust solution of the inverse problem, based on data fitting and image fine-tuning techniques. As a forward solver, the coupled radiative transfer equation and diffusion approximation model is proposed and solved via a finite element method, enhanced with adaptive multi-grids for faster and more accurate convergence. A database is constructed by application of the forward model on virtual tumours with known geometry, and thus fluorophore distribution, embedded into simulated tissues. The fitting procedure produces the best match between the real and virtual data, and thus provides the initial estimate of the fluorophore distribution. Using this information, the coupled radiative transfer equation and diffusion approximation model has the required initial values for a computationally reasonable and successful convergence during the image fine-tuning step.

  9. Features in the Behavior of the Solar Wind behind the Bow Shock Front near the Boundary of the Earth's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Grib, S. A.; Leora, S. N.

    2017-12-01

    Macroscopic discontinuous structures observed in the solar wind are considered in the framework of magnetohydrodynamics. The interaction of strong discontinuities is studied based on the solution of the generalized Riemann-Kochin problem. The appearance of discontinuities inside the magnetosheath after the collision of the solar wind shock wave with the bow shock front is taken into account. The propagation of secondary waves appearing in the magnetosheath is considered in the approximation of one-dimensional ideal magnetohydrodynamics. The appearance of a compression wave reflected from the magnetopause is indicated. The wave can break nonlinearly, forming a backward shock wave and causing the bow shock to move towards the Sun. The interaction between shock waves is considered with the well-known trial calculation method. It is assumed that the velocity of discontinuities in the magnetosheath is, to a first approximation, constant on average. All reasoning and calculations apply to a flow region with a velocity less than the magnetosonic speed near the Earth-Sun line. The results agree with observational data from the WIND and Cluster spacecraft.

  10. Rocksalt or cesium chloride: Investigating the relative stability of the cesium halide structures with random phase approximation based methods

    NASA Astrophysics Data System (ADS)

    Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.

    2018-03-01

    The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.

  11. A combinatorial approach to protein docking with flexible side chains.

    PubMed

    Althaus, Ernst; Kohlbacher, Oliver; Lenhof, Hans-Peter; Müller, Peter

    2002-01-01

    Rigid-body docking approaches are not sufficient to predict the structure of a protein complex from the unbound (native) structures of the two proteins. Accounting for side chain flexibility is an important step towards fully flexible protein docking. This work describes an approach that allows conformational flexibility for the side chains while keeping the protein backbone rigid. Starting from candidates created by a rigid-docking algorithm, we demangle the side chains of the docking site, thus creating reasonable approximations of the true complex structure. These structures are ranked with respect to the binding free energy. We present two new techniques for side chain demangling. Both approaches are based on a discrete representation of the side chain conformational space by the use of a rotamer library. This leads to a combinatorial optimization problem. For the solution of this problem, we propose a fast heuristic approach and an exact, albeit slower, method that uses branch-and-cut techniques. As a test set, we use the unbound structures of three proteases and the corresponding protein inhibitors. For each of the examples, the highest-ranking conformation produced was a good approximation of the true complex structure.
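The combinatorial formulation described above can be sketched in a few lines. The two-rotamer library, the self and pairwise energy tables, and the three-residue toy system below are all invented for illustration, with exhaustive search standing in for the paper's exact branch-and-cut solver:

```python
import itertools

# Toy side-chain placement: each residue i chooses one rotamer r from a small
# library; the score is the sum of self energies E1[i][r] plus pairwise clash
# energies E2[(i, j)][(ri, rj)].  All numbers are made up for illustration.
E1 = {0: [0.5, 0.2], 1: [0.1, 0.4], 2: [0.3, 0.3]}
E2 = {(0, 1): {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8},
      (1, 2): {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.6, (1, 1): 0.1}}

def energy(assignment):
    e = sum(E1[i][r] for i, r in assignment.items())
    for (i, j), table in E2.items():
        e += table[(assignment[i], assignment[j])]
    return e

def exact(residues=3, rotamers=2):
    """Brute force over the full combinatorial space (stand-in for branch-and-cut)."""
    best = min(itertools.product(range(rotamers), repeat=residues),
               key=lambda combo: energy(dict(enumerate(combo))))
    return dict(enumerate(best))

best = exact()
print(best, energy(best))  # {0: 1, 1: 0, 2: 0} is the minimum-energy choice here
```

The exponential cost of the brute-force search is exactly why the authors propose a fast heuristic alongside the exact branch-and-cut method.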

  12. Electromagnetic launch of lunar material

    NASA Technical Reports Server (NTRS)

    Snow, William R.; Kolm, Henry H.

    1992-01-01

    Lunar soil can become a source of relatively inexpensive oxygen propellant for vehicles going from low Earth orbit (LEO) to geosynchronous Earth orbit (GEO) and beyond. This lunar oxygen could replace the oxygen propellant that, in current plans for these missions, is launched from the Earth's surface and amounts to approximately 75 percent of the total mass. The reason for considering the use of oxygen produced on the Moon is that the cost for the energy needed to transport things from the lunar surface to LEO is approximately 5 percent of the cost from the surface of the Earth to LEO. Electromagnetic launchers, in particular the superconducting quenchgun, provide a method of getting this lunar oxygen off the lunar surface at minimal cost. This cost savings comes from the fact that the superconducting quenchgun gets its launch energy from locally supplied, solar- or nuclear-generated electrical power. We present a preliminary design to show the main features and components of a lunar-based superconducting quenchgun for use in launching 1-ton containers of liquid oxygen, one every 2 hours. At this rate, nearly 4400 tons of liquid oxygen would be launched into low lunar orbit in a year.
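The quoted launch rate can be checked directly; using only the numbers in the abstract, one 1-ton launch every 2 hours works out to "nearly 4400 tons" per year:

```python
# One 1-ton liquid-oxygen container launched every 2 hours.
HOURS_PER_YEAR = 365.25 * 24             # mean solar year in hours
launches_per_year = HOURS_PER_YEAR / 2   # one launch per 2 hours
tons_per_year = launches_per_year * 1.0  # 1 ton per launch

print(round(tons_per_year))  # 4383, i.e. "nearly 4400 tons"
```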

  13. SOLAR WAVE-FIELD SIMULATION FOR TESTING PROSPECTS OF HELIOSEISMIC MEASUREMENTS OF DEEP MERIDIONAL FLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartlep, T.; Zhao, J.; Kosovichev, A. G.

    2013-01-10

    The meridional flow in the Sun is an axisymmetric flow that is generally directed poleward at the surface, and is presumed to be of fundamental importance in the generation and transport of magnetic fields. Its true shape and strength, however, are debated. We present a numerical simulation of helioseismic wave propagation in the whole solar interior in the presence of a prescribed, stationary, single-cell, deep meridional circulation serving as synthetic data for helioseismic measurement techniques. A deep-focusing time-distance helioseismology technique is applied to the synthetic data, showing that it can in fact be used to measure the effects of the meridional flow very deep in the solar convection zone. It is shown that the ray approximation that is commonly used for interpretation of helioseismology measurements remains a reasonable approximation even for very long distances between 12° and 42° corresponding to depths between 52 and 195 Mm. From the measurement noise, we extrapolate that time-resolved observations on the order of a full solar cycle may be needed to probe the flow all the way to the base of the convection zone.

  14. Alcohol-Related Content of Animated Cartoons: A Historical Perspective

    PubMed Central

    Klein, Hugh; Shiffman, Kenneth S.

    2013-01-01

    This study, based on a stratified (by decade of production) random sample of 1,221 animated cartoons and 4,201 characters appearing in those cartoons, seeks to determine the prevalence of alcohol-related content; how, if at all, the prevalence changed between 1930 and 1996 (the years spanned by this research); and the types of messages that animated cartoons convey about beverage alcohol and drinking in terms of the characteristics that are associated with alcohol use, the contexts in which alcohol is used in cartoons, and the reasons why cartoon characters purportedly consume alcohol. Approximately 1 cartoon in 11 was found to contain alcohol-related content, indicating that the average child or adolescent viewer is exposed to approximately 24 alcohol-related messages each week just from the cartoons that he/she watches. Data indicated that the prevalence of alcohol-related content declined significantly over the years. Quite often, alcohol consumption was shown to result in no effects whatsoever for the drinker, and alcohol use often occurred when characters were alone. Overall, mixed, ambivalent messages were provided about drinking and the types of characters that did or did not consume alcoholic beverages. PMID:24350176

  15. Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach

    NASA Astrophysics Data System (ADS)

    Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.

    2016-01-01

    We present simulations of neutron-rich matter at sub-nuclear densities, like supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV employing a full Skyrme interaction in a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi distributed plane waves the calculations reflect a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Further we observe gyroid-like structures, discussed e.g. in [2], which are formed spontaneously choosing a certain value of the simulation box length. The ρ-T-map of pasta shapes is basically consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition we propose the variance in the cell-density distribution as a measure to distinguish pasta matter from uniform matter.

  16. A comparative study of an ABC and an artificial absorber for truncating finite element meshes

    NASA Technical Reports Server (NTRS)

    Oezdemir, T.; Volakis, John L.

    1993-01-01

    The type of mesh termination used in the context of finite element formulations plays a major role in the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.

  17. An architecture for a continuous, user-driven, and data-driven application of clinical guidelines and its evaluation.

    PubMed

    Shalom, Erez; Shahar, Yuval; Lunenfeld, Eitan

    2016-02-01

    Design, implement, and evaluate a new architecture for realistic continuous guideline (GL)-based decision support, based on a series of requirements that we have identified, such as support for continuous care, for multiple task types, and for data-driven and user-driven modes. We designed and implemented a new continuous GL-based support architecture, PICARD, which accesses a temporal reasoning engine, and provides several different types of application interfaces. We present the new architecture in detail in the current paper. To evaluate the architecture, we first performed a technical evaluation of the PICARD architecture, using 19 simulated scenarios in the preeclampsia/toxemia domain. We then performed a functional evaluation with the help of two domain experts, by generating patient records that simulate 60 decision points from six clinical guideline-based scenarios, lasting from two days to four weeks. Finally, 36 clinicians made manual decisions in half of the scenarios, and had access to the automated GL-based support in the other half. The measures used in all three experiments were correctness and completeness of the decisions relative to the GL. Mean correctness and completeness in the technical evaluation were 1±0.0 and 0.96±0.03 respectively. The functional evaluation produced only several minor comments from the two experts, mostly regarding the output's style; otherwise the system's recommendations were validated. In the clinically oriented evaluation, the 36 clinicians applied manually approximately 41% of the GL's recommended actions. Completeness increased to approximately 93% when using PICARD. Manual correctness was approximately 94.5%, and remained similar when using PICARD; but while 68% of the manual decisions included correct but redundant actions, only 3% of the actions included in decisions made when using PICARD were redundant. 
The PICARD architecture is technically feasible and is functionally valid, and addresses the realistic continuous GL-based application requirements that we have defined; in particular, the requirement for care over significant time frames. The use of the PICARD architecture in the domain we examined resulted in enhanced completeness and in reduction of redundancies, and is potentially beneficial for general GL-based management of chronic patients. Copyright © 2015 Elsevier Inc. All rights reserved.
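The two evaluation measures can be illustrated with a small sketch. The abstract does not quote exact definitions, so this precision/recall-style reading of "correctness" and "completeness" (and the action names) is an assumption for illustration only:

```python
# Assumed reading: at each decision point, compare a clinician's chosen
# actions against the guideline's recommended actions.
def scores(chosen, recommended):
    chosen, recommended = set(chosen), set(recommended)
    completeness = len(chosen & recommended) / len(recommended)  # recalled
    correctness = (len(chosen & recommended) / len(chosen)       # non-redundant
                   if chosen else 1.0)
    return correctness, completeness

# Hypothetical decision point: all chosen actions are correct, but one
# recommended action was missed.
c, comp = scores({'check_bp', 'order_labs'}, {'check_bp', 'order_labs', 'admit'})
print(c, comp)
```

Under this reading, the reported drop in redundant actions with PICARD corresponds to higher correctness at comparable completeness.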

  18. Machine Learning-based Intelligent Formal Reasoning and Proving System

    NASA Astrophysics Data System (ADS)

    Chen, Shengqing; Huang, Xiaojian; Fang, Jiaze; Liang, Jia

    2018-03-01

    Reasoning systems are used in many fields, and improving reasoning efficiency is central to their design. By combining a formal description of proofs and a rule-matching algorithm with a machine learning algorithm, the resulting intelligent formal reasoning and verification system achieves high efficiency. Experimental results show that the system can verify the correctness of propositional logic reasoning and reuse earlier propositional reasoning results, thereby deriving implicit knowledge from the knowledge base and providing a basic reasoning model for the construction of intelligent systems.

  19. Dynamic reasoning in a knowledge-based system

    NASA Technical Reports Server (NTRS)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision are introduced, and axioms which specify how the knowledge base should change when the external world changes are also specified. Accordingly, the notion of dynamic reasoning is introduced which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
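The three primitive operations can be sketched on a flat set of literals. A real belief-revision system of the kind described operates on logically closed theories with a coherence ordering; this minimal sketch only illustrates expansion, contraction, and revision via the Levi identity, with invented literals:

```python
# Minimal sketch of the three belief-change operations on a flat set of
# string literals ('~p' is the negation of 'p').
def negate(p):
    return p[1:] if p.startswith('~') else '~' + p

def expand(beliefs, p):     # add p, keeping everything else (minimize change)
    return beliefs | {p}

def contract(beliefs, p):   # give up p; here, drop only p itself
    return beliefs - {p}

def revise(beliefs, p):     # Levi identity: contract ~p, then expand by p
    return expand(contract(beliefs, negate(p)), p)

kb = {'arm_free', '~docked'}
kb = revise(kb, 'docked')   # the external world changed: the craft docked
print(sorted(kb))           # ['arm_free', 'docked']
```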

  20. Testing actinide fission yield treatment in CINDER90 for use in MCNP6 burnup calculations

    DOE PAGES

    Fensin, Michael Lorne; Umbel, Marissa

    2015-09-18

    Most of the development of the MCNPX/6 burnup capability focused on features that were applied to the Boltzmann transport or used to prepare coefficients for use in CINDER90, with little change to CINDER90 or the CINDER90 data. Though a scheme exists for best solving the coupled Boltzmann and Bateman equations, the most significant approximation is that the employed nuclear data are correct and complete. Thus, the CINDER90 library file contains 60 different actinide fission yields encompassing 36 fissionable actinides (thermal, fast, high energy and spontaneous fission). Fission reaction data exist for more than 60 actinides and, as a result, fission yield data must be approximated for actinides that do not possess fission yield information. Several types of approximations are used for estimating fission yields for actinides which do not possess explicit fission yield data. The objective of this study is to test whether or not certain approximations of fission yield selection have any impact on predictability of major actinides and fission products. Further, we assess which other fission products, available in MCNP6 Tier 3, result in the largest difference in production. Because the CINDER90 library file is in ASCII format and therefore easily amendable, we assess the rationale for, and compare the actinide and major fission product predictions of, three separate fission yield selection methods for the H. B. Robinson benchmark: (1) the current CINDER90 library file method (Base); (2) the element method (Element); and (3) the isobar method (Isobar). Results show that the three methods tested result in similar prediction of major actinides, Tc-99 and Cs-137; however, certain fission products resulted in significantly different production depending on the method of choice.
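The idea of a yield-selection fallback can be sketched as a lookup with donor nuclides. The abstract names the Element and Isobar methods without detail, so the nearest-neighbor heuristics and the tiny yield "library" below are assumptions for illustration, not the actual CINDER90 data or logic:

```python
# Tiny invented yield library keyed by (Z, A).
YIELDS = {(92, 235): 'U-235 yield set', (92, 238): 'U-238 yield set',
          (94, 239): 'Pu-239 yield set'}

def pick_yield(Z, A, method='element'):
    if (Z, A) in YIELDS:                      # explicit data always wins
        return YIELDS[(Z, A)]
    if method == 'element':                   # same element, nearest mass number
        same_z = [k for k in YIELDS if k[0] == Z]
        donor = min(same_z, key=lambda k: abs(k[1] - A)) if same_z else None
    elif method == 'isobar':                  # same mass number, nearest charge
        same_a = [k for k in YIELDS if k[1] == A]
        donor = min(same_a, key=lambda k: abs(k[0] - Z)) if same_a else None
    else:                                     # 'base': one fixed default set
        donor = (92, 235)
    return YIELDS.get(donor)

print(pick_yield(92, 236, 'element'))  # U-235 yield set (nearest uranium mass)
print(pick_yield(93, 239, 'isobar'))   # Pu-239 yield set (nearest A=239 charge)
```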

  1. A distributed agent architecture for real-time knowledge-based systems: Real-time expert systems project, phase 1

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1990-01-01

    We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory that can guarantee schedulability based on analytical methods. AI techniques under consideration for reactive agents are approximate or anytime reasoning that can be implemented using Bayesian belief networks as in Guardian. Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration while Lisp-based technologies make it difficult if not impossible. In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control.
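The rate monotonic schedulability guarantee mentioned for the reactive agents has a well-known analytical form: n periodic tasks are guaranteed schedulable under fixed-priority rate monotonic scheduling if total CPU utilization stays below the Liu-Layland bound n(2^(1/n) - 1). The task set below is hypothetical:

```python
# Rate monotonic schedulability test (sufficient condition).
def rm_schedulable(tasks):                 # tasks: list of (compute_time, period)
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)         # Liu-Layland utilization bound
    return utilization <= bound, utilization, bound

# Three hypothetical periodic reactive-agent tasks.
ok, u, b = rm_schedulable([(1, 4), (1, 5), (2, 10)])
print(ok, round(u, 3), round(b, 3))  # True 0.65 0.78
```

The test is conservative: task sets above the bound may still be schedulable, but passing it guarantees all deadlines are met, which is what "guarantee schedulability based on analytical methods" refers to.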

  2. Ammonia-based feedforward and feedback aeration control in activated sludge processes.

    PubMed

    Rieger, Leiv; Jones, Richard M; Dold, Peter L; Bott, Charles B

    2014-01-01

    Aeration control at wastewater treatment plants based on ammonia as the controlled variable is applied for one of two reasons: (1) to reduce aeration costs, or (2) to reduce peaks in effluent ammonia. Aeration limitation has proven to result in significant energy savings, may reduce external carbon addition, and can improve denitrification and biological phosphorus (bio-P) performance. Ammonia control for limiting aeration has been based mainly on feedback control to constrain complete nitrification by maintaining approximately one to two milligrams of nitrogen per liter of ammonia in the effluent. Increased attention has been given to feedforward ammonia control, where aeration control is based on monitoring influent ammonia load. Typically, the intent is to anticipate the impact of sudden load changes, and thereby reduce effluent ammonia peaks. This paper evaluates the fundamentals of ammonia control with a primary focus on feedforward control concepts. A case study discussion is presented that reviews different ammonia-based control approaches. In most instances, feedback control meets the objectives for both aeration limitation and containment of effluent ammonia peaks. Feedforward control, applied specifically for switching aeration on or off in swing zones, can be beneficial when the plant encounters particularly unusual influent disturbances.
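The feedback and feedforward roles described above can be sketched as a dissolved-oxygen setpoint calculation. The gains, limits, nominal setpoint, and the 1.5 mg N/L target (the middle of the one-to-two band) are all invented for illustration, not taken from the paper:

```python
# Hedged sketch of ammonia-based aeration control: a feedback term holds
# effluent ammonia near the target band, and an optional feedforward term
# anticipates influent load changes.
def do_setpoint(nh4_effluent, nh4_influent_load, target=1.5,
                kp=0.8, kf=0.05, lo=0.5, hi=3.0):
    feedback = kp * (nh4_effluent - target)   # above target -> aerate more
    feedforward = kf * nh4_influent_load      # anticipate the incoming load
    sp = 2.0 + feedback + feedforward         # around a nominal 2.0 mg O2/L
    return min(hi, max(lo, sp))               # clamp to aeration limits

print(do_setpoint(nh4_effluent=2.5, nh4_influent_load=10))  # load peak: more air
print(do_setpoint(nh4_effluent=0.8, nh4_influent_load=4))   # low ammonia: save energy
```

Setting kf to zero recovers pure feedback control, which, per the case studies, is often sufficient for both aeration limitation and peak containment.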

  3. Experimental and theoretical triple differential cross sections for electron-impact ionization of Ar (3p) for equal energy final state electrons

    NASA Astrophysics Data System (ADS)

    Amami, Sadek; Ozer, Zehra N.; Dogan, Mevlut; Yavuz, Murat; Varol, Onur; Madison, Don

    2016-09-01

    There have been several studies of electron-impact ionization of inert gases for asymmetric final-state energy sharing, where normally one electron has an energy significantly higher than the other. However, there have been relatively few studies examining equal energy final state electrons. Here we report experimental and theoretical triple differential cross sections for electron impact ionization of Ar (3p) for equal energy sharing of the outgoing electrons. Previous experimental results combined with some new measurements are compared with distorted wave Born approximation (DWBA) results, DWBA results using the Ward-Macek (WM) approximation for the post collision interaction (PCI), and three-body distorted wave (3DW) results, which include PCI without approximation. The results show that it is crucially important to include PCI in the calculation particularly for lower energies and that the WM approximation is valid only for high energies. The 3DW, on the other hand, is in reasonably good agreement with data down to fairly low energies.

  4. Mean-field approximation for the Sznajd model in complex networks

    NASA Astrophysics Data System (ADS)

    Araújo, Maycon S.; Vannucchi, Fabio S.; Timpanaro, André M.; Prado, Carmen P. C.

    2015-02-01

    This paper studies the Sznajd model for opinion formation in a population connected through a general network. A master equation describing the time evolution of opinions is presented and solved in a mean-field approximation. Although quite simple, this approximation allows us to capture the most important features regarding the steady states of the model. When spontaneous opinion changes are included, a discontinuous transition from consensus to polarization can be found as the rate of spontaneous change is increased. In this case we show that a hybrid mean-field approach including interactions between second nearest neighbors is necessary to estimate correctly the critical point of the transition. The analytical prediction of the critical point is also compared with numerical simulations in a wide variety of networks, in particular Barabási-Albert networks, finding reasonable agreement despite the strong approximations involved. The same hybrid approach that made it possible to deal with second-order neighbors could just as well be adapted to treat other problems such as epidemic spreading or predator-prey systems.
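The paper's mean-field treatment is analytic, but the underlying update rule is simple to state. The sketch below is a direct simulation on a ring rather than a general network, with all parameters invented; it illustrates the pair-convinces-neighbors rule and the spontaneous-change term:

```python
import random

# Minimal Sznajd dynamics on a ring: an agreeing adjacent pair imposes its
# opinion on its two outer neighbors; with probability p_flip per step a
# random agent changes opinion spontaneously.
def sznajd(n=100, steps=20000, p_flip=0.0, seed=1):
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        if p_flip and rng.random() < p_flip:   # spontaneous opinion change
            k = rng.randrange(n)
            s[k] = -s[k]
        i = rng.randrange(n)
        j = (i + 1) % n
        if s[i] == s[j]:                       # an agreeing pair convinces...
            s[(i - 1) % n] = s[i]              # ...its outer neighbors
            s[(j + 1) % n] = s[i]
    return sum(s) / n                          # magnetization in [-1, 1]

m = sznajd()
print(m)  # |m| near 1 indicates consensus; rare flips tend to preserve it
```

With p_flip = 0 the dynamics drift toward consensus (|m| = 1); raising the spontaneous-change rate is what produces the consensus-to-polarization transition the paper analyzes in mean field.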

  5. Morphology and mixing state of aged soot particles at a remote marine free troposphere site: Implications for optical properties

    DOE PAGES

    China, Swarup; Scarnato, Barbara; Owen, Robert C.; ...

    2015-01-14

    The radiative properties of soot particles depend on their morphology and mixing state, but their evolution during transport is still elusive. In this paper, we report observations from an electron microscopy analysis of individual particles transported in the free troposphere over long distances to the remote Pico Mountain Observatory in the Azores in the North Atlantic. Approximately 70% of the soot particles were highly compact and of those 26% were thinly coated. Discrete dipole approximation simulations indicate that this compaction results in an increase in soot single scattering albedo by a factor of ≤2.17. The top of the atmosphere directmore » radiative forcing is typically smaller for highly compact than mass-equivalent lacy soot. Lastly, the forcing estimated using Mie theory is within 12% of the forcing estimated using the discrete dipole approximation for a high surface albedo, implying that Mie calculations may provide a reasonable approximation for compact soot above remote marine clouds.« less

  6. The best-fit universe. [cosmological models

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1991-01-01

    Inflation provides very strong motivation for a flat Universe, Harrison-Zel'dovich (constant-curvature) perturbations, and cold dark matter. However, there are a number of cosmological observations that conflict with the predictions of the simplest such model: one with zero cosmological constant. They include the age of the Universe, dynamical determinations of Omega, galaxy-number counts, and the apparent abundance of large-scale structure in the Universe. While the discrepancies are not yet serious enough to rule out the simplest and best-motivated model, the current data point to a best-fit model with the following parameters: Omega(sub B) approximately equal to 0.03, Omega(sub CDM) approximately equal to 0.17, Omega(sub Lambda) approximately equal to 0.8, and H(sub 0) approximately equal to 70 km/(sec x Mpc), which significantly improves the concordance with observations. While there is no good reason to expect such a value for the cosmological constant, there is no physical principle that would rule it out.
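The quoted best-fit parameters satisfy the flatness condition favored by inflation, which can be checked directly:

```python
# Flatness condition: Omega_total = Omega_B + Omega_CDM + Omega_Lambda = 1.
omega_b, omega_cdm, omega_lambda = 0.03, 0.17, 0.8
omega_total = omega_b + omega_cdm + omega_lambda
assert abs(omega_total - 1.0) < 1e-9  # flat Universe, as inflation predicts
print(omega_total)
```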

  7. Influence of proportional number relationships on item accessibility and students' strategies

    NASA Astrophysics Data System (ADS)

    Carney, Michele B.; Smith, Everett; Hughes, Gwyneth R.; Brendefur, Jonathan L.; Crawford, Angela

    2016-12-01

    Proportional reasoning is important to students' future success in mathematics and science endeavors. More specifically, students' fluent and flexible use of scalar and functional relationships to solve problems is critical to their ability to reason proportionally. The purpose of this study is to investigate the influence of systematically manipulating the location of an integer multiplier—to press the scalar or functional relationship—on item difficulty and student solution strategies. We administered short-answer assessment forms to 473 students in grades 6-8 (approximate ages 11-14) and analyzed the data quantitatively with the Rasch model to examine item accessibility and qualitatively to examine student solution strategies. We found that manipulating the location of the integer multiplier encouraged students to make use of different aspects of proportional relationships without decreasing item accessibility. Implications for proportional reasoning curricular materials, instruction, and assessment are addressed.
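The scalar/functional distinction can be made concrete with a hypothetical missing-value item (the numbers here are invented, not taken from the study's assessment forms): if 4 tickets cost $12, what do 20 tickets cost?

```python
# Two routes to the same answer in a missing-value proportion problem.
tickets_a, cost_a, tickets_b = 4, 12, 20

# Scalar (within-quantity) relationship: 20 is 5 times 4, so scale the cost.
scalar_multiplier = tickets_b / tickets_a          # 5.0
cost_b_scalar = cost_a * scalar_multiplier

# Functional (between-quantity) relationship: each ticket costs 12/4 = $3.
unit_rate = cost_a / tickets_a                     # 3.0
cost_b_functional = unit_rate * tickets_b

assert cost_b_scalar == cost_b_functional == 60.0
print(cost_b_scalar)  # 60.0
```

Placing the integer multiplier in the scalar position (20/4 = 5) or the functional position (12/4 = 3) is exactly the manipulation the study uses to press one strategy or the other.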

  8. Emotional reasoning and parent-based reasoning in normal children.

    PubMed

    Morren, Mattijn; Muris, Peter; Kindt, Merel

    2004-01-01

    A previous study by Muris, Merckelbach, and Van Spauwen demonstrated that children display emotional reasoning irrespective of their anxiety levels. That is, when estimating whether a situation is dangerous, children not only rely on objective danger information but also on their own anxiety-response. The present study further examined emotional reasoning in children aged 7-13 years (N = 508). In addition, it was investigated whether children also show parent-based reasoning, which can be defined as the tendency to rely on anxiety-responses that can be observed in parents. Children completed self-report questionnaires of anxiety, depression, and emotional and parent-based reasoning. Evidence was found for both emotional and parent-based reasoning effects. More specifically, children's danger ratings were not only affected by objective danger information, but also by anxiety-response information in both objective danger and safety stories. High levels of anxiety and depression were significantly associated with the tendency to rely on anxiety-response information, but only in the case of safety scripts.

  9. Delusional Ideation, Cognitive Processes and Crime Based Reasoning.

    PubMed

    Wilkinson, Dean J; Caulfield, Laura S

    2017-08-01

    Probabilistic reasoning biases have been widely associated with levels of delusional belief ideation (Galbraith, Manktelow, & Morris, 2010; Lincoln, Ziegler, Mehl, & Rief, 2010; Speechley, Whitman, & Woodward, 2010; White & Mansell, 2009); however, little research has focused on biases occurring during everyday reasoning (Galbraith, Manktelow, & Morris, 2011), and moral and crime based reasoning (Wilkinson, Caulfield, & Jones, 2014; Wilkinson, Jones, & Caulfield, 2011). 235 participants were recruited across four experiments exploring crime based reasoning through different modalities and dual processing tasks. Study one explored delusional ideation when completing a visually presented crime based reasoning task. Study two explored the same task in an auditory presentation. Study three utilised a dual task paradigm to explore modality and executive functioning. Study four extended this paradigm to the auditory modality. The results indicated that modality and delusional ideation have a significant effect on individuals reasoning about violent and non-violent crime (p < .05), which could have implications for the presentation of evidence in applied settings such as the courtroom.

  10. Delusional Ideation, Cognitive Processes and Crime Based Reasoning

    PubMed Central

    Wilkinson, Dean J.; Caulfield, Laura S.

    2017-01-01

    Probabilistic reasoning biases have been widely associated with levels of delusional belief ideation (Galbraith, Manktelow, & Morris, 2010; Lincoln, Ziegler, Mehl, & Rief, 2010; Speechley, Whitman, & Woodward, 2010; White & Mansell, 2009); however, little research has focused on biases occurring during everyday reasoning (Galbraith, Manktelow, & Morris, 2011), and moral and crime based reasoning (Wilkinson, Caulfield, & Jones, 2014; Wilkinson, Jones, & Caulfield, 2011). 235 participants were recruited across four experiments exploring crime based reasoning through different modalities and dual processing tasks. Study one explored delusional ideation when completing a visually presented crime based reasoning task. Study two explored the same task in an auditory presentation. Study three utilised a dual task paradigm to explore modality and executive functioning. Study four extended this paradigm to the auditory modality. The results indicated that modality and delusional ideation have a significant effect on individuals reasoning about violent and non-violent crime (p < .05), which could have implications for the presentation of evidence in applied settings such as the courtroom. PMID:28904598

  11. Secular Climate Change on Mars: An Update Using One Mars Year of MSL Pressure Data

    NASA Technical Reports Server (NTRS)

    Haberle, R. M.; Gomez-Elvira, J.; de la Torre Juarez, M.; Harri, A-M.; Hollingsworth, J. L.; Kahanpaa, H.; Kahre, M. A.; Lemmon, M.; Martin-Torres, F. J.; Mischna, M.; et al.

    2014-01-01

    The South Polar Residual Cap (SPRC) on Mars is an icy reservoir of CO2. If all the CO2 trapped in the SPRC were released to the atmosphere, the mean annual global surface pressure would rise by approximately 20 Pa. Repeated MOC and HiRISE imaging of scarp retreat within the SPRC led to suggestions that the SPRC is losing mass. Estimates for the loss rate vary between 0.5 Pa per Mars decade and 13 Pa per Mars decade. Assuming 80% of this loss goes directly into the atmosphere, an estimate based on some modeling (Haberle and Kahre, 2010), and that the loss is monotonic, the global annual mean surface pressure should have increased by approximately 1-20 Pa since the Viking mission (approximately 20 Mars years ago). Surface pressure measurements by the Phoenix Lander only 2.5 Mars years ago were found to be consistent with these loss rates. Last year at this meeting we compared surface pressure data from the MSL mission through sol 360 with that from Viking Lander 2 (VL-2) for the same period to determine if the trend continues. The results were ambiguous. This year we have a full Mars year of MSL data to work with. Using the Ames GCM to compensate for dynamic and environmental differences, our analysis suggests that the mean annual pressure has decreased by approximately 8 Pa since Viking. This result implies that the SPRC has gained (not lost) mass since Viking. However, the estimated uncertainties in our analysis are easily at the 10 Pa level and possibly higher. Chief among these are the hydrostatic adjustment of surface pressure from grid point elevations to actual elevations and the simulated regional environmental conditions at the lander sites. For these reasons, the most reasonable conclusion is that there is no significant difference in the size of the atmosphere between now and Viking. This implies, but does not demand, that the mass of the SPRC has not changed since Viking.
Of course, year-to-year variations are possible, as implied by the Phoenix data. Given that there has been no unusual behavior in the climate system as observed by a variety of spacecraft at Mars since Phoenix, it seems more likely that the Phoenix data simply did not have a long enough record to accurately determine annual mean pressure changes, as Haberle and Kahre (2010) cautioned. In the absence of a strong signal in the MSL data, we conclude that if the SPRC is losing mass, it is not going into the atmospheric reservoir.

  12. Coherent Anomaly Method Calculation on the Cluster Variation Method. II. Critical Exponents of Bond Percolation Model

    NASA Astrophysics Data System (ADS)

    Wada, Koh; Watanabe, Naotosi; Uchida, Tetsuya

    1991-10-01

    The critical exponents of the bond percolation model are calculated in the D(=2, 3, \\cdots)-dimensional simple cubic lattice on the basis of Suzuki’s coherent anomaly method (CAM) by making use of a series of the pair, square-cactus and square approximations of the cluster variation method (CVM) in the s-state Potts model. These simple approximations give reasonable values of the critical exponents α, β, γ and ν in comparison with those estimated by other methods. It is also shown that the results of the pair and square-cactus approximations can be derived as exact results of the bond percolation model on the Bethe and square-cactus lattices, respectively, in the presence of a ghost field, without recourse to the s→1 limit of the s-state Potts model.

  13. Lower molar and incisor displacement associated with mandibular remodeling.

    PubMed

    Baumrind, S; Bravo, L A; Ben-Bassat, Y; Curry, S; Korn, E L

    1997-01-01

    The purpose of this study was to quantify the amount of alveolar modeling at the apices of the mandibular incisor and first molar specifically associated with appositional and resorptive changes on the lower border of the mandible during growth and treatment. Cephalometric data from superimpositions on anterior cranial base, mandibular implants of the Björk type, and anatomical "best fit" of mandibular border structures were integrated using a recently developed strategy, which is described. Data were available at annual intervals between 8.5 and 15.5 years for a previously described sample of approximately 30 children with implants. The average magnitudes of the changes at the root apices of the mandibular first molar and central incisor associated with modeling/remodeling of the mandibular border and symphysis were unexpectedly small. At the molar apex, mean values approximated zero in both anteroposterior and vertical directions. At the incisor apex, mean values approximated zero in the anteroposterior direction and averaged less than 0.15 mm/year in the vertical direction. Standard deviations were roughly equal for the molar and the incisor in both the anteroposterior and vertical directions. Dental displacement associated with surface modeling plays a smaller role in final tooth position in the mandible than in the maxilla. It may also be reasonably inferred that anatomical best-fit superimpositions made in the absence of implants give a more complete picture of hard tissue turnover in the mandible than they do in the maxilla.

  14. Utilization of advanced calibration techniques in stochastic rock fall analysis of quarry slopes

    NASA Astrophysics Data System (ADS)

    Preh, Alexander; Ahmadabadi, Morteza; Kolenprat, Bernd

    2016-04-01

    In order to study rock fall dynamics, a research project was conducted by the Vienna University of Technology and the Austrian Central Labour Inspectorate (Federal Ministry of Labour, Social Affairs and Consumer Protection). A part of this project comprised 277 full-scale drop tests at three different quarries in Austria, with key parameters of the rock fall trajectories recorded. The tested boulders ranged from 0.18 to 1.8 m in diameter and from 0.009 to 8.1 Mg in mass. The geology of these sites included strong rock belonging to igneous, metamorphic and volcanic types. In this paper the results of the tests are used for the calibration and validation of a new stochastic computer model. It is demonstrated that the error of the model (i.e. the difference between observed and simulated results) has a lognormal distribution. Selecting two parameters, advanced calibration techniques including the Markov chain Monte Carlo technique, maximum likelihood and the root mean square error (RMSE) are utilized to minimize the error. Validation of the model based on the cross-validation technique reveals that, in general, reasonable stochastic approximations of the rock fall trajectories are obtained in all dimensions, including runout, bounce heights and velocities. The approximations are compared to the measured data in terms of median, 95% and maximum values. The results of the comparisons indicate that approximate first-order predictions, using a single set of input parameters, are possible and can be used to aid practical hazard and risk assessment.
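    As a schematic illustration of the RMSE-minimizing calibration step described in this record, the sketch below grid-searches two restitution-style parameters of a toy runout model against synthetic observations. The model form, parameter names and numbers are all illustrative assumptions, not the paper's actual simulator or calibration pipeline.

```python
import numpy as np

def simulate_runout(rn, rt, drop_heights):
    """Toy stand-in for a rock fall simulator: runout grows with drop height
    via a normal-restitution term plus an additive tangential term
    (purely hypothetical model, only to illustrate the calibration loop)."""
    return drop_heights * (1.0 + 2.0 * rn) + 5.0 * rt

def calibrate(observed, drop_heights):
    """Grid search over (rn, rt) minimizing the RMSE between observed
    and simulated runout distances."""
    grid = np.linspace(0.1, 0.9, 81)
    best = None
    for rn in grid:
        for rt in grid:
            err = simulate_runout(rn, rt, drop_heights) - observed
            rmse = np.sqrt(np.mean(err ** 2))
            if best is None or rmse < best[0]:
                best = (rmse, rn, rt)
    return best

heights = np.array([10.0, 20.0, 30.0])          # drop heights, m (hypothetical)
observed = simulate_runout(0.4, 0.6, heights)   # synthetic "measurements"
rmse, rn, rt = calibrate(observed, heights)     # recovers rn = 0.4, rt = 0.6
```

    In practice the error distribution would be modeled (e.g. as lognormal, per the paper) rather than fitted exactly, and the grid search could be replaced by MCMC or maximum likelihood.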

  15. A Fokker-Planck based kinetic model for diatomic rarefied gas flows

    NASA Astrophysics Data System (ADS)

    Gorji, M. Hossein; Jenny, Patrick

    2013-06-01

    A Fokker-Planck based kinetic model is presented here, which also accounts for the internal energy modes characteristic of diatomic gas molecules. The model is based on a Fokker-Planck approximation of the Boltzmann equation for monatomic molecules, and phenomenological principles were employed for the derivation. It is shown that the model honors the equipartition theorem in equilibrium and fulfills the Landau-Teller relaxation equations for internal degrees of freedom. The objective behind this approximate kinetic model is accuracy at reasonably low computational cost. This can be achieved because the resulting stochastic differential equations are continuous in time; therefore, no collisions between the simulated particles have to be calculated. Moreover, because of the devised energy-conserving time integration scheme, it is not required to resolve the collisional scales, i.e., the mean collision time and the mean free path of molecules. This, of course, gives rise to much more efficient simulations with respect to other particle methods, especially the conventional direct simulation Monte Carlo (DSMC), for small and moderate Knudsen numbers. To examine the new approach, first the computational cost of the model was compared with that of DSMC, where significant speed-up could be obtained for small Knudsen numbers. Second, the structure of a high Mach number shock (in nitrogen) was studied, and the good performance of the model for such out-of-equilibrium conditions could be demonstrated. Finally, a hypersonic flow of nitrogen over a wedge was studied, where good agreement with respect to DSMC (with a level-to-level transition model) for vibrational and translational temperatures is shown.
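    The efficiency argument rests on particle velocities evolving under continuous-in-time stochastic differential equations rather than discrete binary collisions. A generic Langevin-type relaxation integrated with the Euler-Maruyama scheme conveys the idea; the drift and diffusion terms below are a textbook Ornstein-Uhlenbeck illustration, not the paper's actual Fokker-Planck model.

```python
import numpy as np

def euler_maruyama_velocity(v0, tau, temp, dt, n_steps, rng):
    """Integrate dv = -(v / tau) dt + sqrt(2 * temp / tau) dW with the
    Euler-Maruyama scheme: an Ornstein-Uhlenbeck relaxation whose
    stationary velocity variance is temp (illustrative units)."""
    v = np.array(v0, dtype=float)
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(v.shape)
        v += -(v / tau) * dt + np.sqrt(2.0 * temp / tau) * dW
    return v

rng = np.random.default_rng(0)
v = euler_maruyama_velocity(np.zeros(20000), tau=1.0, temp=1.0, dt=0.01,
                            n_steps=500, rng=rng)
# after several relaxation times, v is approximately N(0, temp)
```

    No inter-particle collision detection appears anywhere in the update, which is the source of the speed-up over DSMC at small Knudsen numbers.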

  16. Measurements of Unexpected Ozone Loss in a Nighttime Space Shuttle Exhaust Plume: Implications for Geo-Engineering Projects

    NASA Astrophysics Data System (ADS)

    Avallone, L. M.; Kalnajs, L. E.; Toohey, D. W.; Ross, M. N.

    2008-12-01

    Measurements of ozone, carbon dioxide and particulate water were made in the nighttime exhaust plume of the Space Shuttle (STS-116) on 9 December 2006 as part of the PUMA/WAVE campaign (Plume Ultrafast Measurements Acquisition/WB-57F Ascent Video Experiment). The launch took place from Kennedy Space Center at 8:47 pm (local time) on a moonless night and the WB-57F aircraft penetrated the shuttle plume approximately 25 minutes after launch in the lowermost stratosphere. Ozone loss is not predicted to occur in a nighttime Space Shuttle plume since it has long been assumed that the main ozone loss mechanism associated with rocket emissions requires solar photolysis to drive several chlorine-based catalytic cycles. However, the nighttime in situ observations show an unexpected loss of ozone of approximately 250 ppb in the evolving exhaust plume, inconsistent with model predictions. We will present the observations of the shuttle exhaust plume composition and the results of photochemical models of the Space Shuttle plume. We will show that models constrained by known rocket emission kinetics, including afterburning, and reasonable plume dispersion rates, based on the CO2 observations, cannot explain the observed ozone loss. We will propose potential explanations for the lack of agreement between models and the observations, and will discuss the implications of these explanations for our understanding of the composition of rocket emissions. We will describe the potential consequences of the observed ozone loss for long-term damage to the stratospheric ozone layer should geo-engineering projects based on rocket launches be employed.

  17. Decision blocks: A tool for automating decision making in CLIPS

    NASA Technical Reports Server (NTRS)

    Eick, Christoph F.; Mehta, Nikhil N.

    1991-01-01

    The human capability of making complex decisions is one of the most fascinating facets of human intelligence, especially if vague, judgemental, default or uncertain knowledge is involved. Unfortunately, most existing rule-based forward chaining languages are not very suitable for simulating this aspect of human intelligence, because of their lack of support for the approximate reasoning techniques needed for this task and their lack of specific constructs to facilitate the coding of frequently recurring decision blocks. To provide better support for the design and implementation of rule-based decision support systems, a language called BIRBAL, defined on top of CLIPS for the specification of decision blocks, is introduced. Empirical experiments comparing the length of CLIPS programs with the corresponding BIRBAL programs for three different applications are surveyed. The results of these experiments suggest that for decision-making intensive applications, a CLIPS program tends to be about three times longer than the corresponding BIRBAL program.

  18. The Forecast Interpretation Tool—a Monte Carlo technique for blending climatic distributions with probabilistic forecasts

    USGS Publications Warehouse

    Husak, Gregory J.; Michaelsen, Joel; Kyriakidis, P.; Verdin, James P.; Funk, Chris; Galu, Gideon

    2011-01-01

    Probabilistic forecasts are produced by a variety of outlets to help predict rainfall and other meteorological events for periods of 1 month or more. Such forecasts are expressed as probabilities of a rainfall event, e.g. being in the upper, middle, or lower third of the relevant distribution of rainfall in the region. The impact of these forecasts on the expectation for the event is not always clear or easily conveyed. This article proposes a technique based on Monte Carlo simulation for adjusting existing climatologic statistical parameters to match forecast information, resulting in new parameters defining the probability of events for the forecast interval. The resulting parameters are shown to approximate the forecasts with reasonable accuracy. To show the value of the technique as an application for seasonal rainfall, it is used with the consensus forecast developed for the Greater Horn of Africa for the 2009 March-April-May season. An alternative, analytical approach is also proposed and discussed in comparison with the first, simulation-based technique.
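    A minimal sketch of such a Monte Carlo blending step, under the simplifying assumption of a normal climatology and tercile forecast probabilities; the distributional choice and all numbers are illustrative, not the authors' operational setup.

```python
import numpy as np

def blend_forecast(mu, sigma, probs, n=30000, rng=None):
    """Monte Carlo blend of a normal climatology N(mu, sigma) with tercile
    forecast probabilities probs = (P_below, P_normal, P_above): draw
    climatological samples, keep them in proportion to the forecast
    probability of their tercile, and refit the distribution parameters."""
    rng = rng or np.random.default_rng(1)
    t_lo, t_hi = mu + sigma * np.array([-0.4307, 0.4307])  # terciles of N(mu, sigma)
    draws = rng.normal(mu, sigma, size=10 * n)
    tercile = np.digitize(draws, [t_lo, t_hi])             # 0/1/2 = below/normal/above
    kept = np.concatenate([rng.choice(draws[tercile == k], size=int(n * probs[k]))
                           for k in range(3)])
    return kept.mean(), kept.std()

# a wet forecast (20% below / 30% normal / 50% above) shifts the mean upward
m, s = blend_forecast(mu=100.0, sigma=25.0, probs=(0.2, 0.3, 0.5))
```

    The refitted (m, s) then define the adjusted distribution for the forecast interval, from which event probabilities can be read off.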

  19. Young's moduli of carbon materials investigated by various classical molecular dynamics schemes

    NASA Astrophysics Data System (ADS)

    Gayk, Florian; Ehrens, Julian; Heitmann, Tjark; Vorndamme, Patrick; Mrugalla, Andreas; Schnack, Jürgen

    2018-05-01

    For many applications classical carbon potentials together with classical molecular dynamics are employed to calculate structures and physical properties of carbon-based materials where quantum mechanical methods fail, whether due to excessive system size, irregular structure or long-time dynamics. Although such potentials, as for instance implemented in LAMMPS, yield reasonably accurate bond lengths and angles for several carbon materials such as graphene, it is not clear how accurate they are in terms of mechanical properties such as Young's moduli. We performed large-scale classical molecular dynamics investigations of three carbon-based materials using the various potentials implemented in LAMMPS as well as the EDIP potential of Marks. We show how the Young's moduli vary with the classical potentials and compare to experimental results. Since classical descriptions of carbon are bound to be approximations, it is not astonishing that different realizations yield differing results. One should therefore carefully check for which observables a certain potential is suited. Our aim is to contribute to such a clarification.
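    Although this record does not state the extraction procedure, Young's modulus from such simulations is commonly estimated as the slope of the stress-strain curve in the small-strain (elastic) regime. A sketch with synthetic data, all numbers hypothetical:

```python
import numpy as np

def youngs_modulus(strain, stress, max_strain=0.01):
    """Estimate Young's modulus as the slope of a linear fit to the
    stress-strain curve restricted to the small-strain (elastic) regime."""
    mask = np.abs(strain) <= max_strain
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# synthetic curve: 1000 GPa linear response plus a weak softening cubic term
eps = np.linspace(0.0, 0.03, 31)
sigma = 1000.0 * eps - 5000.0 * eps ** 3    # stress in GPa (hypothetical numbers)
E = youngs_modulus(eps, sigma)              # close to 1000 GPa
```

    Restricting the fit window matters: including the nonlinear portion of the curve systematically biases the modulus, which is one way different potentials (and different analyses) can yield differing values.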

  20. Field Evaluations of Tracking/Locating Technologies for Prevention of Missing Incidents.

    PubMed

    Bulat, Tatjana; Kerrigan, Michael V; Rowe, Meredeth; Kearns, William; Craighead, Jeffrey D; Ramaiah, Padmaja

    2016-09-01

    Persons with dementia are at risk of a missing incident, which is defined as an instance in which the person's whereabouts are unknown to the caregiver and the individual is not in an expected location. Since it is critical to determine the missing person's location as quickly as possible, we evaluated whether commercially available tracking technologies can assist in a rapid recovery. This study examined 7 commercially available tracking devices, 3 radio frequency (RF) based and 4 global positioning system (GPS) based, employing realistic tracking scenarios. Outcome measures were time to discovery and degree of deviation from a straight intercept course. Across all scenarios tested, GPS devices were found to be approximately twice as efficient as the RF devices in locating a "missing person." While the RF devices showed reasonable performance at close proximity, the GPS devices were found to be more appropriate overall for tracking/locating missing persons over unknown and larger distances. © The Author(s) 2016.

  1. A 96-well-plate-based optical method for the quantitative and qualitative evaluation of Pseudomonas aeruginosa biofilm formation and its application to susceptibility testing.

    PubMed

    Müsken, Mathias; Di Fiore, Stefano; Römling, Ute; Häussler, Susanne

    2010-08-01

    A major reason for bacterial persistence during chronic infections is the survival of bacteria within biofilm structures, which protect cells from environmental stresses, host immune responses and antimicrobial therapy. Thus, there is concern that laboratory methods developed to measure the antibiotic susceptibility of planktonic bacteria may not be relevant to chronic biofilm infections, and it has been suggested that alternative methods should test antibiotic susceptibility within a biofilm. In this paper, we describe a fast and reliable protocol for using 96-well microtiter plates for the formation of Pseudomonas aeruginosa biofilms; the method is easily adaptable for antimicrobial susceptibility testing. This method is based on bacterial viability staining in combination with automated confocal laser scanning microscopy. The procedure simplifies qualitative and quantitative evaluation of biofilms and has proven to be effective for standardized determination of antibiotic efficiency on P. aeruginosa biofilms. The protocol can be performed within approximately 60 h.

  2. Benzimidazobenzothiazole-based highly-efficient thermally activated delayed fluorescence emitters for organic light-emitting diodes: A quantum-chemical TD-DFT study

    NASA Astrophysics Data System (ADS)

    Zhu, Qiuling; Wen, Keke; Feng, Songyan; Guo, Xugeng; Zhang, Jinglai

    2018-03-01

    Based upon two thermally activated delayed fluorescence (TADF) emitters 1 and 2, compounds 3-6 have been designed by replacing the carbazole group with the bis(4-biphenyl)amine one (3 and 4) and introducing the electron-withdrawing CF3 group into the acceptor unit of 3 and 4 (5 and 6). It is found that the present calculations predict comparable but relatively large energy differences (approximately 0.5 eV) between the lowest singlet S1 and triplet T1 states (ΔEST) for the six targeted compounds. In order to explain the highly efficient TADF behavior observed in compounds 1 and 2, the "triplet reservoir" mechanism has been proposed. In addition, the fluorescence rates of all six compounds are very large, of the order of 10^7-10^8 s^-1. According to the present calculations, it is a reasonable assumption that the newly designed compounds 3-6 could be considered as potential TADF emitters, which needs to be further verified by experimental techniques.

  3. A variation-perturbation method for atomic and molecular interactions. I - Theory. II - The interaction potential and van der Waals molecule for Ne-HF

    NASA Astrophysics Data System (ADS)

    Gallup, G. A.; Gerratt, J.

    1985-09-01

    The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been calculated directly from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure also includes the optimization of excited orbitals and an approximation of atomic integrals and Hamiltonian matrix elements.

  4. Three-Dimensional MHD Modeling of The Solar Corona and Solar Wind: Comparison with The Wang-Sheeley Model

    NASA Technical Reports Server (NTRS)

    Usmanov, A. V.; Goldstein, M. L.

    2003-01-01

    We present simulation results from a tilted-dipole steady-state MHD model of the solar corona and solar wind, and compare the output from our model with the Wang-Sheeley model, which relates the divergence rate of magnetic flux tubes near the Sun (inferred from solar magnetograms) to the solar wind speed observed near Earth and at Ulysses. The boundary conditions in our model are specified at the coronal base, and our simulation region extends out to 10 AU. We assumed that a flux of Alfven waves with an amplitude of 35 km/s emanates from the Sun and provides additional heating and acceleration for the coronal outflow in the open field regions. The waves are treated in the WKB approximation. The incorporation of wave acceleration allows us to reproduce the fast wind measurements obtained by Ulysses, while preserving reasonable agreement with plasma densities typically found at the coronal base. We find that our simulation results agree well with Wang and Sheeley's empirical model.

  5. Spin Hartree-Fock approach to studying quantum Heisenberg antiferromagnets in low dimensions

    NASA Astrophysics Data System (ADS)

    Werth, A.; Kopietz, P.; Tsyplyatyev, O.

    2018-05-01

    We construct a new mean-field theory for a quantum (spin-1/2) Heisenberg antiferromagnet in one (1D) and two (2D) dimensions using a Hartree-Fock decoupling of the four-point correlation functions. We show that the solution to the self-consistency equations based on two-point correlation functions does not produce any unphysical finite-temperature phase transition, in accord with the Mermin-Wagner theorem, unlike the common approach based on the mean-field equation for the order parameter. The next-neighbor spin-spin correlation functions, calculated within this approach, reproduce closely the strong renormalization by quantum fluctuations obtained via a Bethe ansatz in 1D and a small renormalization of the classical antiferromagnetic state in 2D. The heat capacity approximates with reasonable accuracy the full Bethe ansatz result at all temperatures in 1D. In 2D, we obtain a reduction of the peak height in the heat capacity at a finite temperature that is accessible by high-order 1 /T expansions.

  6. Learning and tuning fuzzy logic controllers through reinforcements

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap

    1992-01-01

    A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcements, such as a binary failure signal, are available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.

  7. Kernel Temporal Differences for Neural Decoding

    PubMed Central

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
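    To convey the flavor of kernel-based temporal-difference learning, the sketch below implements a bare-bones kernel TD(0) value estimator with a Gaussian kernel on a small deterministic chain. It is a deliberate simplification for illustration, not the authors' full KTD(λ) algorithm with eligibility traces or Q-learning.

```python
import math

def gauss_kernel(x, y, width=0.5):
    return math.exp(-((x - y) ** 2) / (2.0 * width ** 2))

class KernelTD:
    """Bare-bones kernel TD(0): V(s) = sum_i alpha_i * k(c_i, s), adding one
    kernel center per observed transition (a sketch of the idea only)."""
    def __init__(self, gamma=0.9, lr=0.1):
        self.gamma, self.lr = gamma, lr
        self.centers, self.alphas = [], []

    def value(self, s):
        return sum(a * gauss_kernel(c, s)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, s, reward, s_next):
        td_error = reward + self.gamma * self.value(s_next) - self.value(s)
        self.centers.append(s)              # grow the kernel dictionary
        self.alphas.append(self.lr * td_error)

agent = KernelTD()
# chain over states 0..4: deterministic right moves, reward 1 on reaching state 4
for _ in range(200):
    for s in range(4):
        agent.update(float(s), 1.0 if s == 3 else 0.0, float(s + 1))
```

    The learned values increase toward the rewarded end of the chain; practical variants prune or sparsify the growing dictionary to keep the per-step cost bounded.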

  8. Long-range corrected density functional through the density matrix expansion based semilocal exchange hole.

    PubMed

    Patra, Bikash; Jana, Subrata; Samal, Prasanjit

    2018-03-28

    The exchange hole, which is one of the principal constituents of the density functional formalism, can be used to design accurate range-separated hybrid functionals in association with appropriate correlation. In this regard, the exchange hole derived from the density matrix expansion has gained attention due to its fulfillment of some of the desired exact constraints. Thus, the new long-range corrected density functional proposed here combines a meta generalized gradient approximation level exchange functional, designed from the density matrix expansion based exchange hole, with ab initio Hartree-Fock exchange through range separation of the Coulomb interaction operator using the standard error function technique. Then, in association with the Lee-Yang-Parr correlation functional, the assessment and benchmarking of the newly constructed range-separated functional with various well-known test sets shows its reasonable performance for a broad range of molecular properties, such as thermochemistry, non-covalent interactions and barrier heights of chemical reactions.
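    The "standard error function technique" referred to here is the usual split of the Coulomb operator into complementary short- and long-range pieces, which can be checked numerically (the range-separation parameter value below is an arbitrary illustration):

```python
import math

def coulomb_split(r, mu=0.4):
    """Split the Coulomb operator with the standard error function scheme:
    1/r = erfc(mu*r)/r  (short range, treated by the semilocal exchange)
        + erf(mu*r)/r   (long range, treated by Hartree-Fock exchange)."""
    return math.erfc(mu * r) / r, math.erf(mu * r) / r

short_range, long_range = coulomb_split(2.0, mu=0.4)
# the two pieces recombine exactly to the bare operator 1/r
```

    Because erf(mu*r) → 1 at large r, the long-range piece carries the full Coulomb tail, which is what restores the correct asymptotic exchange behavior in long-range corrected functionals.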

  9. Prediction of brittleness based on anisotropic rock physics model for kerogen-rich shale

    NASA Astrophysics Data System (ADS)

    Qian, Ke-Ran; He, Zhi-Liang; Chen, Ye-Quan; Liu, Xi-Wu; Li, Xiang-Yang

    2017-12-01

    The construction of a shale rock physics model and the selection of an appropriate brittleness index (BI) are two significant steps that can influence the accuracy of brittleness prediction. On the one hand, the existing models of kerogen-rich shale are controversial, so a reasonable rock physics model needs to be built. On the other hand, several types of equations already exist for predicting the BI, whose feasibility needs to be carefully considered. This study constructed a kerogen-rich rock physics model by performing the self-consistent approximation and the differential effective medium theory to model intercoupled clay and kerogen mixtures. The feasibility of our model was confirmed by comparison with classical models, showing better accuracy. Templates were constructed based on our model to link physical properties and the BI. Different equations for the BI had different sensitivities, making them suitable for different types of formations. Equations based on Young's modulus were sensitive to variations in lithology, while those using the Lamé coefficients were sensitive to porosity and pore fluids. Physical information must be considered to improve brittleness prediction.

  10. [Study on suitable distribution areas of Grifola umbellate in Sichuan province based on remote sensing and GIS].

    PubMed

    Zhang, You; Wang, Juan; Zhang, Jie; Peng, Wen-Fu; Xu, Xin-Liang; Fang, Qing-Mao

    2016-09-01

    Grifola umbellate is an important medicinal material in China with very high medicinal value. This study analyzed the suitable distribution areas of G. umbellate and provided a scientific basis for determining G. umbellate planting regions and planning production distribution reasonably. The suitable distribution areas of G. umbellate in Sichuan province were researched based on TM, ETM+, and DEM data. The key ecological factors that affect the growth of G. umbellate were extracted, including elevation, slope, aspect, average annual temperature, average annual precipitation, forest information and soil information, using remote sensing and GIS techniques combined with field research data. The results showed that the G. umbellate resources in Sichuan province were mainly distributed in Pingwu, Beichuan, Li County, Yanyuan, Xichang, Dechang, Yanbian, Miyi, Huidong, Panzhihua and so on; the suitable distribution area is approximately 276.2144 km², accounting for about 0.1433% of the total area. According to the related literature and the field investigation, the suitable distribution areas based on RS and GIS corresponded with the actual distribution areas of G. umbellate. Copyright© by the Chinese Pharmaceutical Association.

  11. Application of two direct runoff prediction methods in Puerto Rico

    USGS Publications Warehouse

    Sepulveda, N.

    1997-01-01

    Two methods for predicting direct runoff from rainfall data were applied to several basins, and the resulting hydrographs were compared with measured values. The first method uses a geomorphology-based unit hydrograph to predict direct runoff through its convolution with the excess rainfall hyetograph. The second method shows how the hydraulic routing flow equation resulting from a kinematic wave approximation is solved using a spectral method based on the matrix representation of the spatial derivative with Chebyshev collocation and a fourth-order Runge-Kutta time discretization scheme. The calibrated Green-Ampt (GA) infiltration parameters are obtained by minimizing the sum, over several rainfall events, of absolute differences between the total excess rainfall volume computed from the GA equations and the total direct runoff volume computed from a hydrograph separation technique. The improvement made in predicting direct runoff using a geomorphology-based unit hydrograph with the ephemeral and perennial stream network instead of the strictly perennial stream network is negligible. The hydraulic routing scheme presented here is highly accurate in predicting the magnitude and time of the hydrograph peak, although the much faster unit hydrograph method also yields reasonable results.
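    The first method's convolution step can be sketched directly; the unit-hydrograph ordinates and rainfall depths below are hypothetical, not the study's calibrated values.

```python
import numpy as np

def direct_runoff(unit_hydrograph, excess_rainfall):
    """Direct-runoff hydrograph as the discrete convolution of the unit
    hydrograph ordinates (flow response per unit excess depth) with the
    excess rainfall hyetograph (depth per time step)."""
    return np.convolve(excess_rainfall, unit_hydrograph)

uh = np.array([0.1, 0.4, 0.3, 0.15, 0.05])   # hypothetical UH ordinates, m^3/s per mm
rain = np.array([2.0, 5.0, 1.0])             # excess rainfall, mm per time step
q = direct_runoff(uh, rain)
# total runoff volume scales as (sum of UH ordinates) * (total excess rainfall)
```

    By linearity of convolution, mass is conserved: the sum of the output hydrograph equals the product of the two input sums.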

  12. Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.

    PubMed

    Kahl, Lorenz; Hofmann, Ulrich G

    2016-11-01

    This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was obtained in an experiment from upper arm contractions at three different load levels in twelve volunteers. The fatigue detection algorithms mean frequency (MNF), spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn) and recurrence quantification analysis (RQA%DET) were calculated. The resulting fatigue signals were compared with respect to the disturbances incorporated in fatiguing situations as well as the possibility of differentiating the load levels based on the fatigue signals. Furthermore, we investigated the influence of the electrode locations on the fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small number of subjects in this study, significant differences could not be found. In terms of disturbances, the SMR algorithm showed a slight tendency to outperform the others. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
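    Of the compared measures, the mean frequency (MNF) is the simplest: the power-spectrum-weighted average frequency, which drops as fatigue shifts sEMG power toward lower frequencies. A sketch with synthetic sinusoids standing in for sEMG (illustrative only; real sEMG is broadband and noisy):

```python
import numpy as np

def mean_frequency(signal, fs):
    """Mean frequency (MNF): the power-spectrum-weighted average frequency,
    a classic spectral indicator of muscle fatigue in sEMG."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return np.sum(freqs * power) / np.sum(power)

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
fresh = np.sin(2 * np.pi * 120.0 * t)      # higher-frequency content
fatigued = np.sin(2 * np.pi * 70.0 * t)    # fatigue shifts power downward
```

    A fatigue signal is then obtained by computing the MNF over successive windows of the contraction and tracking its downward trend.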

  13. Should there be a target level of docosahexaenoic acid in breast milk?

    PubMed

    Jackson, Kristina Harris; Harris, William S

    2016-03-01

    This article examines the evidence for and against establishing a target level of docosahexaenoic acid (DHA) in breast milk. Two target levels for milk DHA have been recently proposed. One (∼0.3% of milk fatty acids) was based on milk DHA levels achieved in women consuming the amount of DHA recommended by the American Academy of Pediatrics for pregnant and lactating women (at least 200 mg DHA/day). Another (∼1.0%) was based on biomarker studies of populations with differing lifelong intakes of fish. Populations or research cohorts with milk DHA levels of 1.0% are associated with intakes that allow both the mother and infant to maintain relatively high DHA levels throughout lactation. Lower milk DHA levels may signal suboptimal maternal stores and possibly suboptimal infant intakes. Based on the current data, a reasonable milk DHA target appears to be approximately 0.3%, which is about the worldwide average. Although this may not be the 'optimal' level (which remains to be defined), it is clearly an improvement over the currently low milk DHA levels (∼0.2%) seen in many Western populations.

  14. Application of fuzzy logic-neural network based reinforcement learning to proximity and docking operations: Translational controller results

    NASA Technical Reports Server (NTRS)

    Jani, Yashvant

    1992-01-01

    The reinforcement learning techniques developed at Ames Research Center are being applied to proximity and docking operations using the Shuttle and Solar Maximum Mission (SMM) satellite simulation. Because these fuzzy learning techniques are implemented within the Approximate Reasoning-based Intelligent Control (ARIC) architecture, the two terms are used interchangeably here. This activity is carried out in the Software Technology Laboratory utilizing the Orbital Operations Simulator (OOS). This report is deliverable D3 in our project activity and provides the test results of the fuzzy learning translational controller. The report is organized in six sections. Based on our experience and analysis with the attitude controller, we have modified the basic configuration of the reinforcement learning algorithm in ARIC as described in section 2. The Shuttle translational controller and its implementation in the fuzzy learning architecture are described in section 3. Two test cases that we have performed are described in section 4. Our results and conclusions are discussed in section 5, and section 6 provides future plans and a summary of the project.

  15. Middle School Educators Describe the Process of Revitalization after Having Experienced Burnout

    ERIC Educational Resources Information Center

    Terreros, Angela W.

    2017-01-01

    The need to understand burnout and revitalization among educators is increasing. In the United States, approximately 46% of all new educators leave the profession within the first five years, and many blame burnout as the reason they leave the profession. While many researchers study to understand the burnout phenomenon, few press on to understand…

  16. Influence of Structured Group Experience on Moral Judgments of Preschoolers.

    ERIC Educational Resources Information Center

    Moran, James D., III; O'Brien, Gayle

    This study examines the influence of social experiences received in a group-care setting on the development of moral reasoning in young children. Thirty-five children, approximately 4 years old, participated in the study. Twenty of the subjects attended day care or nursery school; the remaining 15 did not attend any group-care programs. Each child…

  17. Activities Selected from the High School Geography Project.

    ERIC Educational Resources Information Center

    Natoli, Salvatore J., Ed.; And Others

    Out of approximately 50 activities which were, for a variety of reasons, not included in the final version of the High School Geography Project course, Geography in an Urban Age, the HSGP staff selected eight which would be useful in many secondary school classrooms. The activities included here are: 1) Operation Bigger Beef (on themes of cultural…

  18. Developing MOOCs to Narrow the College Readiness Gap: Challenges and Recommendations for a Writing Course

    ERIC Educational Resources Information Center

    Bandi-Rao, Shoba; Devers, Christopher J.

    2015-01-01

    Massive Open Online Courses (MOOCs) have demonstrated the potential to deliver quality and cost-effective course materials to large numbers of students. Approximately 60% of first-year students at community colleges are underprepared for college-level coursework. One reason for low graduation rates is a lack of overall college readiness.…

  19. Calculation of Local Volume Factors for Relascope Cruising

    Treesearch

    Charles B. Briscoe

    1957-01-01

    In these days of climbing stumpage prices it is frequently desirable to attain more precision from a relascope cruise than is possible using ready-made volume factors. Like any factors made to be approximately applicable over a wide range of conditions, volume factors may give very misleading results under certain local conditions. For this reason it is desirable to...

  20. Michael's Informal Test of Student Ability (M.I.T.O.S.A.). Tester's Manual.

    ERIC Educational Resources Information Center

    Grafius, Thomas M.

    Michael's Informal Test of Student Ability (MITOSA) is a diagnostic evaluative tool for adult students designed to test nine skills abilities in adult students functioning below a tenth grade level. The nine test sections are approximate reading level, understanding of basic math concepts and symbols, general thinking/reasoning ability, eye-hand…

  1. Family Planning and Child Survival: The Role of Reproductive Factors in Infant and Child Mortality.

    ERIC Educational Resources Information Center

    Conly, Shanti R.

    This report summarizes the evidence that family planning can reduce deaths of children under 5 years of age at a reasonable cost. The report also: (1) identifies the major reproductive factors associated with child mortality; (2) estimates the approximate reduction in child mortality that could be achieved through improved childbearing patterns;…

  2. What the West Can Learn from Islam

    ERIC Educational Resources Information Center

    Ramadan, Tariq

    2007-01-01

    In this article, the author discusses the situation of Muslims in Western countries such as the United States and recounts his experience of having his visa revoked in late July 2004. The reason was that he had made donations totaling approximately $900 to a Swiss Palestinian-support group that is now on the American…

  3. An Investigation of Harvard Dropouts. Final Report.

    ERIC Educational Resources Information Center

    Nicholi, Armand M., II

    Approximately half of the 7,000,000 students currently enrolled in college will fail to complete their education. This study investigates the causes of this high attrition rate by examining the records of 1,454 undergraduates who dropped out of Harvard College for various reasons over a 5-year period. Sources of the data were: (1) Registrar's…

  4. Establishing an Online Community of Inquiry at the Distance Education Centre, Victoria

    ERIC Educational Resources Information Center

    Jackson, Luke C.; Jackson, Alun C.; Chambers, Dianne

    2013-01-01

    This pilot intervention focused on three courses that were redesigned to utilize the online environment to establish an online community of inquiry (CoI). The setting for this research study was the Distance Education Centre, Victoria (DECV), an Australian co-educational school with approximately 3000 students who, for a variety of reasons, are…

  5. Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 1

    NASA Technical Reports Server (NTRS)

    Culbert, Christopher J. (Editor)

    1993-01-01

    Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake. The workshop was held June 1-3, 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control, and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.

  6. Saturn's Rings, the Yarkovsky Effects, and the Ring of Fire

    NASA Technical Reports Server (NTRS)

    Rubincam, David Parry

    2004-01-01

    The dimensions of Saturn's A and B rings may be determined by the seasonal Yarkovsky effect and the Yarkovsky-Schach effect; the two effects confine the rings between approximately 1.68 and approximately 2.23 Saturn radii, in reasonable agreement with the observed values of 1.525 and 2.267. The C ring may be sparsely populated because its particles are transients on their way to Saturn; the infall may create a luminous Ring of Fire around Saturn's equator. The ring system may be young: in the past heat flow from Saturn's interior much above its present value would not permit rings to exist.

  7. Accurate and Efficient Approximation to the Optimized Effective Potential for Exchange

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Kananenka, Alexei A.; Staroverov, Viktor N.

    2013-07-01

    We devise an efficient practical method for computing the Kohn-Sham exchange-correlation potential corresponding to a Hartree-Fock electron density. This potential is almost indistinguishable from the exact-exchange optimized effective potential (OEP) and, when used as an approximation to the OEP, is vastly better than all existing models. Using our method one can obtain unambiguous, nearly exact OEPs for any reasonable finite one-electron basis set at the same low cost as the Krieger-Li-Iafrate and Becke-Johnson potentials. For all practical purposes, this solves the long-standing problem of black-box construction of OEPs in exact-exchange calculations.

  8. Multimodal far-field acoustic radiation pattern: An approximate equation

    NASA Technical Reports Server (NTRS)

    Rice, E. J.

    1977-01-01

    The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.

  9. Stress generation in thermally grown oxide films. [oxide scale spalling from superalloy substrates

    NASA Technical Reports Server (NTRS)

    Kumnick, A. J.; Ebert, L. J.

    1981-01-01

    A three dimensional finite element analysis was conducted, using the ANSYS computer program, of the stress state in a thin oxide film thermally formed on a rectangular piece of NiCrAl alloy. The analytical results indicate a very high compressive stress in the lateral directions of the film (approximately 6200 MPa), and tensile stresses in the metal substrate that ranged from essentially zero to about 55 MPa. It was found further that the intensity of the analytically determined average stresses could be approximated reasonably well by the modification of an equation developed previously by Oxx for stresses induced into bodies by thermal gradients.

  10. Parallel implementation of approximate atomistic models of the AMOEBA polarizable model

    NASA Astrophysics Data System (ADS)

    Demerdash, Omar; Head-Gordon, Teresa

    2016-11-01

    In this work we present a replicated-data hybrid OpenMP/MPI implementation of a hierarchical progression of approximate classical polarizable models that yields speedups of up to ∼10 compared to the standard OpenMP implementation of the exact parent AMOEBA polarizable model. In addition, our parallel implementation exhibits reasonable weak and strong scaling. The resulting parallel software will prove useful for those interested in how molecular properties converge in the condensed phase with respect to the many-body expansion (MBE); it also provides a fruitful test bed for exploring different electrostatic embedding schemes and offers an interesting possibility for future exascale computing paradigms.

  11. Differential cross sections for electron capture in p + H2 collisions

    NASA Astrophysics Data System (ADS)

    Igarashi, Akinori; Gulyás, Laszlo; Ohsaki, Akihiko

    2017-11-01

    Projectile angular distributions for electron capture in p + H2 collisions at 25 and 75 keV impact energies, measured by Sharma et al. [Phys. Rev. A 86, 022706 (2012)], are calculated using the CDW-EIS and eikonal approximations. Angular distributions evaluated in the CDW-EIS approximation are in good agreement with the experimental data measured for coherent projectile beams. Incoherent projectile scatterings are also considered by folding the coherent angular distributions over the transverse momentum distribution of the projectile wave-packet. Reasonable agreement with the measurements is obtained only with coherence parameters very different from those reported in the experiments.

  12. Propagating Qualitative Values Through Quantitative Equations

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    1992-01-01

    In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values, represented as interval values, through quantitative equations. Previous research has produced exponential-time algorithms for approximate solution of the problem, which may not meet the stringent requirements of many real-time applications. This paper advances the state of the art by producing a linear-time algorithm that can propagate a qualitative value through a class of complex quantitative equations exactly and through arbitrary algebraic expressions approximately. The algorithm was found applicable to a Space Shuttle Reaction Control System model.
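The core operation the abstract describes, pushing an interval-valued qualitative quantity through a quantitative equation, can be sketched with ordinary interval arithmetic; the variable names and the example equation below are illustrative, not from the paper:

```python
# Interval arithmetic for propagating qualitative (interval) values through
# quantitative expressions. Intervals are (lo, hi) tuples.

def iadd(a, b):
    """Interval sum: endpoints add directly."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval product: the extremes lie among the endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Illustrative equation: if thrust F lies in [8, 10] and burn time t in
# [1.5, 2.0], the impulse J = F * t lies in the propagated interval.
F = (8.0, 10.0)
t = (1.5, 2.0)
J = imul(F, t)
print(J)  # (12.0, 20.0)
```

Each arithmetic node is handled in constant time, which is the flavor of the linear-time propagation the abstract claims for its class of equations.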

  13. Relational Reasoning about Numbers and Operations--Foundation for Calculation Strategy Use in Multi-Digit Multiplication and Division

    ERIC Educational Resources Information Center

    Schulz, Andreas

    2018-01-01

    Theoretical analysis of whole number-based calculation strategies and digit-based algorithms for multi-digit multiplication and division reveals that strategy use includes two kinds of reasoning: reasoning about the relations between numbers and reasoning about the relations between operations. In contrast, algorithms aim to reduce the necessary…

  14. On Nash-Equilibria of Approximation-Stable Games

    NASA Astrophysics Data System (ADS)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ε-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ε-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We further show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show that all (ε, Δ) approximation-stable games must have an ε-equilibrium of support O((Δ^{2-o(1)}/ε²) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ε²) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. In addition, we give a polynomial-time algorithm for the case that Δ and ε are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.
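The ε-approximate equilibrium used above can be made concrete with a regret check: a strategy pair is an ε-equilibrium if neither player can gain more than ε by deviating. A sketch for 2-player games, using matching pennies from the abstract (illustrative code, not from the paper):

```python
# Regret of the row player: best pure-response payoff minus current payoff.
# A pair (row, col) is an epsilon-equilibrium iff both players' regrets <= eps.

def row_regret(payoff, row, col):
    n_rows, n_cols = len(payoff), len(payoff[0])
    expected = sum(row[i] * col[j] * payoff[i][j]
                   for i in range(n_rows) for j in range(n_cols))
    best_pure = max(sum(col[j] * payoff[i][j] for j in range(n_cols))
                    for i in range(n_rows))
    return best_pure - expected

# Matching pennies: row player's payoffs (the column player receives the negation).
mp = [[1, -1], [-1, 1]]
uniform = [0.5, 0.5]
print(row_regret(mp, uniform, uniform))  # 0.0 -- uniform play is an exact equilibrium
```

The column player's regret follows from the same function applied to the negated, transposed matrix. In an approximation-stable game, any pair passing this check with small ε must lie within distance Δ of a true equilibrium.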

  15. The use of multiple models in case-based diagnosis

    NASA Technical Reports Server (NTRS)

    Karamouzis, Stamos T.; Feyock, Stefan

    1993-01-01

    The work described in this paper has as its goal the integration of a number of reasoning techniques into a unified intelligent information system that will aid flight crews with malfunction diagnosis and prognostication. One of these approaches involves using the extensive archive of information contained in aircraft accident reports along with various models of the aircraft as the basis for case-based reasoning about malfunctions. Case-based reasoning draws conclusions on the basis of similarities between the present situation and prior experience. We maintain that the ability of a CBR program to reason about physical systems is significantly enhanced by the addition to the CBR program of various models. This paper describes the diagnostic concepts implemented in a prototypical case-based reasoner that operates in the domain of in-flight fault diagnosis, the various models used in conjunction with the reasoner's CBR component, and results from a preliminary evaluation.

  16. E-Beam Capture Aid Drawing Based Modelling on Cell Biology

    NASA Astrophysics Data System (ADS)

    Hidayat, T.; Rahmat, A.; Redjeki, S.; Rahman, T.

    2017-09-01

    The objective of this research is to find out how far a Drawing-based Modeling approach assisted with E-Beam Capture can support students' scientific reasoning skills. The research design used is a pre-test and post-test design. Data on scientific reasoning skills were collected by giving multiple-choice questions before and after the lesson and were analyzed using a scientific reasoning assessment rubric. The results show an improvement in students' scientific reasoning for every indicator: 2 students achieved high scores in generativity, 3 in elaboration reasoning, 4 in justification, 3 in explanation, 3 in logical coherency, and 2 in synthesis. Explanation reasoning had the highest number of students with high scores (20 in the pre-test and 23 in the post-test), and synthesis reasoning the lowest (1 student in the pre-test and 3 in the post-test). The research leads to the conclusion that the Drawing-based Modeling approach assisted with E-Beam Capture could not yet support students' scientific reasoning skills comprehensively.

  17. Reasons Why Post-Trial Access to Trial Drugs Should, or Need not be Ensured to Research Participants: A Systematic Review

    PubMed Central

    Sofaer, Neema; Strech, Daniel

    2011-01-01

    Background: Researchers and sponsors increasingly confront the issue of whether participants in a clinical trial should have post-trial access (PTA) to the trial drug. Legislation and guidelines are inconsistent, ambiguous or silent about many aspects of PTA. Recent research highlights the potential importance of systematic reviews (SRs) of reason-based literatures in informing decision-making in medicine, medical research and health policy. Purpose: To systematically review reasons why drug trial participants should, or need not, be ensured PTA to the trial drug, and the uses of such reasons. Data sources: Databases in science/medicine, law and ethics, thesis databases, bibliographies, research ethics books and included publications' notes/bibliographies. Publication selection: A publication was included if it included a reason as above. See article for detailed inclusion conditions. Data extraction and analysis: Two reviewers extracted and analyzed data on publications and reasons. Results: Of 2060 publications identified, 75 were included. These mentioned reasons based on morality, legality, interests/incentives, or practicality, comprising 36 broad (235 narrow) types of reason. None of the included publications, which included informal reviews and reports by official bodies, mentioned more than 22 broad (59 narrow) types. For many reasons, publications differed about the reason's interpretation, implications and/or persuasiveness. Publications also differed regarding costs, feasibility and legality of PTA. Limitations: Reason types could be applied differently. The quality of reasons was not measured. Conclusion: This review captured a greater variety of reasons and of their uses than any included publication. Decisions based on informal reviews or sub-sets of literature are likely to be biased. Research is needed on PTA ethics, costs, feasibility and legality and on assessing the quality of reason-based literature. PMID:21754950

  18. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support.

    PubMed

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    Capturing complete medical knowledge is challenging-often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mechanisms apply patterns from human thought processes, such as generalization, similarity and interpolation, based on attributional, hierarchical, and relational knowledge. Plausible reasoning mechanisms include inductive reasoning, which generalizes the commonalities among the data to induce new rules, and analogical reasoning, which is guided by data similarities to infer new facts. By further leveraging rich, biomedical Semantic Web ontologies to represent medical knowledge, both known and tentative, we increase the accuracy and expressivity of plausible reasoning, and cope with issues such as data heterogeneity, inconsistency and interoperability. In this paper, we present a Semantic Web-based, multi-strategy reasoning approach, which integrates deductive and plausible reasoning and exploits Semantic Web technology to solve complex clinical decision support queries. We evaluated our system using a real-world medical dataset of patients with hepatitis, from which we randomly removed different percentages of data (5%, 10%, 15%, and 20%) to reflect scenarios with increasing amounts of incomplete medical knowledge. To increase the reliability of the results, we generated 5 independent datasets for each percentage of missing values, which resulted in 20 experimental datasets (in addition to the original dataset).
The results show that plausibly inferred knowledge extends the coverage of the knowledge base by, on average, 2%, 7%, 12%, and 16% for datasets with, respectively, 5%, 10%, 15%, and 20% of missing values. This expansion in the KB coverage allowed solving complex disease diagnostic queries that were previously unresolvable, without losing the correctness of the answers. However, compared to deductive reasoning, data-intensive plausible reasoning mechanisms yield a significant performance overhead. We observed that plausible reasoning approaches, by generating tentative inferences and leveraging domain knowledge of experts, allow us to extend the coverage of medical knowledge bases, resulting in improved clinical decision support. Second, by leveraging OWL ontological knowledge, we are able to increase the expressivity and accuracy of plausible reasoning methods. Third, our approach is applicable to clinical decision support systems for a range of chronic diseases.

  19. Using AberOWL for fast and scalable reasoning over BioPortal ontologies.

    PubMed

    Slater, Luke; Gkoutos, Georgios V; Schofield, Paul N; Hoehndorf, Robert

    2016-08-08

    Reasoning over biomedical ontologies using their OWL semantics has traditionally been a challenging task due to the high theoretical complexity of OWL-based automated reasoning. As a consequence, ontology repositories, as well as most other tools utilizing ontologies, either provide access to ontologies without use of automated reasoning, or limit the number of ontologies for which automated reasoning-based access is provided. We apply the AberOWL infrastructure to provide automated reasoning-based access to all accessible and consistent ontologies in BioPortal (368 ontologies). We perform an extensive performance evaluation to determine query times, both for queries of different complexity and for queries that are performed in parallel over the ontologies. We demonstrate that, with the exception of a few ontologies, even complex and parallel queries can now be answered in milliseconds, therefore allowing automated reasoning to be used on a large scale, to run in parallel, and with rapid response times.

  20. Neural correlates of post-conventional moral reasoning: a voxel-based morphometry study.

    PubMed

    Prehn, Kristin; Korczykowski, Marc; Rao, Hengyi; Fang, Zhuo; Detre, John A; Robertson, Diana C

    2015-01-01

    Going back to Kohlberg, moral development research affirms that people progress through different stages of moral reasoning as cognitive abilities mature. Individuals at a lower level of moral reasoning judge moral issues mainly based on self-interest (personal interests schema) or based on adherence to laws and rules (maintaining norms schema), whereas individuals at the post-conventional level judge moral issues based on deeper principles and shared ideals. However, the extent to which moral development is reflected in structural brain architecture remains unknown. To investigate this question, we used voxel-based morphometry and examined the brain structure in a sample of 67 Master of Business Administration (MBA) students. Subjects completed the Defining Issues Test (DIT-2) which measures moral development in terms of cognitive schema preference. Results demonstrate that subjects at the post-conventional level of moral reasoning were characterized by increased gray matter volume in the ventromedial prefrontal cortex and subgenual anterior cingulate cortex, compared with subjects at a lower level of moral reasoning. Our findings support an important role for both cognitive and emotional processes in moral reasoning and provide first evidence for individual differences in brain structure according to the stages of moral reasoning first proposed by Kohlberg decades ago.

  1. Clinical reasoning of Filipino physical therapists: Experiences in a developing nation.

    PubMed

    Rotor, Esmerita R; Capio, Catherine M

    2018-03-01

    Clinical reasoning is essential for physical therapists to engage in the process of client care, and has been known to contribute to professional development. The literature on clinical reasoning and experiences has been based on studies from Western and developed nations, from which multiple influencing factors have been found. A developing nation, the Philippines, has distinct social, economic, political, and cultural circumstances. Using a phenomenological approach, this study explored the experiences of Filipino physical therapists with clinical reasoning. Ten therapists working in three settings: 1) hospital; 2) outpatient clinic; and 3) home health were interviewed. Major findings were: a prescription-based referral system limited clinical reasoning; procedural reasoning was a commonly experienced strategy, while diagnostic and predictive reasoning were limited; and factors that influenced clinical reasoning included the practice setting and the professional relationship with the referring physician. Physical therapists' responses suggested a lack of autonomy in practice that appeared to stifle clinical reasoning. Based on our findings, we recommend that the current regulations governing PT practice in the Philippines be updated, and we encourage educators to strengthen teaching approaches and strategies that support clinical reasoning. These recommendations are consistent with the global trend toward autonomous practice.

  2. The rate of radon remediation in Ireland 2011-2015: Establishing a base line rate for Ireland's National Radon Control Strategy.

    PubMed

    Dowdall, A; Fenton, D; Rafferty, B

    2016-10-01

    Radon is the greatest source of radiation exposure to the public. In Ireland, it is estimated that approximately 7% of the national housing stock has radon concentrations above the Reference Level of 200 Bq m⁻³. A radon test can be carried out to identify homes with radon levels above the Reference Level; however, there is no health benefit associated with radon testing unless it leads to remediation. Surveys to establish the rate of remediation in Ireland, that is, the proportion of householders who, having found radon levels above the Reference Level, proceed to carry out remediation work, were conducted in 2011 and 2013. Reasons for not carrying out remediation work were also investigated. In 2015 the survey was repeated to establish the current rate of remediation and the reasons for not remediating. This report presents the results of that survey and compiles the data from all three surveys to identify any trends over time. The rate of remediation is an important parameter in estimating the effectiveness of programmes aimed at reducing radon levels. Currently the rate of remediation is 22%, and the main reasons householders gave for not remediating were uncertainty that there is a serious risk and concern about the cost of the work. In Ireland, this figure of 22% will now be used as a baseline metric against which the effectiveness of the National Radon Control Strategy will be measured over time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Deductive reasoning, brain maturation, and science concept acquisition: Are they linked?

    NASA Astrophysics Data System (ADS)

    Lawson, Anton E.

    The present study tested the alternative hypotheses that the poor performance of the intuitive and transitional students on the concept acquisition tasks employed in the Lawson et al. (1991) study was due either to their failure (a) to use deductive reasoning to test potentially relevant task features, as suggested by Lawson et al. (1991); (b) to identify potentially relevant features; or (c) to derive and test a successful problem-solving strategy. To test these hypotheses a training session, which consisted of a series of seven concept acquisition tasks, was designed to reveal to students key task features and the deductive reasoning pattern necessary to solve the tasks. The training was individually administered to students (ages 5-14 years). Results revealed that none of the five- and six-year-olds, approximately half of the seven-year-olds, and virtually all of the students eight years and older responded successfully to the training. These results are viewed as contradictory to the hypothesis that the intuitive and transitional students in the Lawson et al. (1991) study lacked the reasoning skills necessary to identify and test potentially relevant task features. Instead, the results support the hypothesis that their poor performance was due to their failure to use hypothetico-deductive reasoning to derive an effective strategy. Previous research is cited that indicates that the brain's frontal lobes undergo a pronounced growth spurt from about four years of age to about seven years of age. In fact, the performance of normal six-year-olds and adults with frontal lobe damage on tasks such as the Wisconsin Card Sorting Task (WCST), a task similar in many ways to the present concept acquisition tasks, has been found to be identical. Consequently, the hypothesis is advanced that maturation of the frontal lobes can explain the striking improvement in performance at age seven. 
A neural network of the role of the frontal lobes in task performance based upon the work of Levine and Prueitt (1989) is presented. The advance in reasoning that presumably results from effective operation of the frontal lobes is seen as a fundamental advance in intellectual development because it enables children to employ an inductive-deductive reasoning pattern to change their minds when confronted with contradictory evidence regarding features of perceptible objects, a skill necessary for descriptive concept acquisition. It is suggested that a further qualitative advance in intellectual development occurs when an analogous pattern of abductive-deductive reasoning is applied to hypothetical objects and/or processes to allow for alternative hypothesis testing and theoretical concept acquisition. Apparently this is the reasoning pattern needed to derive an effective problem-solving strategy to solve the concept acquisition tasks of Lawson et al. (1991) when direct instruction is not provided. Implications for the science classroom are suggested.

  4. How Uncertain is Uncertainty?

    NASA Astrophysics Data System (ADS)

    Vámos, Tibor

The gist of the paper is the fundamentally uncertain nature of all kinds of uncertainty, and consequently a critical epistemic review of historical and recent approaches, computational methods, and algorithms. The review follows the development of the notion from the beginnings of thinking, via the Aristotelian and Skeptic views, medieval nominalism, and the influential pioneering metaphors of ancient India and Persia, to the birth of modern mathematical disciplinary reasoning. Discussing models of uncertainty, e.g. their statistical, other physical, and psychological background, we reach a pragmatic, model-related estimation perspective: a balanced application orientation for different problem areas. Data mining, game theories, and recent advances in approximation algorithms are discussed in this spirit of modest reasoning.

  5. Non-Hodgkin’s Lymphomas, Version 4.2014

    PubMed Central

    Zelenetz, Andrew D.; Gordon, Leo I.; Wierda, William G.; Abramson, Jeremy S.; Advani, Ranjana H.; Andreadis, C. Babis; Bartlett, Nancy; Byrd, John C.; Czuczman, Myron S.; Fayad, Luis E.; Fisher, Richard I.; Glenn, Martha J.; Harris, Nancy Lee; Hoppe, Richard T.; Horwitz, Steven M.; Kelsey, Christopher R.; Kim, Youn H.; Krivacic, Susan; LaCasce, Ann S.; Nademanee, Auayporn; Porcu, Pierluigi; Press, Oliver; Rabinovitch, Rachel; Reddy, Nishitha; Reid, Erin; Saad, Ayman A.; Sokol, Lubomir; Swinnen, Lode J.; Tsien, Christina; Vose, Julie M.; Yahalom, Joachim; Zafar, Nadeem; Dwyer, Mary; Sundar, Hema

    2016-01-01

    Non-Hodgkin’s lymphomas (NHL) are a heterogeneous group of lymphoproliferative disorders originating in B lymphocytes, T lymphocytes, or natural killer cells. Mantle cell lymphoma (MCL) accounts for approximately 6% of all newly diagnosed NHL cases. Radiation therapy with or without systemic therapy is a reasonable approach for the few patients who present with early-stage disease. Rituximab-based chemoimmunotherapy followed by high-dose therapy and autologous stem cell rescue (HDT/ASCR) is recommended for patients presenting with advanced-stage disease. Induction therapy followed by rituximab maintenance may provide extended disease control for those who are not candidates for HDT/ASCR. Ibrutinib, a Bruton tyrosine kinase inhibitor, was recently approved for the treatment of relapsed or refractory disease. This manuscript discusses the recommendations outlined in the NCCN Guidelines for NHL regarding the diagnosis and management of patients with MCL. PMID:25190696

  6. Low-order modeling of internal heat transfer in biomass particle pyrolysis

    DOE PAGES

    Wiggins, Gavin M.; Daw, C. Stuart; Ciesielski, Peter N.

    2016-05-11

We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate-limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement if the appropriate equivalent spherical diameter and bulk thermal properties are used. Here, we conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.
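As an illustrative sketch only (not the authors' code): the kind of low-order model the abstract describes can be approximated by one-dimensional transient conduction in an equivalent sphere with convective heating at the surface. The property values below (wood-like conductivity, density, heat capacity) and the boundary-condition parameters are assumptions chosen for illustration.

```python
import numpy as np

def sphere_heating(R=0.5e-3, k=0.2, rho=500.0, cp=2300.0,
                   h=500.0, T0=300.0, Tinf=773.0,
                   nr=50, t_end=1.0):
    """Explicit finite-difference solution of 1-D transient conduction
    in a sphere of radius R [m], initially at T0 [K], heated by a gas
    at Tinf [K] through a convective coefficient h [W/m^2/K]."""
    alpha = k / (rho * cp)            # thermal diffusivity [m^2/s]
    dr = R / (nr - 1)
    dt = 0.1 * dr**2 / alpha          # well inside explicit stability limit
    r = np.linspace(0.0, R, nr)
    T = np.full(nr, T0)
    t = 0.0
    while t < t_end:
        Tn = T.copy()
        # interior nodes: dT/dt = alpha * (T'' + (2/r) T')
        T[1:-1] = Tn[1:-1] + alpha*dt*(
            (Tn[2:] - 2*Tn[1:-1] + Tn[:-2]) / dr**2
            + (2.0 / r[1:-1]) * (Tn[2:] - Tn[:-2]) / (2*dr))
        # symmetry at the centre: Laplacian limit gives 6*(T1 - T0)/dr^2
        T[0] = Tn[0] + 6*alpha*dt*(Tn[1] - Tn[0]) / dr**2
        # convective surface: k*dT/dr = h*(Tinf - T_surf), one-sided difference
        T[-1] = (T[-2] + (h*dr/k)*Tinf) / (1 + h*dr/k)
        t += dt
    return r, T
```

For heating from the surface, the computed profile should rise monotonically from centre to surface and stay bounded by the gas temperature; the 3-D comparisons in the paper are what justify collapsing a real, aspherical particle onto such an equivalent sphere.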

  7. Waste from grocery stores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lieb, K.

    1993-11-01

The Community Recycling Center, Inc., (CRC, Champaign, Ill.), last year conducted a two-week audit of waste generated at two area grocery stores. The stores surveyed are part of a 10-store chain. For two of the Kirby Foods Stores, old corrugated containers (OCC) accounted for 39-45% of all waste. The summary drew correlations between the amount of OCC and the sum of food and garbage waste. The study suggested that one can reasonably estimate volumes of waste based on the amount of OCC because most things come in a box. Auditors set up a series of containers to make the collection process straightforward. Every day the containers were taken to local recycling centers and weighed. Approximate waste breakdowns for the two stores were as follows: 45% OCC; 35% food waste; 20% nonrecyclable or noncompostable items; and 10% other.

  8. Hybrids of Nucleic Acids and Carbon Nanotubes for Nanobiotechnology

    PubMed Central

    Umemura, Kazuo

    2015-01-01

    Recent progress in the combination of nucleic acids and carbon nanotubes (CNTs) has been briefly reviewed here. Since discovering the hybridization phenomenon of DNA molecules and CNTs in 2003, a large amount of fundamental and applied research has been carried out. Among thousands of papers published since 2003, approximately 240 papers focused on biological applications were selected and categorized based on the types of nucleic acids used, but not the types of CNTs. This survey revealed that the hybridization phenomenon is strongly affected by various factors, such as DNA sequences, and for this reason, fundamental studies on the hybridization phenomenon are important. Additionally, many research groups have proposed numerous practical applications, such as nanobiosensors. The goal of this review is to provide perspective on biological applications using hybrids of nucleic acids and CNTs. PMID:28347014

  9. Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory

    DOE PAGES

    Liu, C.; Liu, J.; Yao, Y. X.; ...

    2017-01-16

Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground-state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way we minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  10. Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, J.; Yao, Y. X.

Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground-state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way we minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  11. Optical phonon modes and polaron related parameters in GaxIn1-xP

    NASA Astrophysics Data System (ADS)

    Bouarissa, N.; Algarni, H.; Al-Hagan, O. A.; Khan, M. A.; Alhuwaymel, T. F.

    2018-02-01

Based on a pseudopotential approach under the virtual crystal approximation that includes the effect of compositional disorder, the optical lattice vibration frequencies and polaron-related parameters in zinc-blende GaxIn1-xP have been studied. Our findings generally show reasonably good accord with data in the literature; where no data exist, our results serve as predictions. The composition dependence of the longitudinal optical (LO) and transverse optical (TO) phonon modes, LO-TO splittings, Fröhlich coupling parameter, Debye temperature of the LO phonon frequency, and polaron effective mass has been analyzed and discussed. While a non-monotonic behavior has been noticed for the LO and TO phonon frequencies versus Ga concentration x, a monotonic behavior has been observed for the rest of the features of interest. The information derived from this investigation may be useful for optoelectronic technological applications.

  12. Attitude of Israeli mothers with vaccination of their daughters against human papilloma virus.

    PubMed

    Ben Natan, Merav; Aharon, Osnat; Palickshvili, Sharon; Gurman, Vicky

    2011-02-01

The purpose of the study is to examine whether a model based on the Theory of Reasoned Action (TRA) succeeds in predicting mothers' intention to vaccinate their daughters against human papilloma virus infection. Questionnaires were distributed among a convenience sample of 103 mothers of daughters aged 18 years and younger. Approximately 65% of mothers intend to vaccinate their daughters. Behavioral beliefs, normative beliefs, and level of knowledge had a significant positive effect on mothers' intention to vaccinate their daughters. High levels of religiosity were found to negatively affect mothers' intention to vaccinate their daughters. The TRA, combined with level of knowledge and level of religiosity, succeeds in predicting mothers' behavioral intentions regarding vaccinating daughters. This indicates the significance of nurses' roles in imparting information and increasing awareness among mothers.

  13. Formation of Minor Phases in a Nickel-Based Disk Superalloy

    NASA Technical Reports Server (NTRS)

    Gabb, T. P.; Garg, A.; Miller, D. R.; Sudbrack, C. K.; Hull, D. R.; Johnson, D.; Rogers, R. B.; Gayda, J.; Semiatin, S. L.

    2012-01-01

    The minor phases of powder metallurgy disk superalloy LSHR were studied. Samples were consistently heat treated at three different temperatures for long times to approximate equilibrium. Additional heat treatments were also performed for shorter times, to then assess non-equilibrium conditions. Minor phases including MC carbides, M23C6 carbides, M3B2 borides, and sigma were identified. Their transformation temperatures, lattice parameters, compositions, average sizes and total area fractions were determined, and compared to estimates of an existing phase prediction software package. Parameters measured at equilibrium sometimes agreed reasonably well with software model estimates, with potential for further improvements. Results for shorter times representing non-equilibrium indicated significant potential for further extension of the software to such conditions, which are more commonly observed during heat treatments and service at high temperatures for disk applications.

  14. Coulomb drag in electron-hole bilayer: Mass-asymmetry and exchange correlation effects

    NASA Astrophysics Data System (ADS)

    Arora, Priya; Singh, Gurvinder; Moudgil, R. K.

    2018-04-01

Motivated by a recent experiment by Zheng et al. [Appl. Phys. Lett. 108, 062102 (2016)] on Coulomb drag in electron-hole and hole-hole bilayers based on GaAs/AlGaAs semiconductor heterostructures, we investigate theoretically the influence of mass-asymmetry and the temperature-dependence of correlations on the drag rate. The correlation effects are dealt with using the Vignale-Singwi effective inter-layer interaction model, which includes correlations through local-field corrections to the bare Coulomb interactions. However, in this work, we have incorporated only the intra-layer correlations, using the temperature-dependent Hubbard approximation. Our results display reasonably good agreement with the experimental data; however, it is crucial to include both the electron-hole mass-asymmetry and the temperature-dependence of correlations. Mass-asymmetry and correlations are found to result in a substantial enhancement of the drag resistivity.

  15. Low-Order Modeling of Internal Heat Transfer in Biomass Particle Pyrolysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiggins, Gavin M.; Ciesielski, Peter N.; Daw, C. Stuart

    2016-06-16

We present a computationally efficient, one-dimensional simulation methodology for biomass particle heating under conditions typical of fast pyrolysis. Our methodology is based on identifying the rate-limiting geometric and structural factors for conductive heat transport in biomass particle models with realistic morphology to develop low-order approximations that behave appropriately. Comparisons of transient temperature trends predicted by our one-dimensional method with three-dimensional simulations of woody biomass particles reveal good agreement if the appropriate equivalent spherical diameter and bulk thermal properties are used. We conclude that, for particle sizes and heating regimes typical of fast pyrolysis, it is possible to simulate biomass particle heating with reasonable accuracy and minimal computational overhead, even when variable size, aspherical shape, anisotropic conductivity, and complex, species-specific internal pore geometry are incorporated.

  16. Nozzle Free Jet Flows Within the Strong Curved Shock Regime

    NASA Technical Reports Server (NTRS)

    Shih, Tso-Shin

    1975-01-01

    A study based on inviscid analysis was conducted to examine the flow field produced from a convergent-divergent nozzle when a strong curved shock occurs. It was found that a certain constraint is imposed on the flow solution of the problem which is the unique feature of the flow within this flow regime, and provides the reason why the inverse method of calculation cannot be employed for these problems. An approximate method was developed to calculate the flow field, and results were obtained for two-dimensional flows. Analysis and calculations were performed for flows with axial symmetry. It is shown that under certain conditions, the vorticity generated at the jet boundary may become infinite and the viscous effect becomes important. Under other conditions, the asymptotic free jet height as well as the corresponding shock geometry were determined.

  17. Improving Critical Thinking Using a Web-Based Tutorial Environment.

    PubMed

    Wiesner, Stephen M; Walker, J D; Creeger, Craig R

    2017-01-01

With a broad range of subject matter, students often struggle to recognize relationships between content in different subject areas. A scenario-based learning environment (SaBLE) has been developed to enhance clinical reasoning and critical thinking among undergraduate students in a medical laboratory science program and to help them integrate their new knowledge. SaBLE incorporates aspects of both cognitive theory and instructional design, including reduction of extraneous cognitive load, goal-based learning, feedback timing, and game theory. SaBLE is a website application that runs in most browsers and devices, and is used to deliver randomly selected scenarios that challenge user thinking in almost any scenario-based instruction. User progress is recorded to allow comprehensive data analysis of changes in user performance. Participation is incentivized using a point system and digital badges or awards. SaBLE was deployed in one course, with a total exposure for the treatment group of approximately 9 weeks. When assessing the performance of SaBLE participants, and controlling for grade point average as a possible confounding variable, there was a statistically significant correlation between the number of SaBLE levels completed and performance on selected critical-thinking exam questions addressing unrelated content.

  18. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. Unfortunately, however, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data using their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on the analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm displays a higher total parallel access probability than those of other algorithms by approximately 10-15% and that the performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
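The two stages the abstract describes can be sketched minimally: build an access-correlation matrix from log entries that record which image blocks were requested together, then place blocks greedily so that strongly correlated blocks land on different nodes and can be fetched in parallel. The function names and the tie-breaking load penalty below are invented for illustration; the paper's heuristic is more elaborate.

```python
from collections import defaultdict
from itertools import combinations

def access_correlation(log):
    """Count how often each pair of blocks is requested together.
    `log` is a list of requests, each a set of block ids."""
    corr = defaultdict(int)
    for request in log:
        for a, b in combinations(sorted(request), 2):
            corr[(a, b)] += 1
    return corr

def assign_blocks(blocks, corr, n_nodes):
    """Greedy heuristic: put each block on the node where it is least
    correlated with the blocks already stored there, with a small
    load-balancing penalty to break ties."""
    placement = {}
    stored = defaultdict(list)
    for blk in blocks:
        def penalty(node):
            co = sum(corr.get(tuple(sorted((blk, other))), 0)
                     for other in stored[node])
            return co + 0.01 * len(stored[node])
        node = min(range(n_nodes), key=penalty)
        placement[blk] = node
        stored[node].append(blk)
    return placement
```

On a toy log where blocks "a" and "b" are always co-accessed, the heuristic separates them across nodes, which is exactly the property that raises the total parallel-access probability the experiments measure.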

  19. Transformation based endorsement systems

    NASA Technical Reports Server (NTRS)

    Sudkamp, Thomas

    1988-01-01

Evidential reasoning techniques classically represent support for a hypothesis by a numeric value or an evidential interval. The combination of support is performed by an arithmetic rule which often requires restrictions to be placed on the set of possibilities; these assumptions usually require the hypotheses to be exhaustive and mutually exclusive. Endorsement-based classification systems represent support for the alternatives symbolically rather than numerically. A framework for constructing endorsement systems is presented in which transformations are defined to generate and update the knowledge base. The interaction of the knowledge base and transformations produces a non-monotonic reasoning system. Two endorsement-based reasoning systems are presented to demonstrate the flexibility of the transformational approach for reasoning with ambiguous and inconsistent information.
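A toy illustration (not the paper's system) of the symbolic idea: support is a set of endorsement tags rather than a number, and transformations that add or withdraw tags make the reasoning non-monotonic. The tag names and the ranking scheme here are assumptions for the example.

```python
def endorse(kb, hypothesis, tag):
    """Transformation: add symbolic support for a hypothesis."""
    kb.setdefault(hypothesis, set()).add(tag)

def retract(kb, hypothesis, tag):
    """Non-monotonic transformation: later evidence withdraws support."""
    kb.get(hypothesis, set()).discard(tag)

def preferred(kb, ranking):
    """Pick the hypothesis whose strongest endorsement ranks highest;
    `ranking` orders tags from weakest to strongest."""
    def strength(h):
        return max((ranking.index(t) for t in kb[h] if t in ranking),
                   default=-1)
    return max(kb, key=strength)
```

Because retracting a tag can change which hypothesis is preferred, conclusions are defeasible, which is the flexibility the transformational approach offers for ambiguous and inconsistent information.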

  20. Born approximation for scattering by evanescent waves: Comparison with exact scattering by an infinite fluid cylinder

    NASA Astrophysics Data System (ADS)

    Marston, Philip L.

    2004-05-01

In some situations, evanescent waves can be an important component of the acoustic field within the sea bottom. For this reason (as well as to advance the understanding of scattering processes) it can be helpful to examine the modifications to scattering theory resulting from evanescence. Modifications to ray theory were examined in a prior approximation [P. L. Marston, J. Acoust. Soc. Am. 113, 2320 (2003)]. The new research concerns the modifications to the low-frequency Born approximation and confirmation by comparison with the exact two-dimensional scattering by a fluid cylinder. In the case of a circular cylinder having the same density as the surroundings but a compressibility contrast with them, the Born approximation with a nonevanescent incident wave gives only monopole scattering. When the cylinder has a density contrast and the same compressibility as the surroundings, the regular Born approximation gives only dipole scattering (with the dipole oriented along the incident wavevector). In both cases, when the Born approximation is modified to include the evanescence of the incident wave, an additional dipole scattering term is evident. In each case the new dipole is oriented along the decay axis of the evanescent wave. [Research supported by ONR.]

  1. Atmospheric oxygenation driven by unsteady growth of the continental sedimentary reservoir

    NASA Astrophysics Data System (ADS)

    Husson, Jon M.; Peters, Shanan E.

    2017-02-01

Atmospheric oxygen concentration has increased over Earth history, from ∼0 before 2.5 billion years ago to its present-day concentration of 21%. The initial rise in pO2 approximately 2.3 billion years ago required oxygenic photosynthesis, but the evolution of this key metabolic pathway was not sufficient to propel atmospheric oxygen to modern levels, which were not sustained until approximately two billion years later. The protracted lag between the origin of oxygenic photosynthesis and abundant O2 in the surface environment has many implications for the evolution of animals, but the reasons for the delay remain unknown. Here we show that the history of sediment accumulation on continental crust covaries with the history of atmospheric oxygen concentration. A forward model based on the empirical record of net organic carbon burial and oxidative weathering of the crust predicts two significant rises in pO2 separated by three comparatively stable plateaus, a pattern that reproduces major biological transitions and proxy-based pO2 records. These results suggest that the two-phased oxygenation of Earth's surface environment, and the long delays between the origin of life, the evolution of metazoans, and their subsequent diversification during the Cambrian Explosion, were caused by step-wise shifts in the ability of the continents to accumulate and store sedimentary organic carbon. The geodynamic mechanisms that promote and inhibit sediment accumulation on continental crust have, therefore, exerted a first-order control on the evolution of Earth's life and environment.

  2. Informal reasoning regarding socioscientific issues: The influence of morality and content knowledge

    NASA Astrophysics Data System (ADS)

    Sadler, Troy Dow

This study focused on informal reasoning regarding socioscientific issues. It explored how morality and content knowledge influenced the negotiation and resolution of contentious and complex scenarios based on genetic engineering. Two hundred and sixty-nine undergraduate students completed a quantitative test of genetics concepts. A sub-set of the students (n = 30) who completed this instrument and represented divergent levels of content knowledge participated in two individual interviews, during which they discussed their ideas, reactions, and solutions to three gene therapy scenarios and three cloning scenarios. A mixed-methods approach was used to examine patterns of informal reasoning and the influence of morality, the effects of content knowledge on the use of informal reasoning patterns, and the effects of content knowledge on the quality of informal reasoning. Students demonstrated evidence of rationalistic, emotive, and intuitive forms of informal reasoning. Rationalistic informal reasoning described reason-based considerations; emotive informal reasoning described care-based considerations; and intuitive reasoning described considerations based on immediate reactions to the context of a scenario. Participants frequently relied on combinations of these reasoning patterns as they worked to resolve individual socioscientific scenarios. Most of the participants appreciated at least some of the moral implications of their decisions, and these considerations were typically interwoven within an overall pattern of informal reasoning. Although differences in content knowledge were not found to be related to modes of informal reasoning (rationalistic, emotive, and intuitive), data did indicate that differences in content knowledge were related to variations in informal reasoning quality.
Participants with more advanced understandings of genetics demonstrated fewer instances of reasoning flaws, as defined by a priori criteria (intra-scenario coherence, inter-scenario non-contradiction, counter-position construction, and rebuttal construction), and were more likely to incorporate content knowledge in their reasoning patterns than participants with more naive understandings of genetics. These results highlight the need to ensure that science classrooms are environments in which intuition and emotion, in addition to reason, are valued. In addition, the findings underscore the need for teachers to consider students' content knowledge when determining the appropriateness of socioscientific curricula. Implications and recommendations for future research are discussed.

  3. Emotional reasoning and parent-based reasoning in non-clinical children, and their prospective relationships with anxiety symptoms.

    PubMed

    Morren, Mattijn; Muris, Peter; Kindt, Merel; Schouten, Erik; van den Hout, Marcel

    2008-12-01

Emotional and parent-based reasoning refer to the tendency to rely on personal or parental anxiety-response information rather than on objective danger information when estimating the dangerousness of a situation. This study investigated the prospective relationships of emotional and parent-based reasoning with anxiety symptoms in a sample of non-clinical children aged 8-14 years (n = 122). Children completed the anxiety subscales of the Revised Children's Anxiety and Depression Scale (Muris et al. Clin Psychol Psychother 9:430-442, 2002) and provided danger ratings of scenarios that systematically combined objective danger and objective safety information with anxiety-response and positive-response information. These measurements were repeated 10 months later (range 8-11 months). Emotional and parent-based reasoning effects emerged on both occasions. In addition, both effects were modestly stable, but only in the case of objective safety. Evidence was found that initial anxiety levels were positively related to emotional reasoning 10 months later. Initial levels of emotional reasoning were likewise positively related to anxiety at a later time, but only when age was taken into account; that is, this relationship changed with increasing age from positive to negative. No significant prospective relationships emerged between anxiety and parent-based reasoning. As yet the clinical implications of these findings are limited, although preliminary evidence indicates that interpretation bias can be modified to decrease anxiety.

  4. Impairment of sexual activity in middle-aged women in Chile.

    PubMed

    Blümel, Juan Enrique; Castelo-Branco, Camil; Cancelo, María Jesús; Romero, Hernán; Aprikian, Daniel; Sarrá, Salvador

    2004-01-01

    It has been suggested that approximately 40% of women between 40 and 64 years of age cease their sexual activity. Our objective was to examine the reasons that sexual activity has stopped and to determine the effect that this behavior has on the marital stability of those middle-aged women. A total of 534 healthy women between 40 and 64 years of age who were attending the Southern Metropolitan Health Service in Santiago, Chile, were asked to take part in the study. The main reasons for sexual inactivity in middle-aged women were sexual dysfunction (49.2%), unpleasant personal relationship with a partner (17.9%), and lack of a partner (17.7%). These reasons vary with aging; in women younger than 45 years, the most frequent reason was erectile dysfunction (40.7%); in those between 45 and 59, low sexual desire (40.5%); and, in women older than 60 years, the lack of a partner (32.4%). Sexual inactivity did not affect marital stability because women without sexual relationships (68.2% of the entire sample) were married. Among the divorced women, female sexual dysfunction was responsible for only 11.7% of the separations. Low sexual desire is the main reason for ceasing sexual activity. Nevertheless, stopping sexual relationships does not seem to be important in marital stability.

  5. Fuzzy logic and neural networks in artificial intelligence and pattern recognition

    NASA Astrophysics Data System (ADS)

    Sanchez, Elie

    1991-10-01

With the use of fuzzy logic techniques, neural computing can be integrated in symbolic reasoning to solve complex real-world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A model of Fuzzy Connectionist Expert System is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for the specification of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists in finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndromes/protein profiles). Then, it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number, for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.
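The min-max cell activation in an AND-OR structure can be sketched as follows, using only the numerical weights (the linguistic weights of the model are omitted); the particular weight values and the two-cell topology are illustrative assumptions.

```python
def fuzzy_or_cell(x, w):
    """OR cell: max-min composition, activation = max_i min(w_i, x_i)."""
    return max(min(wi, xi) for wi, xi in zip(w, x))

def fuzzy_and_cell(x, w):
    """AND cell: dual min-max composition, activation = min_i max(w_i, x_i)."""
    return min(max(wi, xi) for wi, xi in zip(w, x))

# tiny AND-OR structure: two hidden AND cells feed one OR output cell
x = [0.7, 0.3, 0.9]                          # input membership degrees
hidden = [fuzzy_and_cell(x, [0.1, 0.8, 0.2]),
          fuzzy_and_cell(x, [0.6, 0.1, 0.4])]
y = fuzzy_or_cell(hidden, [0.9, 0.7])        # overall degree of match
```

Learning in the model then amounts to searching for the numerical weights and the topology that reproduce the training examples, rather than gradient descent on a differentiable activation.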

  6. Comparative Study on Prediction Effects of Short Fatigue Crack Propagation Rate by Two Different Calculation Methods

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Liao, Zhen; Qin, Yahang; Wu, Yayun; Liang, Sai; Xiao, Shoune; Yang, Guangwu; Zhu, Tao

    2017-05-01

To describe the complicated nonlinear process of fatigue short crack evolution behavior, especially the change of the crack propagation rate, two different calculation methods are applied. The dominant effective short fatigue crack propagation rates are calculated based on the replica fatigue short crack test with nine smooth funnel-shaped specimens and the observation of the replica films, according to the effective short fatigue cracks principle. Owing to the fast decay and nonlinear approximation ability of wavelet analysis, the self-learning ability of neural networks, and the macroscopic searching and global optimization of genetic algorithms, a genetic wavelet neural network can reflect the implicit complex nonlinear relationship when considering multiple influencing factors synthetically. The effective short fatigue cracks and the dominant effective short fatigue crack are simulated and compared by the genetic wavelet neural network. The simulation results show that the genetic wavelet neural network is a rational and available method for studying the evolution behavior of the fatigue short crack propagation rate. Meanwhile, a traditional data-fitting method for a short crack growth model is also utilized to fit the test data; it too is reasonable and applicable for predicting the growth rate. Finally, the reason for the difference between the prediction effects of these two methods is interpreted.

  7. Rheumatic Heart Disease Prophylaxis in Older Patients: A Register-Based Audit of Adherence to Guidelines

    PubMed Central

    Holland, James V; Hardie, Kate; de Dassel, Jessica; Ralph, Anna P

    2018-01-01

    Abstract Background Prevention of rheumatic heart disease (RHD) remains challenging in high-burden settings globally. After acute rheumatic fever (ARF), secondary antibiotic prophylaxis is required to prevent RHD. International guidelines on recommended durations of secondary prophylaxis differ, with scope for clinician discretion. Because ARF risk decreases with age, ongoing prophylaxis is generally considered unnecessary beyond approximately the third decade. Concordance with guidelines on timely cessation of prophylaxis is unknown. Methods We undertook a register-based audit to determine the appropriateness of antibiotic prophylaxis among clients aged ≥35 years in Australia’s Northern Territory. Data on demographics, ARF episode(s), RHD severity, prophylaxis type, and relevant clinical notes were extracted. The determination of guideline concordance was based on whether (1) national guidelines were followed; (2) a reason for departure from guidelines was documented; (3) lifelong continuation was considered appropriate in all cases of severe RHD. Results We identified 343 clients aged ≥35 years prescribed secondary prophylaxis. Guideline concordance was 39% according to national guidelines, 68% when documented reasons for departures from guidelines were included and 82% if patients with severe RHD were deemed to need lifelong prophylaxis. Shorter times since last echocardiogram or cardiologist review were associated with greater likelihood of guideline concordance (P < .001). The median time since last ARF was 5.9 years in the guideline-concordant group and 24.0 years in the nonconcordant group (P < .001). Thirty-two people had an ARF episode after age 40 years. Conclusions In this setting, appropriate discontinuation of RHD prophylaxis could be improved through timely specialist review to reduce unnecessary burden on clients and health systems.

  8. Model-Based Reasoning in Humans Becomes Automatic with Training.

    PubMed

    Economides, Marcos; Kurth-Nelson, Zeb; Lübbert, Annika; Guitart-Masip, Marc; Dolan, Raymond J

    2015-09-01

    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  9. Checklist of Major Plant Species in Ashley County, Arkansas Noted by General Land Office Surveyors

    Treesearch

    Don C. Bragg

    2002-01-01

    The original General Land Office (GLO) survey notes for the Ashley County, Arkansas, area were examined to determine the plant taxa mentioned during the 1818 to 1855 surveys. While some challenges in identifying species were encountered, at least 39 families and approximately 100 species were identified with reasonable certainty. Most references were for trees used to...

  10. Bee diversity associated with Limnanthes floral patches in California vernal pool habitats

    Treesearch

    Joan M. Leong; Robbin W. Thorp

    2005-01-01

    As with other kinds of wetland habitats in California, an estimated 90 percent of vernal pool habitat has been lost. In southern California, losses are estimated to be even greater. The flora of these endangered habitats is reasonably well known, especially the spring flowering annuals that are found in or at the margins of vernal pools (for...

  11. Relaciones Culturales de Mexico: Convenios de Intercambio Cultural y Asistencia Tecnica (Mexican Cultural Relations: Cultural Exchange and Technical Assistance Agreements).

    ERIC Educational Resources Information Center

    n10 p43-83, 1971

    1971-01-01

    This document is an English-language abstract (approximately 1500 words) describing briefly Mexico's cultural relations with 23 nations with which she has cultural exchange agreements. The reasons for cultural exchange are stated, such as the belief that cultural relations promote good relations among nations. The agreements concluded between…

  12. Feedback in Software and a Desktop Manufacturing Context for Learning Estimation Strategies in Middle School

    ERIC Educational Resources Information Center

    Malcolm, Peter

    2013-01-01

    The ability to make good estimates is essential, as is the ability to assess the reasonableness of estimates. These abilities are becoming increasingly important as digital technologies transform the ways in which people work. To estimate is to provide an approximation to a problem that is mathematical in nature, and the ability to estimate is…

  13. 76 FR 76112 - Approval and Promulgation of Implementation Plans, State of California, San Joaquin Valley...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-06

    ... particular nights are not necessarily insignificant from the standpoint of PM 10 and PM 2.5 formation... winter nights to provide frost protection for certain type of crops (like citrus) when temperatures are... reasonably be estimated at approximately 15 pounds per day of NO X .\\1\\ \\1\\ Most engines are fired on propane...

  14. Determination of transverse elastic constants of wood using a cylindrically orthotropic model

    Treesearch

    John C. Hermanson

    2003-01-01

    The arrangement of anatomical elements in the cross section of a tree can be characterized, at least to a first approximation, with a cylindrical coordinate system. It seems reasonable that the physical properties of wood in the transverse plane, therefore, would exhibit behaviour that is associated with this anatomical alignment. Most of the transverse properties of...

  15. A District View: Dropouts and the Differentiated Diploma

    ERIC Educational Resources Information Center

    Holden, E. Todd

    2012-01-01

    More students are deciding to drop out of school prior to graduation. As a result, the dropout rate has become a hot topic in education across the United States. The average high school dropout's salary is approximately 50% less than the salary of a high school graduate. The social factors are another reason the dropout rate needs to be a high…

  16. Validity of the Aluminum Equivalent Approximation in Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Adams, Daniel O.; Wilson, John W.

    2009-01-01

    The origin of the aluminum equivalent shield approximation in space radiation analysis can be traced back to its roots in the early years of the NASA space programs (Mercury, Gemini and Apollo), wherein the primary radiobiological concern was the intense sources of ionizing radiation causing short-term effects that were thought to jeopardize the safety of the crew and hence the mission. Herein, it is shown that the aluminum equivalent shield approximation, although reasonably well suited for that time period and for the application for which it was developed, is of questionable usefulness to the radiobiological concerns of routine space operations of the 21st century, which will include long stays onboard the International Space Station (ISS) and perhaps the moon. This is especially true for a risk-based protection system, as appears imminent for deep space exploration, where the long-term effects of Galactic Cosmic Ray (GCR) exposure are of primary concern. The present analysis demonstrates that sufficiently large errors in the interior particle environment of a spacecraft result from the use of the aluminum equivalent approximation, and such approximations should be avoided in future astronaut risk estimates. In this study, the aluminum equivalent approximation is evaluated as a means for estimating the particle environment within a spacecraft structure induced by the GCR radiation field. For comparison, the two extremes of the GCR environment, the 1977 solar minimum and the 2001 solar maximum, are considered. These environments are coupled to the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which propagates the GCR spectra for elements with charges (Z) in the range 1 <= Z <= 28 (H to Ni) and secondary neutrons through selected target materials. The coupling of the GCR extremes to HZETRN allows for the examination of the induced environment within the interior of an idealized spacecraft as approximated by a spherical shell shield, and the effects of the aluminum equivalent approximation for a good polymeric shield material such as generic polyethylene (PE). The shield thickness is represented by a 25 g/cm2 spherical shell. Although one could imagine the progression to greater thickness, the current range is sufficient to evaluate the qualitative usefulness of the aluminum equivalent approximation. Upon establishing the inaccuracies of the aluminum equivalent approximation through numerical simulations of the GCR radiation field attenuation for PE and aluminum equivalent PE spherical shells, we further present results for a limited set of commercially available, hydrogen-rich, multifunctional polymeric constituents to assess the effect of the aluminum equivalent approximation on their radiation attenuation response as compared to the generic PE.

  17. A diffusion approximation for ocean wave scatterings by randomly distributed ice floes

    NASA Astrophysics Data System (ADS)

    Zhao, Xin; Shen, Hayley

    2016-11-01

    This study presents a continuum approach using a diffusion approximation method to solve the scattering of ocean waves by randomly distributed ice floes. In order to model both strong and weak scattering, the proposed method decomposes the wave action density function into two parts: the transmitted part and the scattered part. For a given wave direction, the transmitted part of the wave action density is defined as the part of the wave action density in the same direction before the scattering, and the scattered part is a first-order Fourier series approximation of the directional spreading caused by scattering. An additional simplifying approximation is adopted, in which the net directional redistribution of wave action by a single scatterer is assumed to be the reflected wave action of a normally incident wave into a semi-infinite ice cover. Other required inputs include the mean shear modulus, diameter, and thickness of the ice floes, and the ice concentration. The directional spreading of wave energy from the diffusion approximation is found to be in reasonable agreement with a previous solution using the Boltzmann equation. The diffusion model provides an alternative method to implement wave scattering in an operational wave model.
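The two-part decomposition can be sketched numerically. In the toy marching scheme below, action removed from the transmitted part feeds the zeroth and first Fourier coefficients of the scattered part; the constant scattering coefficient is an assumption standing in for the reflected fraction of a normally incident wave at a semi-infinite ice cover, and the forward-peaked first harmonic is likewise illustrative:

```python
import math

def step(E_t, a0, a1, beta, dx):
    """One spatial step of the two-part decomposition.

    E_t    : transmitted wave action (original direction)
    a0, a1 : Fourier coefficients of the scattered part,
             S(theta) ~ a0/2 + a1*cos(theta - theta0)
    beta   : assumed scattering coefficient per unit distance
    """
    dE = beta * E_t * dx          # action removed from the transmitted part
    E_t -= dE
    a0 += dE / math.pi            # redistributed isotropically at zeroth order
    a1 += dE / (2 * math.pi)      # plus a forward-peaked first harmonic
    return E_t, a0, a1

E_t, a0, a1 = 1.0, 0.0, 0.0
for _ in range(1000):
    E_t, a0, a1 = step(E_t, a0, a1, beta=0.02, dx=0.1)

# Total action integrated over direction is conserved by construction:
# the a0/2 term integrates to pi*a0 over 2*pi, and the cosine term to zero.
total = E_t + math.pi * a0
```

Because each step only moves action from the transmitted part into the scattered coefficients, the directionally integrated total is conserved, which is the basic bookkeeping a diffusion-type scattering closure must satisfy.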

  18. Special Programs in Medical Library Education, 1957-1971: Part III. The Trainees *†

    PubMed Central

    Roper, Fred W.

    1974-01-01

    This report describes the personal characteristics of the former trainees and their opinions about their training program experiences. More of the degree program trainees were under thirty (71%) than was the case with the internship program trainees (45%). The male-female ratio for each of the two groups is approximately 1:4. Approximately 60% of the degree program trainees entered their training with majors in the natural or health sciences, while less than 50% of the total group hold degrees in the natural or health sciences. Slightly less than 60% of the total group of trainees were employed in medical libraries in 1971. However, 68.5% of the internship program trainees as compared to 46.0% of the degree program trainees held positions in medical libraries. The reasons cited most often for leaving medical librarianship were the lack of available positions and student status. The major reasons indicated by the former trainees for entering the medical library education programs were an interest in the biomedical subject fields, the availability of funds, and the desire to gain experience. The reactions of the former trainees to their training program experiences were favorable. PMID:4462687

  19. HIV testing patterns among urban YMSM of color.

    PubMed

    Leonard, Noelle R; Rajan, Sonali; Gwadz, Marya V; Aregbesola, Temi

    2014-12-01

    The heightened level of risk for HIV infection among Black and Latino young men who have sex with men (YMSM) is driven by multilevel influences. Using cross-sectional data, we examined HIV testing patterns among urban YMSM of color in a high-HIV seroprevalence area (ages 16 to 21 years). Self-reported frequency of testing was high, with 42% of youth reporting testing at a greater frequency than recommended guidelines. There were no differences between less frequent and more frequent testers on sexual risk behaviors. Most (80%) youth cited reassurance of HIV-negative status as a reason for testing. Further, over half of the sample reported numerous other reasons for HIV testing, which spanned individual, partner, social, and structural levels of influence. Approximately half of respondents indicated that peers, family members, and counselors influenced their motivation to get tested. Of concern, their first HIV test occurred approximately 2 years after their first sexual experience with another male. These results indicate the need to consider developmental issues as well as comprehensive, multilevel efforts to ensure that YMSM of color test at the Centers for Disease Control and Prevention-recommended frequency but not less than this or too frequently. © 2014 Society for Public Health Education.

  20. HIV Testing Patterns Among Urban YMSM of Color

    PubMed Central

    Leonard, Noelle R.; Ragan, Sonali; Gwadz, Marya V.; Aregbesola, Temi

    2015-01-01

    The heightened level of risk for HIV infection among African-American and Latino young men who have sex with men (YMSM) is driven by multi-level influences. Using cross-sectional data, we examined HIV testing patterns among urban YMSM of color in a high HIV sero-prevalence area (ages 16 to 21 years). Self-reported frequency of testing was high, with 42% of youth reporting testing at a greater frequency than recommended guidelines. There were no differences between less frequent and more frequent testers on sexual risk behaviors. Most (80%) youth cited reassurance of HIV-negative status as a reason for testing. Further, over half of the sample reported numerous other reasons for HIV testing, which spanned individual, partner, social, and structural levels of influence. Approximately half of respondents indicated that peers, family members, and counselors influenced their motivation to get tested. Of concern, youths’ first HIV test occurred approximately two years after their first sexual experience with another male. These results indicate the need to consider developmental issues as well as comprehensive, multi-level efforts to ensure that YMSM of color test at the CDC-recommended frequency, but not less than this or too frequently. PMID:24973260

  1. High frequency of ribosomal protein gene deletions in Italian Diamond-Blackfan anemia patients detected by multiplex ligation-dependent probe amplification assay

    PubMed Central

    Quarello, Paola; Garelli, Emanuela; Brusco, Alfredo; Carando, Adriana; Mancini, Cecilia; Pappi, Patrizia; Vinti, Luciana; Svahn, Johanna; Dianzani, Irma; Ramenghi, Ugo

    2012-01-01

    Diamond-Blackfan anemia is an autosomal dominant disease due to mutations in nine ribosomal protein-encoding genes. Because most mutations are loss of function and are detected by direct sequencing of coding exons, we reasoned that some of the approximately 50% of mutation-negative patients may carry a copy number variant of ribosomal protein genes. As a proof of concept, we designed a multiplex ligation-dependent probe amplification assay targeted to screen the six genes that are most frequently mutated in Diamond-Blackfan anemia patients: RPS17, RPS19, RPS26, RPL5, RPL11, and RPL35A. Using this assay we showed that deletions represent approximately 20% of all mutations. The combination of sequencing and multiplex ligation-dependent probe amplification analysis of these six genes allows the genetic characterization of approximately 65% of patients, showing that Diamond-Blackfan anemia is indisputably a ribosomopathy. PMID:22689679

  2. TPMG Northern California appointments and advice call center.

    PubMed

    Conolly, Patricia; Levine, Leslie; Amaral, Debra J; Fireman, Bruce H; Driscoll, Tom

    2005-08-01

    Kaiser Permanente (KP) has been developing its use of call centers as a way to provide an expansive set of healthcare services to KP members efficiently and cost effectively. Since 1995, when The Permanente Medical Group (TPMG) began to consolidate primary care phone services into three physical call centers, the TPMG Appointments and Advice Call Center (AACC) has become the "front office" for primary care services across approximately 89% of Northern California. The AACC provides primary care phone service for approximately 3 million Kaiser Foundation Health Plan members in Northern California and responds to approximately 1 million calls per month across the three AACC sites. A database records each caller's identity as well as the day, time, and duration of each call; reason for calling; services provided to callers as a result of calls; and clinical outcomes of calls. We here summarize this information for the period 2000 through 2003.

  3. The Relationship between American Sign Language Vocabulary and the Development of Language-Based Reasoning Skills in Deaf Children

    ERIC Educational Resources Information Center

    Henner, Jonathan

    2016-01-01

    The language-based analogical reasoning abilities of Deaf children are a controversial topic. Researchers lack agreement about whether Deaf children possess the ability to reason using language-based analogies, or whether this ability is limited by a lack of access to vocabulary, both written and signed. This dissertation examines factors that…

  4. Overcoming limitations of model-based diagnostic reasoning systems

    NASA Technical Reports Server (NTRS)

    Holtzblatt, Lester J.; Marcotte, Richard A.; Piazza, Richard L.

    1989-01-01

    The development of a model-based diagnostic system to overcome the limitations of model-based reasoning systems is discussed. It is noted that model-based reasoning techniques can be used to analyze the failure behavior and diagnosability of system and circuit designs as part of the system process itself. One goal of current research is the development of a diagnostic algorithm which can reason efficiently about large numbers of diagnostic suspects and can handle both combinational and sequential circuits. A second goal is to address the model-creation problem by developing an approach for using design models to construct the GMODS model in an automated fashion.

  5. Integrating home-based medication therapy management (MTM) services in a health system.

    PubMed

    Reidt, Shannon; Holtan, Haley; Stender, Jennifer; Salvatore, Toni; Thompson, Bruce

    2016-01-01

    To describe the integration of home-based Medication Therapy Management (MTM) into the ambulatory care infrastructure of a large urban health system and to discuss the outcomes of this service. The study was conducted in Minnesota from September 2012 to December 2013. The health system has more than 50 primary care and specialty clinics. Eighteen credentialed MTM pharmacists are located in 16 different primary care and specialty settings, with the greatest number of pharmacists providing services in the internal medicine clinic. Home-based MTM was promoted throughout the clinics within the health system. Physicians, advanced practice providers, nurses, and pharmacists could refer patients to receive MTM in their homes. A home visit had the components of a clinic-based visit and was documented in the electronic health record (EHR); however, providing the service in the home allowed for a more direct assessment of environmental factors affecting medication use. Outcome measures included the number of home MTM referrals, the reason for referral and type of referring provider, and the number and type of medication-related problems (MRPs). In the first 15 months, 74 home visits were provided to 53 patients. Sixty-six percent of the patients were referred from the internal medicine clinic. Referrals were also received from the senior care, coordinated care, and psychiatry clinics. Approximately 50% of referrals were made by physicians. More referrals (23%) were made by pharmacists than by advanced practice providers, who made 21% of referrals. The top 3 reasons for referral were nonadherence, transportation barriers, and the need for medication reconciliation with a home care nurse. Patients had a median of 3 MRPs, with the most common (40%) MRP related to compliance. Home-based MTM is feasibly delivered within the ambulatory care infrastructure of a health system with sufficient provider engagement, as demonstrated by referrals to the service. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc.
All rights reserved.

  6. Reasons for Not Participating in Scleroderma Patient Support Groups: A Cross-Sectional Study.

    PubMed

    Gumuchian, Stephanie T; Delisle, Vanessa C; Peláez, Sandra; Malcarne, Vanessa L; El-Baalbaki, Ghassan; Kwakkenbos, Linda; Jewett, Lisa R; Carrier, Marie-Eve; Pépin, Mia; Thombs, Brett D

    2018-02-01

    Peer-led support groups are an important resource for many people with scleroderma (systemic sclerosis; SSc). Little is known, however, about barriers to participation. The objective of this study was to identify reasons why some people with SSc do not participate in SSc support groups. A 21-item survey was used to assess reasons for nonattendance among SSc patients in Canada and the US. Exploratory factor analysis (EFA) was conducted, using the software Mplus 7, to group reasons for nonattendance into themes. A total of 242 people (202 women) with SSc completed the survey. EFA results indicated that a 3-factor model best described the data (χ2 [150] = 302.7; P < 0.001; Comparative Fit Index = 0.91, Tucker-Lewis Index = 0.88, root mean square error of approximation = 0.07, factor intercorrelations 0.02-0.43). The 3 identified themes, reflecting reasons for not attending SSc support groups, were personal reasons (9 items; e.g., already having enough support), practical reasons (7 items; e.g., no local support groups available), and beliefs about support groups (5 items; e.g., support groups are too negative). On average, respondents rated 4.9 items as important or very important reasons for nonattendance. The 2 items most commonly rated as important or very important were 1) already having enough support from family, friends, or others, and 2) not knowing of any SSc support groups offered in my area. SSc organizations may be able to address limitations in accessibility and concerns about SSc support groups by implementing online support groups, better informing patients about support group activities, and training support group facilitators. © 2017, American College of Rheumatology.
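As a cross-check (not part of the study itself), the reported root mean square error of approximation can be approximately reproduced from the chi-square statistic, its degrees of freedom, and the sample size using the standard formula RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1))); small differences from the reported 0.07 can arise from rounding or estimator corrections:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation from a chi-square fit statistic."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported in the abstract: chi2(150) = 302.7, N = 242 respondents.
value = rmsea(302.7, 150, 242)  # ~0.065, close to the reported 0.07
```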

  7. Neural Correlates of Post-Conventional Moral Reasoning: A Voxel-Based Morphometry Study

    PubMed Central

    Prehn, Kristin; Korczykowski, Marc; Rao, Hengyi; Fang, Zhuo; Detre, John A.; Robertson, Diana C.

    2015-01-01

    Going back to Kohlberg, moral development research affirms that people progress through different stages of moral reasoning as cognitive abilities mature. Individuals at a lower level of moral reasoning judge moral issues mainly based on self-interest (personal interests schema) or based on adherence to laws and rules (maintaining norms schema), whereas individuals at the post-conventional level judge moral issues based on deeper principles and shared ideals. However, the extent to which moral development is reflected in structural brain architecture remains unknown. To investigate this question, we used voxel-based morphometry and examined the brain structure in a sample of 67 Master of Business Administration (MBA) students. Subjects completed the Defining Issues Test (DIT-2) which measures moral development in terms of cognitive schema preference. Results demonstrate that subjects at the post-conventional level of moral reasoning were characterized by increased gray matter volume in the ventromedial prefrontal cortex and subgenual anterior cingulate cortex, compared with subjects at a lower level of moral reasoning. Our findings support an important role for both cognitive and emotional processes in moral reasoning and provide first evidence for individual differences in brain structure according to the stages of moral reasoning first proposed by Kohlberg decades ago. PMID:26039547

  8. Minimally inconsistent reasoning in Semantic Web.

    PubMed

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, in which the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits the advantages of both non-monotonic reasoning and paraconsistent reasoning. A sound and complete tableau-based algorithm, called multi-valued tableaux, is developed to capture minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL that allows for different underlying paraconsistent semantics, with the only difference being the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning.

  9. Minimally inconsistent reasoning in Semantic Web

    PubMed Central

    Zhang, Xiaowang

    2017-01-01

    Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions while tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, in which the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits the advantages of both non-monotonic reasoning and paraconsistent reasoning. A sound and complete tableau-based algorithm, called multi-valued tableaux, is developed to capture minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL that allows for different underlying paraconsistent semantics, with the only difference being the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning. PMID:28750030

  10. Content-related interactions and methods of reasoning within self-initiated organic chemistry study groups

    NASA Astrophysics Data System (ADS)

    Christian, Karen Jeanne

    2011-12-01

    Students often use study groups to prepare for class or exams; yet to date, we know very little about how these groups actually function. This study looked at the ways in which undergraduate organic chemistry students prepared for exams through self-initiated study groups. We sought to characterize the methods of social regulation, levels of content processing, and types of reasoning processes used by students within their groups. Our analysis showed that groups engaged in predominantly three types of interactions when discussing chemistry content: co-construction, teaching, and tutoring. Although each group engaged in each of these types of interactions at some point, their prevalence varied between groups and group members. Our analysis suggests that the types of interactions that were most common depended on the relative content knowledge of the group members as well as on the difficulty of the tasks in which they were engaged. Additionally, we were interested in characterizing the reasoning methods used by students within their study groups. We found that students used a combination of three content-relevant methods of reasoning: model-based reasoning, case-based reasoning, or rule-based reasoning, in conjunction with one chemically-irrelevant method of reasoning: symbol-based reasoning. The most common way for groups to reason was to use rules, whereas the least common way was for students to work from a model. In general, student reasoning correlated strongly to the subject matter to which students were paying attention, and was only weakly related to student interactions. Overall, results from this study may help instructors to construct appropriate tasks to guide what and how students study outside of the classroom. We found that students had a decidedly strategic approach in their study groups, relying heavily on material provided by their instructors, and using the reasoning strategies that resulted in the lowest levels of content processing. 
We suggest that instructors create more opportunities for students to explore model-based reasoning and to co-construct knowledge collaboratively within the context of their organic chemistry course.

  11. Minimal-Approximation-Based Distributed Consensus Tracking of a Class of Uncertain Nonlinear Multiagent Systems With Unknown Control Directions.

    PubMed

    Choi, Yun Ho; Yoo, Sung Jin

    2017-03-28

    A minimal-approximation-based distributed adaptive consensus tracking approach is presented for strict-feedback multiagent systems with unknown heterogeneous nonlinearities and control directions under a directed network. Existing approximation-based consensus results for uncertain nonlinear multiagent systems in lower-triangular form have used multiple function approximators in each local controller to approximate unmatched nonlinearities of each follower. Thus, as the follower's order increases, the number of the approximators used in its local controller increases. However, the proposed approach employs only one function approximator to construct the local controller of each follower regardless of the order of the follower. The recursive design methodology using a new error transformation is derived for the proposed minimal-approximation-based design. Furthermore, a bounding lemma on parameters of Nussbaum functions is presented to handle the unknown control direction problem in the minimal-approximation-based distributed consensus tracking framework and the stability of the overall closed-loop system is rigorously analyzed in the Lyapunov sense.

  12. Rilpivirine versus etravirine validity in NNRTI-based treatment failure in Thailand.

    PubMed

    Teeranaipong, Phairote; Sirivichayakul, Sunee; Mekprasan, Suwanna; Ruxrungtham, Kiat; Putcharoen, Opass

    2014-01-01

    Etravirine (ETR) and rilpivirine (RPV) are second-generation non-nucleoside reverse transcriptase inhibitors (NNRTIs) for the treatment of HIV-1 infection. Etravirine is recommended for patients with virologic failure on a first-generation NNRTI-based regimen [1]. RPV has a profile similar to ETR, but this agent is approved for treatment-naïve patients [2]. In Thailand, ETR is approximately 45 times more expensive than RPV. We aimed to study the patterns of genotypic resistance and the feasibility of using RPV in patients with virologic failure on two common NNRTI-based regimens: efavirenz (EFV)- or nevirapine (NVP)-based regimens. Data from clinical samples with confirmed virologic failure during 2003-2010 were reviewed. We selected the samples from patients who failed an EFV- or NVP-based regimen. Resistance-associated mutations (RAMs) were determined using the IAS-USA Drug Resistance Mutations list. The DUET and Monogram scoring systems and the Stanford Genotypic Resistance Interpretation were applied to determine the susceptibility of ETR and RPV. A total of 2086 samples were analyzed: samples from 1482 patients with virologic failure on an NVP-based regimen (NVP group) and from 604 patients with virologic failure on an EFV-based regimen (EFV group). 95% of samples were HIV-1 CRF01_AE subtype. Approximately 80% of samples in each group had one to three NNRTI-RAMs and 20% had four to seven NNRTI-RAMs. The 181C mutation was the most common NVP-associated RAM (54.3% vs 14.7%, p<0.01), and the 103N mutation was the most common EFV-associated RAM (56.5% vs 19.1%, p<0.01). The calculated scores from all three scoring systems were concordant. In the NVP group, 165 (11.1%) and 161 (10.9%) patients were susceptible to ETR and RPV, respectively (p=0.81). In the EFV group, 195 (32.2%) and 191 (31.6%) patients were susceptible to ETR and RPV, respectively (p=0.81).
The proportions of viruses that remained susceptible to ETR and RPV in the EFV group were significantly higher than in the NVP group (ETR susceptibility 32.2% vs 11.1%, p<0.01; RPV susceptibility 31.6% vs 10.9%, p<0.01). RPV might be a cost-saving and reasonable second-line NNRTI for patients who failed EFV- or NVP-containing regimens, especially in resource-limited settings, because the two agents have comparable susceptibility as identified by genotyping. In our study, approximately 30% of patients who failed EFV-based regimens had viruses that remained susceptible to RPV.

  13. On the unreasonable effectiveness of the post-Newtonian approximation in gravitational physics

    PubMed Central

    Will, Clifford M.

    2011-01-01

    The post-Newtonian approximation is a method for solving Einstein’s field equations for physical systems in which motions are slow compared to the speed of light and where gravitational fields are weak. Yet it has proven to be remarkably effective in describing certain strong-field, fast-motion systems, including binary pulsars containing dense neutron stars and binary black hole systems inspiraling toward a final merger. The reasons for this effectiveness are largely unknown. When carried to high orders in the post-Newtonian sequence, predictions for the gravitational-wave signal from inspiraling compact binaries will play a key role in gravitational-wave detection by laser-interferometric observatories. PMID:21447714
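    As background (standard material, not specific to this paper), the post-Newtonian expansion is organized in powers of a small parameter that, for gravitationally bound systems, ties the slow-motion and weak-field assumptions together through the virial relation:

```latex
\epsilon \sim \left(\frac{v}{c}\right)^{2} \sim \frac{GM}{r c^{2}}
```

A correction of order $\epsilon^{n}$ beyond the Newtonian result is conventionally called an $n$PN term. The "unreasonable effectiveness" discussed above is that truncations of this series remain accurate even for systems, such as inspiraling compact binaries, where $\epsilon$ is no longer very small.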

  14. Preliminary characterization of a one-axis acoustic system. [acoustic levitation for space processing

    NASA Technical Reports Server (NTRS)

    Oran, W. A.; Reiss, D. A.; Berge, L. H.; Parker, H. W.

    1979-01-01

    The acoustic fields and levitation forces produced along the axis of a single-axis resonance system were measured. The system consisted of a St. Clair generator and a planar reflector. The levitation force was measured for bodies of various sizes and geometries (i.e., spheres, cylinders, and discs). The force was found to be roughly proportional to the volume of the body until the characteristic body radius reached approximately 2/k (k = wave number). The acoustic pressures along the axis were modeled using Huygens' principle and a method of imaging to approximate multiple reflections. The modeled pressures were found to be in reasonable agreement with those measured with a calibrated microphone.
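    The 2/k transition radius quoted above is easy to put in physical units. The sketch below assumes an illustrative 20 kHz resonator in room-temperature air; the paper's actual operating frequency is not given here, so these numbers only show the scale of the effect.

```python
import math

# Assumed operating point (illustrative, not from the paper):
FREQ_HZ = 20_000.0     # drive frequency of the resonator
SOUND_SPEED = 343.0    # speed of sound in air at ~20 C, m/s

k = 2 * math.pi * FREQ_HZ / SOUND_SPEED   # wave number, rad/m
r_transition = 2.0 / k                    # radius where force stops scaling with volume

print(f"k = {k:.1f} rad/m, transition radius ~ {r_transition * 1000:.1f} mm")
```

At 20 kHz in air this gives a transition radius of roughly 5.5 mm, i.e. bodies much smaller than this see a levitation force roughly proportional to their volume.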

  15. Multispecies lottery competition: a diffusion analysis

    USGS Publications Warehouse

    Hatfield, J.S.; Chesson, P.L.; Tuljapurkar, S.; Caswell, H.

    1997-01-01

    The lottery model is a stochastic competition model designed for space-limited communities of sedentary organisms. Examples of such communities include coral reef fishes, aquatic sessile organisms, and many plant communities. Explicit conditions for the coexistence of two species and the stationary distribution of the two-species model were determined previously using an approximation with a diffusion process. In this chapter, a diffusion approximation is presented for the multispecies model for communities of two or more species, and a stage-structured model is investigated. The stage-structured model would be more reasonable for communities of long-lived species such as trees in a forest in which recruitment and death rates depend on the age or stage of the individuals.
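    The lottery dynamics described above can be sketched in a few lines. This is a hedged illustration of the standard Chesson–Warner formulation, not the diffusion analysis of the chapter: space freed by adult deaths each generation is reallocated among species in proportion to their (fluctuating) recruitment. The death rates and lognormal recruitment parameters below are arbitrary assumptions.

```python
import random

def lottery_step(p, delta, beta):
    """One generation of the lottery model.
    p: current space fractions (sum to 1), delta: per-species adult death
    rates, beta: per-capita recruitment draws for this generation."""
    freed = sum(d * pi for d, pi in zip(delta, p))    # space opened by deaths
    tickets = [b * pi for b, pi in zip(beta, p)]      # lottery tickets per species
    total = sum(tickets)
    return [(1 - d) * pi + freed * t / total
            for d, pi, t in zip(delta, p, tickets)]

random.seed(1)
p = [0.5, 0.5]          # two species, equal initial shares
delta = [0.2, 0.2]      # overlapping generations (death rate < 1)
for _ in range(200):
    beta = [random.lognormvariate(0.0, 0.5) for _ in p]  # fluctuating recruitment
    p = lottery_step(p, delta, beta)
```

Because the freed space is exactly reallocated, the shares stay on the simplex; with overlapping generations and independent recruitment fluctuations, both species persist, which is the storage-effect coexistence the diffusion approximation quantifies.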

  16. Combining the modified Skyrme-like model and the local density approximation to determine the symmetry energy of nuclear matter

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ren, Zhongzhou; Xu, Chang

    2018-07-01

    Combining the modified Skyrme-like model and the local density approximation model, the slope parameter L of the symmetry energy is extracted from the properties of finite nuclei with an improved iterative method. The calculations of the iterative method are performed within the framework of spherical symmetry. By choosing 200 neutron-rich nuclei on 25 isotopic chains as candidates, the slope parameter is constrained to 50 MeV < L < 62 MeV. The validity of this method is examined against the properties of finite nuclei. Results show that reasonable descriptions of the properties of both finite nuclei and nuclear matter can be obtained together.
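    For context (standard definitions, not taken from this paper), the slope parameter L is the first-order coefficient in the expansion of the symmetry energy $S(\rho)$ about the saturation density $\rho_0$:

```latex
S(\rho) \approx J + L\,\frac{\rho - \rho_{0}}{3\rho_{0}} + \cdots,
\qquad
L = 3\rho_{0} \left.\frac{\partial S(\rho)}{\partial \rho}\right|_{\rho = \rho_{0}}
```

where $J = S(\rho_0)$ is the symmetry energy at saturation; the constraint 50 MeV < L < 62 MeV above bounds this derivative.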

  17. Hybrid diagnostic system: beacon-based exception analysis for multimissions - Livingstone integration

    NASA Technical Reports Server (NTRS)

    Park, Han G.; Cannon, Howard; Bajwa, Anupa; Mackey, Ryan; James, Mark; Maul, William

    2004-01-01

    This paper describes the initial integration of a hybrid reasoning system utilizing a continuous domain feature-based detector, Beacon-based Exceptions Analysis for Multimissions (BEAM), and a discrete domain model-based reasoner, Livingstone.

  18. Entomopathogenic nematodes in the European biocontrol market.

    PubMed

    Ehlers, R U

    2003-01-01

    In Europe, total revenues in the biocontrol market have reached approximately 200 million Euros. The sector with the highest turnover is the market for beneficial invertebrates, with a 55% share, followed by microbial agents with approximately 25%. Annual growth rates of up to 20% have been estimated. Besides microbial plant protection products that are currently in the process of re-registration, several microbial products have been registered or are in the process of registration, following EU directive 91/414. Entomopathogenic nematodes (EPN) are exceptionally safe biocontrol agents. To date they have been exempt from registration in most European countries, which is why SMEs have been able to offer economically viable nematode-based products. The development of technology for mass production in liquid media significantly reduced product costs and accelerated the introduction of nematode products in tree nurseries, ornamentals, strawberries, mushrooms, citrus and turf. Progress in storage and formulation technology has resulted in high-quality products that are more resistant to the environmental extremes occurring during transportation to the user. The cooperation between science, industry and extension within the EU COST Action 819 has supported the development of quality control methods. Today four companies produce EPN in liquid culture, offering 8 different nematode species. Problems with soil insects are increasing. Grubs, like Melolontha melolontha and other Scarabaeidae, cause damage in orchards and turf. Since the introduction of the Western Corn Rootworm Diabrotica virgifera into Serbia in 1992, this pest has spread across the Balkan region and has reached Italy, France and Austria. These soil insect pests are potential targets for EPN. The development of insecticide resistance has opened another sector for EPN. Novel adjuvants used to improve the formulation of EPN have enabled foliar application against Western Flower Thrips and Plutella xylostella. To reach these markets, the product costs for EPN will have to decrease further. One possibility for reducing the application costs associated with EPN is inoculative application, which produces long-term effects on pest populations.

  19. 29 CFR 1625.7 - Differentiations based on reasonable factors other than age.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... 1625.7 Section 1625.7 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION AGE DISCRIMINATION IN EMPLOYMENT ACT Interpretations § 1625.7 Differentiations based on reasonable... age is discriminatory unless the practice is justified by a “reasonable factor other than age.” An...

  20. 29 CFR 1625.7 - Differentiations based on reasonable factors other than age.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... 1625.7 Section 1625.7 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION AGE DISCRIMINATION IN EMPLOYMENT ACT Interpretations § 1625.7 Differentiations based on reasonable... age is discriminatory unless the practice is justified by a “reasonable factor other than age.” An...
