Sample records for difficult analytical problems

  1. Teaching Analytical Chemistry to Pharmacy Students: A Combined, Iterative Approach

    ERIC Educational Resources Information Center

    Masania, Jinit; Grootveld, Martin; Wilson, Philippe B.

    2018-01-01

    Analytical chemistry has often been a difficult subject to teach in a classroom or lecture-based context. Numerous strategies for overcoming the inherently practical-based difficulties have been suggested, each with differing pedagogical theories. Here, we present a combined approach to tackling the problem of teaching analytical chemistry, with…

  2. Educational Reform as a Dynamic System of Problems and Solutions: Towards an Analytic Instrument

    ERIC Educational Resources Information Center

    Luttenberg, Johan; Carpay, Thérèse; Veugelers, Wiel

    2013-01-01

    Large-scale educational reforms are difficult to realize and often fail. In the literature, the course of reform and problems associated with this are frequently discussed. The explanations and recommendations then provided are so diverse that it is difficult to gain a comprehensive overview of what factors are at play and how to take them into…

  3. Monitoring Affect States during Effortful Problem Solving Activities

    ERIC Educational Resources Information Center

    D'Mello, Sidney K.; Lehman, Blair; Person, Natalie

    2010-01-01

    We explored the affective states that students experienced during effortful problem solving activities. We conducted a study where 41 students solved difficult analytical reasoning problems from the Law School Admission Test. Students viewed videos of their faces and screen captures and judged their emotions from a set of 14 states (basic…

  4. The Problem Solving Studio: An Apprenticeship Environment for Aspiring Engineers

    ERIC Educational Resources Information Center

    Le Doux, Joseph M.; Waller, Alisha A.

    2016-01-01

    This paper describes the problem-solving studio (PSS) learning environment. PSS was designed to teach students how to solve difficult analytical engineering problems without resorting to rote memorization of algorithms, while at the same time developing their deep conceptual understanding of the course topics. There are several key features of…

  5. Synthesis of Feedback Controller for Chaotic Systems by Means of Evolutionary Techniques

    NASA Astrophysics Data System (ADS)

    Senkerik, Roman; Oplatkova, Zuzana; Zelinka, Ivan; Davendra, Donald; Jasek, Roman

    2011-06-01

    This research deals with the synthesis of control laws for three selected discrete chaotic systems by means of analytic programming. The novelty of the approach is that a tool for symbolic regression, analytic programming, is used for this kind of difficult problem. The paper describes analytic programming as well as the chaotic systems and the cost function used. For the experiments, the Self-Organizing Migrating Algorithm (SOMA) with analytic programming was used.
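
    The abstract does not reproduce the synthesized control law, but the flavor of discrete chaos control can be sketched with a far simpler, hand-derived feedback (a toy logistic map with proportional control switched on near the fixed point; the gain, window, and parameters below are illustrative assumptions, not taken from the paper):

```python
# Toy stand-in for chaos control (NOT the paper's analytic-programming
# synthesis): stabilize the unstable fixed point of the chaotic logistic
# map x_{n+1} = r*x*(1-x) with proportional feedback that is switched on
# only inside a small capture window around the fixed point.

def stabilize_logistic(r=3.8, x0=0.3, window=0.1, max_steps=100_000):
    xstar = 1.0 - 1.0 / r      # fixed point, unstable since |2 - r| > 1
    k = 2.0 - r                # gain chosen so the controlled map has
                               # zero derivative at xstar ("deadbeat")
    x = x0
    for n in range(max_steps):
        u = k * (xstar - x) if abs(x - xstar) < window else 0.0
        x = r * x * (1.0 - x) + u
        if abs(x - xstar) < 1e-12:
            return n, x        # captured by the window and converged
    return max_steps, x

n, x = stabilize_logistic()
print(n, abs(x - (1 - 1 / 3.8)))
```

    Inside the window the controlled error obeys e_{n+1} = -r e_n^2, so convergence is quadratic once the chaotic orbit wanders into the window.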

  6. Online Video Tutorials Increase Learning of Difficult Concepts in an Undergraduate Analytical Chemistry Course

    ERIC Educational Resources Information Center

    He, Yi; Swenson, Sandra; Lents, Nathan

    2012-01-01

    Educational technology has enhanced, even revolutionized, pedagogy in many areas of higher education. This study examines the incorporation of video tutorials as a supplement to learning in an undergraduate analytical chemistry course. The concepts and problems in which students faced difficulty were first identified by assessing students'…

  7. Limitless Analytic Elements

    NASA Astrophysics Data System (ADS)

    Strack, O. D. L.

    2018-02-01

    We present equations for new limitless analytic line elements. These elements possess a virtually unlimited number of degrees of freedom. We apply these new limitless analytic elements to head-specified boundaries and to problems with inhomogeneities in hydraulic conductivity. Applications of these new analytic elements to practical problems involving head-specified boundaries require the solution of a very large number of equations. To make the new elements useful in practice, an efficient iterative scheme is required. We present an improved version of the scheme presented by Bandilla et al. (2007), based on the application of Cauchy integrals. The limitless analytic elements are useful when modeling strings of elements, rivers for example, where local conditions are difficult to model, e.g., when a well is close to a river. The solution of such problems is facilitated by increasing the order of the elements to obtain a good solution. This makes it unnecessary to resort to dividing the element in question into many smaller elements to obtain a satisfactory solution.
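
    The superposition idea behind analytic elements can be sketched with the simplest possible elements, a uniform flow plus a single well in the complex-potential formulation (a minimal illustration with assumed units; the paper's limitless line elements are far more general):

```python
# Minimal analytic-element sketch: superpose a uniform flow and one well
# in the complex potential Omega(z).  Units and values are hypothetical.
import cmath

Q = 2.0            # well extraction rate (assumed)
U = 1.0            # uniform flow speed in the +x direction (assumed)
zw = 0.0 + 0.0j    # well location

def omega(z):
    """Complex potential: uniform flow plus a single well element."""
    return -U * z + (Q / (2 * cmath.pi)) * cmath.log(z - zw)

def w(z):
    """Complex discharge W = -dOmega/dz (analytic derivative)."""
    return U - Q / (2 * cmath.pi * (z - zw))

# Stagnation point: W(z) = 0  =>  z = zw + Q / (2*pi*U)
zs = zw + Q / (2 * cmath.pi * U)
print(abs(w(zs)))   # ~0: the analytic derivative vanishes there
```

    Each additional element is just another closed-form term added to `omega`, which is what makes superposition-based schemes like the one in the paper attractive.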

  8. Using Syntactic Patterns to Enhance Text Analytics

    ERIC Educational Resources Information Center

    Meyer, Bradley B.

    2017-01-01

    Large scale product and service reviews proliferate and are commonly found across the web. The ability to harvest, digest and analyze a large corpus of reviews from online websites is still however a difficult problem. This problem is referred to as "opinion mining." Opinion mining is an important area of research as advances in the…

  9. SOIL AND SEDIMENT SAMPLING METHODS

    EPA Science Inventory

    The EPA Office of Solid Waste and Emergency Response's (OSWER) Office of Superfund Remediation and Technology Innovation (OSRTI) needs innovative methods and techniques to solve new and difficult sampling and analytical problems found at the numerous Superfund sites throughout th...

  10. A Bridge between Two Important Problems in Optics and Electrostatics

    ERIC Educational Resources Information Center

    Capelli, R.; Pozzi, G.

    2008-01-01

    It is shown how the same physically appealing method can be applied to find analytic solutions for two difficult and apparently unrelated problems in optics and electrostatics. They are: (i) the diffraction of a plane wave at a perfectly conducting thin half-plane and (ii) the electrostatic field associated with a parallel array of stripes held at…

  11. The Shape of a Sausage: A Challenging Problem in the Calculus of Variations

    ERIC Educational Resources Information Center

    Deakin, Michael A. B.

    2010-01-01

    Many familiar household objects (such as sausages) involve the maximization of a volume under geometric constraints. A flexible but inextensible membrane bounds a volume which is to be filled to capacity. In the case of the sausage, a full analytic solution is here provided. Other related but more difficult problems seem to demand approximate…

  12. Levels of Simplification. The Use of Assumptions, Restrictions, and Constraints in Engineering Analysis.

    ERIC Educational Resources Information Center

    Whitaker, Stephen

    1988-01-01

    Describes the use of assumptions, restrictions, and constraints in solving difficult analytical problems in engineering. Uses the Navier-Stokes equations as examples to demonstrate use, derivations, advantages, and disadvantages of the technique. (RT)

  13. Approximate method for calculating a thickwalled cylinder with rigidly clamped ends

    NASA Astrophysics Data System (ADS)

    Andreev, Vladimir

    2018-03-01

    Numerous papers dealing with the calculations of cylindrical bodies [1-8 and others] have shown that analytical and numerical-analytical solutions for both homogeneous and inhomogeneous thick-walled shells can be obtained quite simply, using expansions in Fourier series of trigonometric functions, if the ends are hinged and movable (sliding support). It is much more difficult to solve the problem of calculating shells with built-in ends.

  14. Functional Analytic Psychotherapy Is a Framework for Implementing Evidence-Based Practices: The Example of Integrated Smoking Cessation and Depression Treatment

    ERIC Educational Resources Information Center

    Holman, Gareth; Kohlenberg, Robert J.; Tsai, Mavis; Haworth, Kevin; Jacobson, Emily; Liu, Sarah

    2012-01-01

    Depression and cigarette smoking are recurrent, interacting problems that co-occur at high rates and--especially when depression is chronic--are difficult to treat and associated with costly health consequences. In this paper we present an integrative therapeutic framework for concurrent treatment of these problems based on evidence-based…

  15. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    NASA Technical Reports Server (NTRS)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
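
    The likelihood-ratio approach for photon counting rests on Cash's statistic C = 2 Σ (m_i − n_i ln m_i), minimized in place of χ². A minimal sketch with made-up counts (for a constant-rate model the minimizer is provably the sample mean, which makes the result easy to check):

```python
# Sketch of fitting Poisson counts by minimizing the Cash statistic.
# The counts below are made up for illustration.
import math

counts = [3, 5, 2, 4, 6, 3, 4, 5]   # hypothetical photon counts per bin

def cash(model, n):
    """Cash fit statistic for Poisson data, up to a model-independent
    constant: C = 2 * sum(m_i - n_i * ln m_i)."""
    return 2.0 * sum(m - ni * math.log(m) for m, ni in zip(model, n))

# Fit a constant-rate model m_i = mu by scanning mu.  For this model,
# dC/dmu = 0 gives sum(1 - n_i/mu) = 0, i.e. mu = mean(n).
grid = [0.5 + 0.001 * k for k in range(8000)]
best = min(grid, key=lambda mu: cash([mu] * len(counts), counts))
print(best, sum(counts) / len(counts))   # both ~4.0
```

    For nested models, the difference in C between fits plays the role of the likelihood-ratio test statistic.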

  16. Comments on an Analytical Thermal Agglomeration for Problems with Surface Growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, N. E.

    2017-03-22

    Up until Dec 2016, the thermal agglomeration was very heuristic, and as such, difficult to define. The lack of predictability became problematic, and the current notes represent the first real attempt to systematize the specification of the agglomerated process parameters.

  17. Evaluation of seismic spatial interaction effects through an impact testing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, B.D.; Driesen, G.E.

    The consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects are analytically difficult to assess. Analytical solutions to impact problems are conservative and only available for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto targets that were connected to instruments. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.

  19. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among the various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells with an efficiency improvement of 3 to 4 times.
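
    The trade-off the abstract describes, exact analytical derivatives versus finite-difference estimates that carry truncation error and cost extra function evaluations, can be reproduced on any smooth objective (a toy function, not the MoF objective):

```python
# Analytical vs finite-difference gradients on a toy smooth objective
# (NOT the MoF objective; just illustrates the accuracy/cost point).

def f(x, y):
    return (x - 1.0) ** 2 + 10.0 * (y - x ** 2) ** 2

def grad_analytic(x, y):
    # exact partial derivatives, derived by hand
    dfdx = 2.0 * (x - 1.0) - 40.0 * x * (y - x ** 2)
    dfdy = 20.0 * (y - x ** 2)
    return dfdx, dfdy

def grad_fd(x, y, h=1e-6):
    # central differences: two extra function evaluations per component
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

ga = grad_analytic(0.3, 0.7)
gn = grad_fd(0.3, 0.7)
err = max(abs(a - b) for a, b in zip(ga, gn))
print(err)   # small but nonzero truncation/round-off error
```

    In an iterative optimizer this error compounds, which is the motivation the paper gives for deriving the analytical derivatives.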

  20. Measuring the Impact of Business Rules on Inventory Balancing

    DTIC Science & Technology

    2013-09-01

    The Navy ERP system enables inventory to be redistributed across sites to help maintain optimum inventory levels. Holding too much inventory is...not unique to the Navy. In fact, the complexity of this problem is only magnified for competitive firms that are hesitant to share sensitive data with...lateral transshipment problems makes finding an analytical solution extremely difficult. The strength of simulation models lies within their ability
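
    The pooling benefit that lateral transshipment provides, and that is so hard to capture analytically, is easy to demonstrate by simulation, which is the point the report makes. A seeded two-site sketch with hypothetical stock and demand numbers:

```python
# Two-site toy: with free, instantaneous lateral transshipment the sites
# behave like one pooled stock, so per-sample unmet demand can only
# shrink: max(0, D1+D2-S1-S2) <= max(0, D1-S1) + max(0, D2-S2).
# Stock levels and demand distribution are made up for illustration.
import random

random.seed(42)
S1 = S2 = 10                       # stock held at each site (assumed)
short_indep = short_pooled = 0
for _ in range(10_000):
    d1 = random.randint(0, 20)     # site demands (assumed distribution)
    d2 = random.randint(0, 20)
    short_indep += max(0, d1 - S1) + max(0, d2 - S2)
    short_pooled += max(0, d1 + d2 - S1 - S2)
print(short_pooled, short_indep)   # pooled total is never larger
```

    Real transshipment models add transfer costs, lead times, and business rules, which is exactly what makes them analytically intractable and simulation attractive.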

  1. Distinguishing PCB Isomeric Congeners with their Gas Chromatographic and Mass Spectrometric Ortho Effect using Comprehensive Gas Chromatography

    EPA Science Inventory

    The 209 polychlorinated biphenyl (PCB) congeners and associated nine isomeric groups (nine groups of PCBs with the same degree of chlorination) have been long recorded as high endocrine disrupting chemicals in the environment. Difficult analytical problems exist, in those frequen...

  2. GC/FT-IR ANALYSIS OF THE THERMALLY LABILE COMPOUND TRIS (2,3-DIBROMOPROPYL) PHOSPHATE

    EPA Science Inventory

    A fast and convenient GC method has been developed for a compound [tris(2,3-dibromopropyl)phosphate] that poses a difficult analytical problem for both GC (thermal instability/low volatility) and LC (not amenable to commonly available, sensitive detectors) analysis. his method em...

  3. Symmetric tridiagonal structure preserving finite element model updating problem for the quadratic model

    NASA Astrophysics Data System (ADS)

    Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath

    2018-07-01

    One of the most important yet difficult aspects of the Finite Element Model Updating Problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of this problem, papers dealing with structure preservation are almost nonexistent. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem is presented, along with the results of numerical experiments obtained both from the analytical expressions and from an appropriate numerical optimization algorithm. The results of the numerical experiments support the validity of the proposed method.
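
    The flavor of structure preservation can be shown with a much simpler relative of the paper's problem: the Frobenius-nearest symmetric tridiagonal matrix to a given matrix (symmetrize, then zero everything outside the band; a hedged illustration, not the paper's constrained updating solution):

```python
# Frobenius-nearest symmetric tridiagonal approximation of a matrix:
# symmetrize, then zero every entry outside the tridiagonal band.  Both
# steps are orthogonal projections onto subspaces and they commute, so
# their composition projects onto the intersection (symmetric tridiagonal).

def nearest_sym_tridiag(A):
    n = len(A)
    S = [[0.5 * (A[i][j] + A[j][i]) for j in range(n)] for i in range(n)]
    return [[S[i][j] if abs(i - j) <= 1 else 0.0 for j in range(n)]
            for i in range(n)]

A = [[4.0, 1.0, 7.0],
     [3.0, 5.0, 2.0],
     [9.0, 0.0, 6.0]]
T = nearest_sym_tridiag(A)
print(T)
```

    The paper's problem is harder because the update must also reproduce measured modal data, not merely stay close to the original matrix.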

  4. Cognitive Load Mediates the Effect of Emotion on Analytical Thinking.

    PubMed

    Trémolière, Bastien; Gagnon, Marie-Ève; Blanchette, Isabelle

    2016-11-01

    Although the detrimental effect of emotion on reasoning has been evidenced many times, the cognitive mechanism underlying this effect remains unclear. In the present paper, we explore the cognitive load hypothesis as a potential explanation. In an experiment, participants solved syllogistic reasoning problems with either neutral or emotional contents. Participants were also presented with a secondary task whose difficult version requires mobilizing cognitive resources to be solved correctly. Participants performed worse overall and took longer on emotional problems than on neutral ones. Performance on the difficult version of the secondary task was poorer when participants were reasoning about emotional, as compared to neutral, contents, consistent with the idea that processing emotion requires more cognitive resources. Taken together, the findings provide evidence that the deleterious effect of emotion on reasoning is mediated by cognitive load.

  5. ION COMPOSITION ELUCIDATION (ICE): A HIGH RESOLUTION MASS SPECTROMETRIC TECHNIQUE FOR CHARACTERIZATION AND IDENTIFICATION OF ORGANIC COMPOUNDS

    EPA Science Inventory

    Identifying compounds found in the environment without knowledge of their origin is a very difficult analytical problem. Comparison of the low resolution mass spectrum of a compound with those in the NIST or Wiley mass spectral libraries can provide a tentative identification whe...

  6. Explicit solution techniques for impact with contact constraints

    NASA Technical Reports Server (NTRS)

    Mccarty, Robert E.

    1993-01-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  7. Explicit solution techniques for impact with contact constraints

    NASA Astrophysics Data System (ADS)

    McCarty, Robert E.

    1993-08-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  8. Using the AHP in a Workshop Setting to Elicit and Prioritize Fire Research Needs

    Treesearch

    Daniel L. Schmoldt; David L. Peterson

    1997-01-01

    The benefits of convening a group of knowledgeable specialists together in a workshop setting to tackle a difficult problem can often be offset by an over-abundance of unfocused and rambling discussion and by counterproductive group dynamics. In light of this workshop paradox, we have created a generic workshop framework based on the analytic hierarchy process, that...

  9. The Development of Proofs in Analytical Mathematics for Undergraduate Students

    NASA Astrophysics Data System (ADS)

    Ali, Maselan; Sufahani, Suliadi; Hasim, Nurnazifa; Saifullah Rusiman, Mohd; Roslan, Rozaini; Mohamad, Mahathir; Khalid, Kamil

    2018-04-01

    Proofs in analytical mathematics are essential parts of mathematics, yet difficult to learn because their underlying concepts are not visible. This research consists of problems involving logic and proofs. In this study, a short overview was provided on how proofs in analytical mathematics were used by university students. From the results obtained, excellent students obtained better scores compared to average and poor students. The research instruments used in this study consisted of two parts: a test and an interview. In this way, an analysis of students’ actual performances can be obtained. The result of this study showed that the less able students have fragile conceptual and cognitive linkages, whereas the more able students use their strong conceptual linkages to produce effective solutions.

  10. The Emanuel Miller Memorial Lecture 2006: Adoption as Intervention. Meta-Analytic Evidence for Massive Catch-Up and Plasticity in Physical, Socio-Emotional, and Cognitive Development

    ERIC Educational Resources Information Center

    Van IJzendoorn, Marinus H.; Juffer, Femmie

    2006-01-01

    Background: Adopted children have been said to be difficult children, scarred by their past experiences in maltreating families or neglecting orphanages, or by genetic or pre- and perinatal problems. Is (domestic or international) adoption an effective intervention in the developmental domains of physical growth, attachment security, cognitive…

  11. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
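
    With a quadratic wire-length cost, the 1-D layout problem reduces to minimizing x^T L x over zero-mean, unit-norm layouts, whose exact solution is the second eigenvector of the connectivity Laplacian (a standard spectral result; the five-neuron connectivity below is made up):

```python
# Exact solution of a tiny quadratic-wiring layout problem.
import numpy as np

# Toy connectivity: 5 "neurons" in a chain plus one shortcut (made up).
W = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]:
    W[i, j] = W[j, i] = 1.0

# Wiring cost sum_ij W_ij (x_i - x_j)^2 = 2 x^T L x with the graph
# Laplacian L.  Constraining mean(x)=0, |x|=1 rules out the trivial
# constant layout; the minimizer is the second ("Fiedler") eigenvector.
L = np.diag(W.sum(axis=1)) - W
vals, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
x = vecs[:, 1]

cost = sum(W[i][j] * (x[i] - x[j]) ** 2 for i in range(5) for j in range(5))
print(cost, 2 * vals[1])             # equal: cost = 2 * lambda_2
```

    The eigenvector ordering of neurons is the exact analytical layout for this cost; the paper's contribution is showing that such solutions approximate real brain layouts.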

  12. Linear complementarity formulation for 3D frictional sliding problems

    USGS Publications Warehouse

    Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc

    2012-01-01

    Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
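
    In this formulation the contact conditions take the standard linear complementarity (LCP) form: find z ≥ 0 with w = Mz + q ≥ 0 and zᵀw = 0. A projected Gauss-Seidel sketch on a tiny symmetric positive-definite example (illustrative only; the paper's fault models are much larger and couple opening with frictional sliding):

```python
# Projected Gauss-Seidel for a tiny LCP:  w = M z + q,  z >= 0,  w >= 0,
# z^T w = 0.  M and q below are made up; PGS converges for symmetric
# positive-definite M.

def pgs_lcp(M, q, iters=200):
    n = len(q)
    z = [0.0] * n
    for _ in range(n * 0 + iters):           # fixed sweep count
        for i in range(n):
            # residual from all other unknowns, then clamp at zero
            r = q[i] + sum(M[i][j] * z[j] for j in range(n) if j != i)
            z[i] = max(0.0, -r / M[i][i])
    return z

M = [[2.0, 1.0], [1.0, 2.0]]                 # symmetric positive definite
q = [-1.0, -1.0]
z = pgs_lcp(M, q)
w = [q[i] + sum(M[i][j] * z[j] for j in range(2)) for i in range(2)]
print(z, w)   # z ~ [1/3, 1/3], w ~ [0, 0]: complementarity holds
```

    The clamp `max(0.0, ...)` is what encodes the one-sided (no-interpenetration, no-tension) character of frictional contact.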

  13. Belief bias during reasoning among religious believers and skeptics.

    PubMed

    Pennycook, Gordon; Cheyne, James Allan; Koehler, Derek J; Fugelsang, Jonathan A

    2013-08-01

    We provide evidence that religious skeptics, as compared to believers, are both more reflective and effective in logical reasoning tasks. While recent studies have reported a negative association between an analytic cognitive style and religiosity, they focused exclusively on accuracy, making it difficult to specify potential underlying cognitive mechanisms. The present study extends the previous research by assessing both performance and response times on quintessential logical reasoning problems (syllogisms). Those reporting more religious skepticism made fewer reasoning errors than did believers. This finding remained significant after controlling for general cognitive ability, time spent on the problems, and various demographic variables. Crucial for the purpose of exploring underlying mechanisms, response times indicated that skeptics also spent more time reasoning than did believers. This novel finding suggests a possible role of response slowing during analytic problem solving as a component of cognitive style that promotes overriding intuitive first impressions. Implications for using additional processing measures, such as response time, to investigate individual differences in cognitive style are discussed.

  14. Analytical stability criteria for the Caledonian Symmetric Four and Five Body Problems

    NASA Astrophysics Data System (ADS)

    Steves, Bonnie; Shoaib Afridi, Mohammad; Sweatman, Winston

    2017-06-01

    Analytical studies of the stability of three or more body gravitational systems are difficult because of the greater number of variables involved with the increasing number of bodies and the limitation of 10 integrals that exist in the gravitational n-body problem. Utilisation of symmetries or the neglecting of the masses of some of the bodies compared to others can simplify the dynamical problem and enable global analytical stability solutions to be derived. These symmetric and restricted few body systems with their analytical stability criterion can then provide useful information on the stability of the general few body system when near symmetry or the restricted situation. Even with symmetrical reductions, analytical stability derivations for four and five body problems are rare. In this paper, we develop an analytical stability criterion for the Caledonian Symmetric Five Body Problem (CS5BP), a dynamically symmetrical planar problem with two pairs of equal masses and a fifth mass located at the centre of mass. Sundman’s inequality is applied to derive boundary surfaces to the allowed real motion of the system. This enables the derivation of a stability criterion valid for all time for the hierarchical stability of the CS5BP and its subset the Caledonian Symmetric Four Body Problem (CSFBP), where the central mass is taken to be equal to zero. We show that the hierarchical stability depends solely on the Szebehely constant C0, which is a function of the total energy H and angular momentum c. The critical value Ccrit at which the system becomes hierarchically stable for all time depends only on the two mass ratios of the symmetric five body system. We then explore the effect on the stability of the whole system of adding an increasingly massive central body. It is shown both analytically and numerically that all CS5BPs and CSFBPs of different mass ratios are hierarchically stable if C0 > 0.0659 and C0 > 0.0465, respectively.
The Caledonian Symmetric Four and Five Body gravitational models are relevant to the study of the stability and evolution of symmetric quadruple/quintuple stellar clusters and symmetric exoplanetary systems of two planets orbiting a binary/triplet of stars.

  15. The inverse problem of sensing the mass and force induced by an adsorbate on a beam nanomechanical resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yun; Zhang, Yin

    2016-06-08

    The mass sensing superiority of a micro/nanomechanical resonator sensor over conventional mass spectrometry has been, or at least is being, firmly established. Because the sensing mechanism of a mechanical resonator sensor is the shift of resonant frequencies, linking the shifts of resonant frequencies with the material properties of an analyte formulates an inverse problem. Besides the analyte/adsorbate mass, many other factors, such as position and axial force, can also cause shifts of the resonant frequencies. The in-situ measurement of the adsorbate position and axial force is extremely difficult, if not impossible, especially when an adsorbate is as small as a molecule or an atom, and extra instruments are also required. In this study, an inverse problem of using three resonant frequencies to determine the mass, position and axial force is formulated and solved. The accuracy of the inverse problem solving method is demonstrated, and how the method can be used in the real application of a nanomechanical resonator is also discussed. Solving the inverse problem helps the development and application of mechanical resonator sensors in two ways: it reduces the need for extra experimental equipment and achieves better mass sensing by accounting for more factors.
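
    The structure of such an inverse problem can be sketched with a simplified two-unknown version: recovering mass and position (omitting the axial force) from two relative frequency shifts via Rayleigh's quotient, with assumed sinusoidal mode shapes and made-up numbers, not the paper's beam model:

```python
# Toy inverse problem: recover an adsorbate's mass ratio and position on a
# simply supported beam from the relative shifts of two resonant modes.
# Rayleigh's quotient gives, for a small point mass at position x in
# (0, 0.5) with assumed mode shapes phi_n(x) = sin(n*pi*x):
#   s_n = |df_n / f_n| ~= 0.5 * (m/M) * sin(n*pi*x)**2
# The paper also recovers an axial force from a third frequency; that
# third unknown is omitted here to keep the sketch closed-form.
import math

def forward(m_ratio, x):
    return tuple(0.5 * m_ratio * math.sin(n * math.pi * x) ** 2
                 for n in (1, 2))

def inverse(s1, s2):
    rho = s1 / s2                    # the ratio eliminates the mass
    # rho = 1 / (4 cos^2(pi x))  =>  closed form for the position
    x = math.acos(1.0 / (2.0 * math.sqrt(rho))) / math.pi
    m_ratio = 2.0 * s1 / math.sin(math.pi * x) ** 2
    return m_ratio, x

s1, s2 = forward(0.01, 0.3)          # synthetic "measured" shifts
print(inverse(s1, s2))               # recovers ~ (0.01, 0.3)
```

    With three unknowns (mass, position, axial force) no such closed form exists, which is why the paper formulates and solves the full inverse problem numerically.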

  16. Electronic tongue: An analytical gustatory tool

    PubMed Central

    Latha, Rewanthwar Swathi; Lakshmi, P. K.

    2012-01-01

    Taste is an important organoleptic property governing acceptance of products for administration through the mouth. But the majority of drugs available are bitter in taste. For patient acceptability and compliance, bitter-tasting drugs are masked by adding several flavoring agents. Thus, taste assessment is one important quality control parameter for evaluating taste-masked formulations. The primary method for the taste measurement of drug substances and formulations is by human panelists. The use of sensory panelists in industry is very difficult and problematic, owing to the potential toxicity of drugs and the subjectivity of taste panelists; recruiting panelists is hard, and motivation and panel maintenance are significant challenges when working with unpleasant products. Furthermore, Food and Drug Administration (FDA)-unapproved molecules cannot be tested. Therefore, an analytical taste-sensing multichannel sensory system called the electronic tongue (e-tongue or artificial tongue), which can assess taste, has been replacing sensory panelists. Thus, the e-tongue offers benefits such as reducing reliance on human panels. The present review focuses on the electrochemical concepts in instrumentation, performance qualification of the e-tongue, and applications in various fields. PMID:22470887

  17. Building analytical three-field cosmological models

    NASA Astrophysics Data System (ADS)

    Santos, J. R. L.; Moraes, P. H. R. S.; Ferreira, D. A.; Neta, D. C. Vilar

    2018-02-01

    A difficult task to deal with is the analytical treatment of models composed of three real scalar fields, as their equations of motion are in general coupled and hard to integrate. In order to overcome this problem we introduce a methodology to construct three-field models based on the so-called "extension method". The fundamental idea of the procedure is to combine three one-field systems in a non-trivial way, to construct an effective three scalar field model. An interesting scenario where the method can be implemented is with inflationary models, where the Einstein-Hilbert Lagrangian is coupled with the scalar field Lagrangian. We exemplify how a new model constructed from our method can lead to non-trivial behaviors for cosmological parameters.

  18. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
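    The kind of model described above, an integrate-and-fire neuron with a spike-triggered adaptation current, can be illustrated with a minimal Euler-Maruyama simulation. This is a sketch with illustrative parameter values, not the authors' model or code:

```python
import numpy as np

def simulate_adaptive_lif(T=10.0, dt=1e-4, mu=1.5, D=0.01,
                          tau_a=0.5, delta_a=0.3, seed=0):
    """Leaky integrate-and-fire neuron with a spike-triggered
    adaptation current a(t):
        dv/dt = mu - v - a + sqrt(2*D) * xi(t)   (white noise xi)
        da/dt = -a / tau_a,  a -> a + delta_a at each spike
    Threshold v = 1, reset v = 0.  Returns the membrane-potential
    trace and the mean firing rate."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v, a, spikes = 0.0, 0.0, 0
    vs = np.empty(n)
    for i in range(n):
        v += (mu - v - a) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        a += -a / tau_a * dt
        if v >= 1.0:          # spike: reset and increment adaptation
            v = 0.0
            a += delta_a
            spikes += 1
        vs[i] = v
    return vs, spikes / T

vs, rate = simulate_adaptive_lif()
```

    With tonic drive (mu above threshold) the neuron fires repeatedly while the adaptation variable accumulates; histogramming `vs` approximates numerically the kind of marginal membrane-potential density the paper derives analytically.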

  19. Dynamic behaviour of thin composite plates for different boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprintu, Iuliana, E-mail: sprintui@yahoo.com, E-mail: rotaruconstantin@yahoo.com; Rotaru, Constantin, E-mail: sprintui@yahoo.com, E-mail: rotaruconstantin@yahoo.com

    2014-12-10

    In the context of composite materials technology, which is increasingly present in industry, this article covers a topic of great theoretical and practical importance. Given the complex design of fiber-reinforced materials and their heterogeneous nature, mathematical modeling of the mechanical response under different external stresses is very difficult to address in the absence of simplifying assumptions. In most structural applications, composite structures can be idealized as beams, plates, or shells. The analysis is reduced from a three-dimensional elasticity problem to a one- or two-dimensional problem, based on certain simplifying assumptions that can be made because the structure is thin. This paper aims to validate a mathematical model illustrating how thin rectangular orthotropic plates respond to actual loads. Thus, from the theory of thin plates, new analytical solutions are proposed for orthotropic rectangular plates with different boundary conditions. The proposed analytical solutions are considered both for solving the governing equation of orthotropic rectangular plates and for modal analysis.

  20. Insight and analysis problem solving in microbes to machines.

    PubMed

    Clark, Kevin B

    2015-11-01

    A key feature for obtaining solutions to difficult problems, insight is oftentimes vaguely regarded as a special discontinuous intellectual process and/or a cognitive restructuring of problem representation or goal approach. However, this nearly century-old state of the art, devised by the Gestalt tradition to explain the non-analytical or non-trial-and-error, goal-seeking aptitude of primate mentality, tends to neglect the problem-solving capabilities of lower animal phyla, kingdoms other than Animalia, and advancing smart computational technologies built from biological, artificial, and composite media. In an attempt to provide an inclusive, precise definition of insight, two major criteria of insight, discontinuous processing and problem restructuring, are here reframed using the terminology and statistical mechanical properties of computational complexity classes. Discontinuous processing becomes abrupt state transitions in algorithmic/heuristic outcomes or in the types of algorithms/heuristics executed by agents using classical and/or quantum computational models. And problem restructuring becomes combinatorial reorganization of resources, problem-type substitution, and/or exchange of computational models. With insight bounded by computational complexity, humans, ciliated protozoa, and complex technological networks, for example, show insight when restructuring time requirements, combinatorial complexity, and problem type to solve polynomial and nondeterministic polynomial decision problems. Similar effects are expected from other problem types, supporting the idea that insight might be an epiphenomenon of analytical problem solving and, consequently, of a larger information processing framework. Thus, this computational complexity definition of insight improves the power, external and internal validity, and reliability of operational parameters with which to classify, investigate, and produce the phenomenon for computational agents ranging from microbes to man-made devices.
Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Evaluation of Visual Analytics Environments: The Road to the Visual Analytics Science and Technology Challenge Evaluation Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Plaisant, Catherine; Whiting, Mark A.

    The evaluation of visual analytics environments was identified in Illuminating the Path [Thomas 2005] as a critical aspect of moving research into practice. For a thorough understanding of the utility of the systems available, evaluation involves assessing not only the visualizations, interactions, or data processing algorithms themselves, but also the complex processes that a tool is meant to support (such as exploratory data analysis and reasoning, communication through visualization, or collaborative data analysis [Lam 2012; Carpendale 2007]). Researchers and practitioners in the field have long identified many of the challenges faced when planning, conducting, and executing an evaluation of a visualization tool or system [Plaisant 2004]. Evaluation is needed to verify that algorithms and software systems work correctly and that they represent improvements over the current infrastructure. Additionally, to effectively transfer new software into a working environment, it is necessary to ensure that the software has utility for the end-users and that it can be incorporated into the end-users' infrastructure and work practices. Evaluation test beds require datasets, tasks, metrics, and evaluation methodologies. As noted in [Thomas 2005], it is difficult and expensive for any one researcher to set up an evaluation test bed, so in many cases evaluation is set up for communities of researchers or for various research projects or programs. Examples of successful community evaluations can be found in [Chinchor 1993; Voorhees 2007; FRGC 2012]. As visual analytics environments are intended to facilitate the work of human analysts, one aspect of evaluation needs to focus on the utility of the software to the end-user. This requires representative users, representative tasks, and metrics that measure the utility to the end-user. This is even more difficult because one aspect of the test methodology is now access to representative end-users who can participate in the evaluation.
    In many cases the sensitive nature of data and tasks, and difficult access to busy analysts, put even more of a burden on researchers to complete this type of evaluation. User-centered design goes beyond evaluation and starts with the user [Beyer 1997; Shneiderman 2009]. Having some knowledge of the type of data, tasks, and work practices helps researchers and developers know the correct paths to pursue in their work. When access to the end-users is problematic at best and impossible at worst, user-centered design becomes difficult. Researchers are unlikely to work on the type of problems faced by inaccessible users. Commercial vendors have difficulties evaluating and improving their products when they cannot observe real users working with them. In well-established fields such as web site design or office software design, user-interface guidelines have been developed based on the results of empirical studies or the experience of experts. Guidelines can speed up the design process and replace some of the need for observation of actual users [heuristics review references]. In 2006, when the visual analytics community was initially getting organized, no such guidelines existed. We were therefore faced with the problem of developing an evaluation framework for the field of visual analytics that would provide representative situations and datasets, representative tasks and utility metrics, and finally a test methodology which would include a surrogate for representative users, increase interest in conducting research in the field, and provide sufficient feedback to researchers so that they could improve their systems.

  2. Potential-splitting approach applied to the Temkin-Poet model for electron scattering off the hydrogen atom and the helium ion

    NASA Astrophysics Data System (ADS)

    Yarevsky, E.; Yakovlev, S. L.; Larson, Å; Elander, N.

    2015-06-01

    The study of scattering processes in few-body systems is a difficult problem, especially if long-range interactions are involved. In order to solve such problems, we develop here a potential-splitting approach for three-body systems. This approach is based on splitting the reaction potential into a finite-range core part and a long-range tail part. The solution to the Schrödinger equation for the long-range tail Hamiltonian is found analytically and used as an incoming wave in the three-body scattering problem. This reformulation makes the scattering problem suitable for treatment by the exterior complex scaling technique, in the sense that after the complex dilation the problem is reduced to a boundary value problem with zero boundary conditions. We illustrate the method with calculations of electron scattering off the hydrogen atom and the positive helium ion in the framework of the Temkin-Poet model.

  3. Green's function of radial inhomogeneous spheres excited by internal sources.

    PubMed

    Zouros, Grigorios P; Kokkorakis, Gerassimos C

    2011-01-01

    The Green's function in the interior of penetrable bodies with inhomogeneous compressibility, excited by sources placed inside them, is evaluated through a Schwinger-Lippmann volume integral equation. In the case of a radially inhomogeneous sphere, the radial part of the unknown Green's function can be expanded in a double Dini's series, which allows analytical evaluation of the cumbersome integrals involved. The simple case treated here can be extended to more difficult situations involving inhomogeneous density, as well as to the corresponding electromagnetic or elastic problem. Finally, numerical results are given for various inhomogeneous compressibility distributions.

  4. Unstructured grids on SIMD torus machines

    NASA Technical Reports Server (NTRS)

    Bjorstad, Petter E.; Schreiber, Robert

    1994-01-01

    Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.

  5. A Combined Experimental and Analytical Modeling Approach to Understanding Friction Stir Welding

    NASA Technical Reports Server (NTRS)

    Nunes, Arthur C., Jr.; Stewart, Michael B.; Adams, Glynn P.; Romine, Peter

    1998-01-01

    In the Friction Stir Welding (FSW) process, a rotating pin tool joins the sides of a seam by stirring them together. This solid-state welding process avoids the problems with melting and hot-shortness presented by some difficult-to-weld high-performance light alloys. The details of the plastic flow during the process are not well understood and are currently a subject of research. Two candidate models of the FSW process, the Mixed Zone (MZ) and the Single Slip Surface (S3) model, are presented and their predictions compared to experimental data.

  6. Integrating laboratory robots with analytical instruments--must it really be so difficult?

    PubMed

    Kramer, G W

    1990-09-01

    Creating a reliable system from discrete laboratory instruments is often a task fraught with difficulties. While many modern analytical instruments are marvels of detection and data handling, attempts to create automated analytical systems incorporating such instruments are often frustrated by their human-oriented control structures and their egocentricity. The laboratory robot, while fully susceptible to these problems, extends such compatibility issues to the physical dimensions involving sample interchange, manipulation, and event timing. The workcell concept was conceived to describe the procedure and equipment necessary to carry out a single task during sample preparation. This notion can be extended to organize all operations in an automated system. Each workcell, no matter how complex its local repertoire of functions, must be minimally capable of accepting information (commands, data), returning information on demand (status, results), and being started, stopped, and reset by a higher level device. Even the system controller should have a mode where it can be directed by instructions from a higher level.
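    The minimal workcell contract described above (accept commands and data, return status and results on demand, and be started, stopped, and reset by a higher-level device) can be sketched as a small interface. The class and method names below are hypothetical illustrations, not from the article:

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()
    STOPPED = auto()

class Workcell:
    """Hypothetical sketch of the minimal workcell capabilities:
    accept information, return information on demand, and respond
    to start/stop/reset from a higher-level controller."""

    def __init__(self, name):
        self.name = name
        self.state = State.IDLE
        self.result = None

    def send(self, command, data=None):   # accept commands/data
        if command == "start":
            self.state = State.RUNNING
        elif command == "stop":
            self.state = State.STOPPED
        elif command == "reset":
            self.state = State.IDLE
            self.result = None

    def status(self):                     # report status/results on demand
        return {"name": self.name, "state": self.state.name,
                "result": self.result}

cell = Workcell("sample-prep")
cell.send("start")
print(cell.status()["state"])  # RUNNING
```

    Because every workcell, however complex internally, exposes the same three capabilities, a system controller can compose workcells uniformly, and can itself expose the same interface to a device one level up, as the abstract suggests.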

  7. Laser ablation of iron-rich black films from exposed granite surfaces

    NASA Astrophysics Data System (ADS)

    Delgado Rodrigues, J.; Costa, D.; Mascalchi, M.; Osticioli, I.; Siano, S.

    2014-10-01

    Here, we investigated the potential of laser removal of iron-rich dark films from weathered granite substrates, which represents a very difficult conservation problem because of the polymineralic nature of the stone and its complex deterioration mechanisms. As often occurs, biotite was the most critical component because of its high optical absorption, low melting temperature, and pronounced cleavage, which required careful control of the photothermal and photomechanical effects to optimize the selective ablation of the unwanted dark film. Nd:YAG lasers of different pulse durations and wavelengths were tested, and optimal irradiation conditions were determined through thorough analytical characterisations. Besides addressing a specific conservation problem, the present work provides information of general value for the laser uncovering of encrusted granite.

  8. A statistical theory for sound radiation and reflection from a duct

    NASA Technical Reports Server (NTRS)

    Cho, Y. C.

    1979-01-01

    A new analytical method is introduced for the study of sound radiation and reflection from the open end of a duct. The sound is treated as an aggregation of quasiparticles (phonons), whose motion is described in terms of a statistical distribution derived from the classical wave theory. The results are in good agreement with solutions obtained using the Wiener-Hopf technique where the latter is applicable, but the new method is simple and provides a straightforward physical interpretation of the problem. Furthermore, it is applicable to problems involving a duct in which modes are difficult to determine or cannot be defined at all, whereas the Wiener-Hopf technique is not.

  9. Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.

    PubMed

    Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C

    2015-05-13

    This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability of twirling around objects as a real octopus arm does. Despite the simple design, the soft conical shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network learning the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with a degree of accuracy of 8% relative average error with respect to the total arm length.
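    The idea of learning an inverse kinematics map with a feed-forward network can be sketched on a toy problem. The forward model, network size, and training settings below are purely illustrative stand-ins for the cable-driven arm, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(f):
    """Hypothetical smooth forward model standing in for the real
    cable-driven arm: maps two cable forces to a 2-D tip position."""
    return np.column_stack([np.sin(f[:, 0]) + 0.3 * f[:, 1],
                            np.cos(f[:, 1]) - 0.3 * f[:, 0]])

# Training data: sampled forces and the tip positions they produce.
F = rng.uniform(-1, 1, size=(500, 2))
X = forward(F)

# One-hidden-layer network approximating the inverse map X -> F.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)

losses, lr = [], 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    P = H @ W2 + b2                   # predicted cable forces
    err = P - F
    losses.append(float((err ** 2).mean()))
    # Full-batch gradient descent on the mean squared error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

    After training, evaluating the network at a desired tip position yields cable forces directly, avoiding the iterative Jacobian-based schemes the abstract notes are unavailable for this manipulator.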

  10. Assessment of relative accuracy in the determination of organic matter concentrations in aquatic systems

    USGS Publications Warehouse

    Aiken, G.; Kaplan, L.A.; Weishaar, J.

    2002-01-01

    Accurate determinations of total (TOC), dissolved (DOC) and particulate (POC) organic carbon concentrations are critical for understanding the geochemical, environmental, and ecological roles of aquatic organic matter. Of particular significance for the drinking water industry, TOC measurements are the basis for compliance with US EPA regulations. The results of an interlaboratory comparison designed to identify problems associated with the determination of organic matter concentrations in drinking water supplies are presented. The study involved 31 laboratories and a variety of commercially available analytical instruments. All participating laboratories performed well on samples of potassium hydrogen phthalate (KHP), a compound commonly used as a standard in carbon analysis. However, problems associated with the oxidation of difficult to oxidize compounds, such as dodecylbenzene sulfonic acid and caffeine, were noted. Humic substances posed fewer problems for analysts. Particulate organic matter (POM) in the form of polystyrene beads, freeze-dried bacteria and pulverized leaf material were the most difficult for all analysts, with a wide range of performances reported. The POM results indicate that the methods surveyed in this study are inappropriate for the accurate determination of POC and TOC concentration. Finally, several analysts had difficulty in efficiently separating inorganic carbon from KHP solutions, thereby biasing DOC results.

  11. Combined structures-controls optimization of lattice trusses

    NASA Technical Reports Server (NTRS)

    Balakrishnan, A. V.

    1991-01-01

    The role that distributed parameter models can play in CSI is demonstrated, in particular in the combined structures-controls optimization problems of importance in preliminary design. Closed-form solutions can be obtained for performance criteria such as rms attitude error, making analytical solutions of the optimization problem possible. This is in contrast to the need for numerical computer solutions involving the inversion of large matrices in traditional finite element model (FEM) use. Another advantage of the analytic solution is that it can provide much-needed insight into phenomena that can otherwise be obscured or difficult to discern from numerical computer results. As a compromise in level of complexity between a toy lab model and a real space structure, the lattice truss used in the EPS (Earth Pointing Satellite) was chosen. The optimization problem chosen is a generic one: minimizing the structure mass subject to a specified stability margin and a specified upper bound on the rms attitude error, using a co-located controller and sensors. A standard FEM treating each bar as a truss element is used, while the continuum model is an anisotropic Timoshenko beam model. Performance criteria are derived for each model; for the distributed parameter model, explicit closed-form solutions were obtained. Numerical results obtained with the two models show complete agreement.

  12. A framework for parallelized efficient global optimization with application to vehicle crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Hamza, Karim; Shalaby, Mohamed

    2014-09-01

    This article presents a framework for simulation-based design optimization of computationally expensive problems, where economizing the generation of sample designs is highly desirable. One popular approach for such problems is efficient global optimization (EGO), where an initial set of design samples is used to construct a kriging model, which is then used to generate new 'infill' sample designs at regions of the search space where there is high expectancy of improvement. This article attempts to address one of the limitations of EGO, where generation of infill samples can become a difficult optimization problem in its own right, as well as allow the generation of multiple samples at a time in order to take advantage of parallel computing in the evaluation of the new samples. The proposed approach is tested on analytical functions, and then applied to the vehicle crashworthiness design of a full Geo Metro model undergoing frontal crash conditions.
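    The infill step of EGO typically maximizes the expected improvement predicted by the kriging model. A minimal sketch of that standard criterion (not the article's parallelized variant) is:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement (for minimization) at a candidate point where
    the kriging model predicts mean `mu` and standard deviation `sigma`,
    given the best objective value observed so far, `f_best`."""
    if sigma <= 0.0:
        return 0.0                       # no predictive uncertainty
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf
```

    Infill samples are placed where this quantity is largest, balancing low predicted mean against high model uncertainty; generating several such samples at once for parallel evaluation, as the article proposes, requires extending this single-point criterion.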

  13. Viscoelastic study of an adhesively bonded joint

    NASA Technical Reports Server (NTRS)

    Joseph, P. F.

    1983-01-01

    The plane strain problem of two dissimilar orthotropic plates bonded with an isotropic, linearly viscoelastic adhesive is considered. Both the shear and the normal stresses in the adhesive are calculated for various geometries and loading conditions. Transverse shear deformations of the adherends are taken into account, and their effect on the solution is shown in the results. All three in-plane strains of the adhesive are included. Attention is given to the effect of temperature, both in the adhesive joint problem and in the heat generation in a viscoelastic material under cyclic loading. This separate study is included because heat generation and/or spatially varying temperature are at present too difficult to account for in the analytical solution of the bonded joint, but their effect cannot be ignored in design.

  14. Surface charge method for molecular surfaces with curved areal elements I. Spherical triangles

    NASA Astrophysics Data System (ADS)

    Yu, Yi-Kuo

    2018-03-01

    Parametrizing a curved surface with flat triangles in electrostatics problems creates a diverging electric field. One way to avoid this is to use curved areal elements. However, charge density integration over curved patches appears difficult. This paper, dealing with spherical triangles, is the first in a series aiming to solve this problem. Here, we lay the groundwork for employing curved patches in applying the surface charge method to electrostatics. We show analytically how one may control the accuracy by expanding in powers of the arc length (multiplied by the curvature). To accommodate curved areal elements that are not extremely small, we provide enough detail to include the higher order corrections needed for better accuracy when slightly larger surface elements are used.

  15. Direct trace-elemental analysis of urine samples by laser ablation-inductively coupled plasma mass spectrometry after sample deposition on clinical filter papers.

    PubMed

    Aramendía, Maite; Rello, Luis; Vanhaecke, Frank; Resano, Martín

    2012-10-16

    Collection of biological fluids on clinical filter papers offers important advantages from a logistic point of view, although the analysis of these specimens is far from straightforward. Concerning urine analysis, and particularly when direct trace-elemental analysis by laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) is the aim, several problems arise, such as lack of sensitivity or non-uniform distribution of the analytes on the filter paper, making it quite difficult to obtain reliable quantitative results. In this paper, a novel approach for urine collection is proposed which circumvents many of these problems. The methodology consists of the use of precut filter paper discs on which large amounts of sample can be retained in a single deposition. This provides higher amounts of the target analytes and, thus, sufficient sensitivity, and allows the addition of an adequate internal standard at the clinical lab prior to analysis, making it suitable for a strategy based on unsupervised sample collection and later analysis at referral centers. On the basis of this sampling methodology, an analytical method was developed for the direct determination of several elements in urine (Be, Bi, Cd, Co, Cu, Ni, Sb, Sn, Tl, Pb, and V) at the low μg L(-1) level by means of LA-ICPMS. The method developed provides good results in terms of accuracy and LODs (≤1 μg L(-1) for most of the analytes tested), with a precision in the range of 15%, fit for purpose for clinical control analysis.

  16. Mapping Systemic Risk: Critical Degree and Failures Distribution in Financial Networks.

    PubMed

    Smerlak, Matteo; Stoll, Brady; Gupta, Agam; Magdanz, James S

    2015-01-01

    The financial crisis illustrated the need for a functional understanding of systemic risk in strongly interconnected financial structures. Dynamic processes on complex networks being intrinsically difficult to model analytically, most recent studies of this problem have relied on numerical simulations. Here we report analytical results in a network model of interbank lending based on directly relevant financial parameters, such as interest rates and leverage ratios. We obtain a closed-form formula for the "critical degree" (the number of creditors per bank below which an individual shock can propagate throughout the network), and relate failures distributions to network topologies, in particular scalefree ones. Our criterion for the onset of contagion turns out to be isomorphic to the condition for cooperation to evolve on graphs and social networks, as recently formulated in evolutionary game theory. This remarkable connection supports recent calls for a methodological rapprochement between finance and ecology.

  18. The influence of retrieval practice on metacognition: The contribution of analytic and non-analytic processes.

    PubMed

    Miller, Tyler M; Geraci, Lisa

    2016-05-01

    People may change their memory predictions after retrieval practice using naïve theories of memory and/or subjective experience (analytic and non-analytic processes, respectively). The current studies disentangled the contributions of each process. In one condition, learners studied paired associates, made a memory prediction, completed a short run of retrieval practice, and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice, whereas judges increased their predictions. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggest that non-analytic processes play a key role in participants reducing their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Advances in spectroscopic methods for quantifying soil carbon

    USGS Publications Warehouse

    Reeves, James B.; McCarty, Gregory W.; Calderon, Francisco; Hively, W. Dean

    2012-01-01

    The current gold standard for soil carbon (C) determination is elemental C analysis using dry combustion. However, this method requires expensive consumables, is limited by the number of samples that can be processed (~100/d), and is restricted to the determination of total carbon. With increased interest in soil C sequestration, faster methods of analysis are needed, and there is growing interest in methods based on diffuse reflectance spectroscopy in the visible, near-infrared or mid-infrared spectral ranges. These spectral methods can decrease analytical requirements and speed sample processing, be applied to large landscape areas using remote sensing imagery, and be used to predict multiple analytes simultaneously. However, the methods require localized calibrations to establish the relationship between spectral data and reference analytical data, and also have additional, specific problems. For example, remote sensing is capable of scanning entire watersheds for soil carbon content but is limited to the surface layer of tilled soils and may require difficult and extensive field sampling to obtain proper localized calibration reference values. The objective of this chapter is to discuss the present state of spectroscopic methods for determination of soil carbon.

  20. Strength conditions for the elastic structures with a stress error

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2017-10-01

Constraints (strength conditions) on the safety factor are established for elastic structures and design details of a particular class, e.g. aviation structures: the safety factor of such a structure must lie within a given range. These constraints are set for safety factors corresponding to analytical (exact) solutions of the elasticity problems formulated for the structures. For most structures, especially those of irregular shape, exact analytical solutions are very difficult to obtain. For a great number of structures, approximate approaches are therefore widely used, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells. Because technical theories rest on simplifying hypotheses, they yield approximate (technical) solutions with an irreducible error whose exact value is difficult to determine. In static strength calculations with a narrow specified range for the safety factor, applying such technical (strength-of-materials) solutions is problematic. However, numerical methods exist that produce approximate solutions of elasticity problems with arbitrarily small error. In the present paper, adjusted reference (specified) strength conditions are proposed for the structural safety factor corresponding to an approximate solution of the elasticity problem; the proposed conditions take the stress error estimate into account. It is shown that, in order to satisfy the specified strength conditions for the safety factor corresponding to the exact solution, adjusted strength conditions must be imposed on the safety factor corresponding to the approximate solution. The stress error estimate on which the adjusted conditions are based is determined from the specified strength conditions.
Adjusted strength conditions expressed in terms of allowable stresses are also suggested. They make it possible to determine the set of approximate solutions that satisfy the specified strength conditions. Examples are given of specified strength conditions satisfied using technical (strength-of-materials) solutions, and of strength conditions satisfied using approximate solutions with small error.
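As an illustration of the kind of adjustment the abstract describes (the derivation below is ours, not the paper's): if the approximate stress carries a known relative error bound delta, the specified safety-factor range can be tightened so that satisfying the adjusted range with the approximate solution guarantees the exact safety factor meets the original one. A minimal sketch:

```python
def adjusted_bounds(n1, n2, delta):
    """Adjusted safety-factor bounds for an approximate solution.

    Illustrative derivation (not the paper's): if the exact safety factor
    n must lie in [n1, n2], and the approximate stress sigma_h satisfies
    |sigma - sigma_h| <= delta * sigma (relative error bound delta < 1),
    then sigma lies in [sigma_h/(1+delta), sigma_h/(1-delta)], so
    n = sigma_allow/sigma lies in [n_h*(1-delta), n_h*(1+delta)] with
    n_h = sigma_allow/sigma_h.  Requiring n_h to lie in the narrower
    interval returned here guarantees n in [n1, n2].
    """
    if not 0 <= delta < 1:
        raise ValueError("delta must be in [0, 1)")
    lo = n1 / (1.0 - delta)
    hi = n2 / (1.0 + delta)
    if lo > hi:
        raise ValueError("error bound too large for the specified range")
    return lo, hi
```

For example, with a specified range [1.5, 2.0] and a 5% stress error bound, the adjusted range narrows to roughly [1.579, 1.905].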

  1. REDUCING AMBIGUITY IN THE FUNCTIONAL ASSESSMENT OF PROBLEM BEHAVIOR

    PubMed Central

    Rooker, Griffin W.; DeLeon, Iser G.; Borrero, Carrie S. W.; Frank-Crawford, Michelle A.; Roscoe, Eileen M.

    2015-01-01

    Severe problem behavior (e.g., self-injury and aggression) remains among the most serious challenges for the habilitation of persons with intellectual disabilities and is a significant obstacle to community integration. The current standard of behavior analytic treatment for problem behavior in this population consists of a functional assessment and treatment model. Within that model, the first step is to assess the behavior–environment relations that give rise to and maintain problem behavior, a functional behavioral assessment. Conventional methods of assessing behavioral function include indirect, descriptive, and experimental assessments of problem behavior. Clinical investigators have produced a rich literature demonstrating the relative effectiveness for each method, but in clinical practice, each can produce ambiguous or difficult-to-interpret outcomes that may impede treatment development. This paper outlines potential sources of variability in assessment outcomes and then reviews the evidence on strategies for avoiding ambiguous outcomes and/or clarifying initially ambiguous results. The end result for each assessment method is a set of best practice guidelines, given the available evidence, for conducting the initial assessment. PMID:26236145

  2. Recovery of time-dependent volatility in option pricing model

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Hon, Y. C.; Isakov, V.

    2016-11-01

In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the nonlinearity and sparsity of observations, an analytical solution to the problem is generally not available. A numerical approximation is also difficult to obtain using most existing numerical algorithms. Based on our recent theoretical results, we apply the linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely the integral equation method and Landweber iterations, are adopted to obtain a stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region (Project No. CityU 101112), grants from the NNSF of China (Nos. 11261029, 11461039), NSF grants DMS 10-08902 and 15-14886, and by the Emylou Keith and Betty Dutcher Distinguished Professorship at the Wichita State University (USA).
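The Landweber iteration mentioned above is a standard regularizing scheme. A minimal sketch for a linear model A x = b follows; the abstract's problem is nonlinear and is first linearised, so this shows only the underlying building block, with a toy system of our choosing:

```python
def landweber(A, b, omega, iters):
    """Landweber iteration  x_{k+1} = x_k + omega * A^T (b - A x_k)
    for a small dense system, written with plain lists.  Converges for
    0 < omega < 2 / sigma_max(A)^2."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step x += omega * A^T r
        for j in range(n):
            x[j] += omega * sum(A[i][j] * r[i] for i in range(m))
    return x
```

For A = [[2, 0], [0, 1]] and b = [2, 1], the iterates approach the solution [1, 1]; in ill-posed settings the iteration count itself acts as the regularization parameter.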

  3. Fast alternative Monte Carlo formalism for a class of problems in biophotonics

    NASA Astrophysics Data System (ADS)

    Miller, Steven D.

    1997-12-01

A practical and effective alternative Monte Carlo formalism is presented that rapidly finds flux solutions to the radiative transport equation for a class of problems in biophotonics; namely, wide-beam irradiance of finite, optically anisotropic, homogeneous or heterogeneous biological media that both strongly scatter and absorb light. Such media include liver, tumors, blood, and highly blood-perfused tissues. As the Fermat rays comprising a wide coherent (laser) beam enter the tissue, scattering causes them to evolve into a bundle of random optical paths or trajectories. Physically, this can be interpreted as a bundle of Markov trajectories traced out by a 'gas' of Brownian-like point photons undergoing successive scattering and absorption. By considering the cumulative flow of a statistical bundle of trajectories through interior data planes, the effective equivalent of the (generally unknown) analytical flux solutions of the transfer equation rapidly emerges. Unlike standard Monte Carlo techniques, which evaluate scalar fluence, this technique is faster, more efficient, and simpler to apply for this specific class of optical situations, where other analytical or numerical techniques can become unwieldy, lack viability, or are simply more difficult to apply. Illustrative flux calculations are presented for liver, blood, and tissue-tumor-tissue systems.
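The photon-random-walk picture the abstract invokes can be sketched in a few lines. This toy slab model (exponential free paths, isotropic scattering, a single albedo, scalar direction cosine) is a generic illustration of the standard Monte Carlo picture, not the author's alternative formalism:

```python
import math
import random

def slab_random_walk(mu_s, mu_a, thickness, n_photons, seed=0):
    """Crude photon random walk through a homogeneous slab.
    mu_s, mu_a: scattering and absorption coefficients (1/length).
    Returns (reflected, absorbed, transmitted) photon counts."""
    rng = random.Random(seed)
    mu_t = mu_s + mu_a            # total interaction coefficient
    albedo = mu_s / mu_t          # probability an interaction scatters
    reflected = absorbed = transmitted = 0
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0       # start at the surface, heading inward
        while True:
            # exponential free path; 1 - u keeps the argument in (0, 1]
            z += cos_t * (-math.log(1.0 - rng.random()) / mu_t)
            if z < 0.0:
                reflected += 1
                break
            if z > thickness:
                transmitted += 1
                break
            if rng.random() > albedo:
                absorbed += 1
                break
            cos_t = 2.0 * rng.random() - 1.0   # isotropic rescatter
    return reflected, absorbed, transmitted
```

With strong scattering (e.g. mu_s = 10, mu_a = 1 per unit length and a slab of optical depth ~2), the counts split among all three channels; flux estimators of the kind the paper describes would instead tally trajectory crossings of interior data planes.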

  4. Solving the three-body Coulomb breakup problem using exterior complex scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCurdy, C.W.; Baertschy, M.; Rescigno, T.N.

    2004-05-17

Electron-impact ionization of the hydrogen atom is the prototypical three-body Coulomb breakup problem in quantum mechanics. The combination of subtle correlation effects and the difficult boundary conditions required to describe two electrons in the continuum have made this one of the outstanding challenges of atomic physics. A complete solution of this problem in the form of a "reduction to computation" of all aspects of the physics is given by the application of exterior complex scaling, a modern variant of the mathematical tool of analytic continuation of the electronic coordinates into the complex plane that was used historically to establish the formal analytic properties of the scattering matrix. This review first discusses the essential difficulties of the three-body Coulomb breakup problem in quantum mechanics. It then describes the formal basis of exterior complex scaling of electronic coordinates as well as the details of its numerical implementation using a variety of methods including finite difference, finite elements, discrete variable representations, and B-splines. Given these numerical implementations of exterior complex scaling, the scattering wave function can be generated with arbitrary accuracy on any finite volume in the space of electronic coordinates, but there remains the fundamental problem of extracting the breakup amplitudes from it. Methods are described for evaluating these amplitudes. The question of the volume-dependent overall phase that appears in the formal theory of ionization is resolved. A summary is presented of accurate results that have been obtained for the case of electron-impact ionization of hydrogen as well as a discussion of applications to the double photoionization of helium.
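The essential idea of exterior complex scaling can be stated compactly; the following is a textbook-level summary, not material quoted from the review:

```latex
% Exterior complex scaling of a radial coordinate:
% identity inside r <= R_0, rotation by angle eta outside.
r \;\longmapsto\;
\begin{cases}
r, & r \le R_0,\\[4pt]
R_0 + (r - R_0)\,e^{i\eta}, & r > R_0.
\end{cases}
% Outgoing waves then decay exponentially beyond R_0:
e^{ikr} \;\longrightarrow\;
e^{ikR_0}\, e^{ik(r-R_0)\cos\eta}\, e^{-k(r-R_0)\sin\eta},
% which converts the intractable breakup boundary condition
% into a requirement of square integrability on a finite volume.
```

This is what makes the "reduction to computation" possible: the scattered wave can be computed on a finite grid without ever imposing the asymptotic three-body boundary condition explicitly.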

  5. Signals: Applying Academic Analytics

    ERIC Educational Resources Information Center

    Arnold, Kimberly E.

    2010-01-01

    Academic analytics helps address the public's desire for institutional accountability with regard to student success, given the widespread concern over the cost of higher education and the difficult economic and budgetary conditions prevailing worldwide. Purdue University's Signals project applies the principles of analytics widely used in…

  6. Building the analytical response in frequency domain of AC biased bolometers. Application to Planck/HFI

    NASA Astrophysics Data System (ADS)

    Sauvé, Alexandre; Montier, Ludovic

    2016-12-01

Context: Bolometers are high-sensitivity detectors commonly used in infrared astronomy. The HFI instrument of the Planck satellite makes extensive use of them, but after launch two electronics-related problems proved critical: first, an unexpected excess response of the detectors at low optical excitation frequencies (ν < 1 Hz), and second, insufficient on-ground characterization of the analog-to-digital converter (ADC). Both problems require exquisite knowledge of the detector response. However, bolometers have highly nonlinear characteristics arising from their electrical and thermal coupling, which makes them very difficult to model. Goal: We present a method to build the analytical transfer function in the frequency domain that describes the voltage response of an alternating-current (AC) biased bolometer to optical excitation, based on the standard bolometer model. The model is built for the setup of the Planck/HFI instrument and offers the major improvement of being based on a physical model rather than the currently used ad hoc model based on direct-current (DC) bolometer theory. Method: The analytical transfer function is presented in matrix form. For this purpose, we build linearized versions of the bolometer's electrothermal equilibrium. A custom frequency-domain description of the signals is used so that the problem can be solved with linear algebra. The model's performance is validated against time-domain simulations. Results: The resulting expression is suitable for calibration and data processing. It can also be used to constrain fits of the optical transfer function using real data from the steady-state electronic response and the optical response. The accurate description of the electronic response can further be used to improve the ADC nonlinearity correction for quickly varying optical signals.

  7. Generators of dynamical symmetries and the correct gauge transformation in the Landau level problem: use of pseudomomentum and pseudo-angular momentum

    NASA Astrophysics Data System (ADS)

    Konstantinou, Georgios; Moulopoulos, Konstantinos

    2016-11-01

    Due to the importance of gauge symmetry in all fields of physics, and motivated by an article written almost three decades ago that warns against a naive handling of gauge transformations in the Landau level problem (a quantum electron moving in a spatially uniform magnetic field), we point out a proper use of the generators of dynamical symmetries combined with gauge transformation methods to easily obtain exact analytical solutions for all Landau level-wavefunctions in arbitrary gauge. Our method is different from the old argument and provides solutions in an easier manner and in a broader set of geometries and gauges; in so doing, it eliminates the need for extra procedures (i.e. a change of basis) pointed out as a necessary step in the old literature, and gives back the standard simple result, provided that an appropriate use is made of the dynamical symmetries of the system and their generators. In this way the present work will at least be useful for university-level education, i.e. in advanced classes in quantum mechanics and condensed matter physics. In addition, it clarifies the actual role of the gauge in the Landau level problem, which often appears confusing in the usual derivations provided in textbooks. Finally, we go further by showing that a similar methodology can be made to apply to the more difficult case of a spatially non-uniform magnetic field (where closed analytical results are rare), in which case the various generators (pseudomomentum and pseudo-angular momentum) appear as line integrals of the inhomogeneous magnetic field; we give closed analytical solutions for all cases, and show how the old and rather forgotten Bawin-Burnel gauge shows up naturally as a ‘reference gauge’ in all solutions.
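For reference, the gauge-transformation rule underlying the discussion can be stated explicitly (standard quantum mechanics, not a result specific to this article):

```latex
% If the vector potential is changed by a gauge function Lambda,
%   \mathbf{A}' = \mathbf{A} + \nabla\Lambda,
% the wavefunctions in the two gauges are related by a local phase:
\psi'(\mathbf{r}) \;=\; e^{\,i q \Lambda(\mathbf{r})/\hbar}\,\psi(\mathbf{r}).
% Example: passing from the Landau gauge \mathbf{A} = (0, Bx, 0)
% to the symmetric gauge \mathbf{A}' = \tfrac{B}{2}(-y, x, 0)
% corresponds to the gauge function
\Lambda \;=\; -\tfrac{B}{2}\,x y .
% (Check: \nabla(-\tfrac{B}{2}xy) = (-\tfrac{B}{2}y,\,-\tfrac{B}{2}x,\,0),
% which is exactly \mathbf{A}' - \mathbf{A}.)
```

The article's point is that this phase factor alone, naively applied to a single Landau-gauge eigenfunction, does not produce the expected symmetric-gauge eigenfunctions unless the dynamical symmetry generators are handled properly.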

  8. Recursive linearization of multibody dynamics equations of motion

    NASA Technical Reports Server (NTRS)

    Lin, Tsung-Chieh; Yae, K. Harold

    1989-01-01

The equations of motion of a multibody system are nonlinear in nature, and thus pose a difficult problem in linear control design. One approach is to have a first-order approximation through numerical perturbations at a given configuration, and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the footsteps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then, they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate, because analytical perturbation circumvents numerical differentiation and other associated numerical operations that may accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison to a numerical perturbation method, with a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
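The accuracy comparison between analytical and numerical perturbation can be illustrated on a toy system; the frictionless pendulum below is our example, not the manipulator models used in the paper:

```python
import math

def pendulum_rhs(theta, omega, g_over_l=9.81):
    """State derivative of a frictionless pendulum, x = (theta, omega):
    theta' = omega, omega' = -(g/L) sin(theta)."""
    return (omega, -g_over_l * math.sin(theta))

def analytic_jacobian(theta, omega, g_over_l=9.81):
    """Exact linearization of the right-hand side at (theta, omega)."""
    return [[0.0, 1.0],
            [-g_over_l * math.cos(theta), 0.0]]

def numerical_jacobian(theta, omega, g_over_l=9.81, h=1e-6):
    """Central finite-difference linearization, for comparison."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    x = (theta, omega)
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp = pendulum_rhs(xp[0], xp[1], g_over_l)
        fm = pendulum_rhs(xm[0], xm[1], g_over_l)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J
```

At a generic operating point the two Jacobians agree to roughly the finite-difference truncation error; the analytic version carries no such error, which is the advantage the abstract claims at the scale of full multibody dynamics.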

  9. General statistical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberhardt, L L; Gilbert, R O

From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies along with high analytical costs makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique.

  10. Measurement and Visualization of Mass Transport for the Flowing Atmospheric Pressure Afterglow (FAPA) Ambient Mass-Spectrometry Source

    PubMed Central

    Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.

    2014-01-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last nine years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification due to the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass-spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet. PMID:24658804

  11. Measurement and visualization of mass transport for the flowing atmospheric pressure afterglow (FAPA) ambient mass-spectrometry source.

    PubMed

    Pfeuffer, Kevin P; Ray, Steven J; Hieftje, Gary M

    2014-05-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.

  12. Measurement and Visualization of Mass Transport for the Flowing Atmospheric Pressure Afterglow (FAPA) Ambient Mass-Spectrometry Source

    NASA Astrophysics Data System (ADS)

    Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.

    2014-05-01

    Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.

  13. Modelling vortex-induced fluid-structure interaction.

    PubMed

    Benaroya, Haym; Gabbai, Rene D

    2008-04-13

The principal goal of this research is developing physics-based, reduced-order, analytical models of nonlinear fluid-structure interactions associated with offshore structures. Our primary focus is to generalize Hamilton's variational framework so that systems of flow-oscillator equations can be derived from first principles. This is an extension of earlier work that led to a single energy equation describing the fluid-structure interaction. It is demonstrated here that flow-oscillator models are a subclass of the general, physics-based framework. A flow-oscillator model is a reduced-order mechanical model, generally comprising two mechanical oscillators: one modelling the structural oscillation and the other a nonlinear oscillator representing the fluid behaviour coupled to the structural motion. Reduced-order analytical model development continues to be carried out using a variational approach based on Hamilton's principle. This provides flexibility in the long run for generalizing the modelling paradigm to complex, three-dimensional problems with multiple degrees of freedom, although such extension is very difficult. As both experimental and analytical capabilities advance, the critical research path to developing and implementing fluid-structure interaction models entails formulating generalized equations of motion, as a superset of the flow-oscillator models, and developing experimentally derived, semi-analytical functions to describe key terms in the governing equations of motion. The developed variational approach yields a system of governing equations that allows modelling of multiple degree-of-freedom systems. The extensions derived generalize Hamilton's variational formulation for such problems. The Navier-Stokes equations are derived and coupled to the structural oscillator. This general model has been shown to be a superset of the flow-oscillator model; based on different assumptions, one can derive a variety of flow-oscillator models.
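A generic two-oscillator (wake-oscillator) model of the kind described can be sketched as follows; the equations, coefficients, and integrator here are illustrative choices of ours, not those of the paper:

```python
def simulate_flow_oscillator(eps=0.3, A=0.1, M=0.05, zeta=0.05,
                             dt=1e-3, steps=200_000):
    """Minimal wake-oscillator sketch in nondimensional time:
    a van der Pol wake variable q forcing a damped linear
    structural oscillator y, with feedback through y'':
        q'' + eps*(q**2 - 1)*q' + q = A * y''
        y'' + 2*zeta*y'        + y = M * q
    Integrated with semi-implicit Euler; returns the peak |y|."""
    q, dq, y, dy = 0.1, 0.0, 0.0, 0.0
    peak = 0.0
    for _ in range(steps):
        ddy = M * q - 2.0 * zeta * dy - y          # structure acceleration
        ddq = A * ddy - eps * (q * q - 1.0) * dq - q  # wake acceleration
        dq += dt * ddq
        q += dt * dq
        dy += dt * ddy
        y += dt * dy
        peak = max(peak, abs(y))
    return peak
```

With the (assumed) coefficients above, the wake variable settles onto its van der Pol limit cycle and drives the structure to a bounded, self-sustained oscillation, which is the qualitative behaviour such reduced-order models are built to capture.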

  14. Teaching Cell Biology in the Large-Enrollment Classroom: Methods to Promote Analytical Thinking and Assessment of Their Effectiveness

    PubMed Central

    Kitchen, Elizabeth; Bell, John D.; Reeve, Suzanne; Sudweeks, Richard R.; Bradshaw, William S.

    2003-01-01

    A large-enrollment, undergraduate cellular biology lecture course is described whose primary goal is to help students acquire skill in the interpretation of experimental data. The premise is that this kind of analytical reasoning is not intuitive for most people and, in the absence of hands-on laboratory experience, will not readily develop unless instructional methods and examinations specifically designed to foster it are employed. Promoting scientific thinking forces changes in the roles of both teacher and student. We describe didactic strategies that include directed practice of data analysis in a workshop format, active learning through verbal and written communication, visualization of abstractions diagrammatically, and the use of ancillary small-group mentoring sessions with faculty. The implications for a teacher in reducing the breadth and depth of coverage, becoming coach instead of lecturer, and helping students to diagnose cognitive weaknesses are discussed. In order to determine the efficacy of these strategies, we have carefully monitored student performance and have demonstrated a large gain in a pre- and posttest comparison of scores on identical problems, improved test scores on several successive midterm examinations when the statistical analysis accounts for the relative difficulty of the problems, and higher scores in comparison to students in a control course whose objective was information transfer, not acquisition of reasoning skills. A novel analytical index (student mobility profile) is described that demonstrates that this improvement was not random, but a systematic outcome of the teaching/learning strategies employed. An assessment of attitudes showed that, in spite of finding it difficult, students endorse this approach to learning, but also favor curricular changes that would introduce an analytical emphasis earlier in their training. PMID:14506506

  15. Root finding in the complex plane for seismo-acoustic propagation scenarios with Green's function solutions.

    PubMed

    McCollom, Brittany A; Collis, Jon M

    2014-09-01

    A normal mode solution to the ocean acoustic problem of the Pekeris waveguide with an elastic bottom using a Green's function formulation for a compressional wave point source is considered. Analytic solutions to these types of waveguide propagation problems are strongly dependent on the eigenvalues of the problem; these eigenvalues represent horizontal wavenumbers, corresponding to propagating modes of energy. The eigenvalues arise as singularities in the inverse Hankel transform integral and are specified by roots to a characteristic equation. These roots manifest themselves as poles in the inverse transform integral and can be both subtle and difficult to determine. Following methods previously developed [S. Ivansson et al., J. Sound Vib. 161 (1993)], a root finding routine has been implemented using the argument principle. Using the roots to the characteristic equation in the Green's function formulation, full-field solutions are calculated for scenarios where an acoustic source lies in either the water column or elastic half space. Solutions are benchmarked against laboratory data and existing numerical solutions.
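The argument-principle root counting the abstract relies on can be illustrated with a simple winding-number computation. This sketch counts the zeros of an analytic function inside a circular contour and is a generic illustration, not the routine of Ivansson et al.:

```python
import cmath
import math

def count_zeros(f, radius=2.0, n=20000):
    """Count zeros of an analytic function f (no poles, nonzero on the
    contour) inside |z| = radius, via the argument principle: the number
    of zeros equals the winding number of f around the contour."""
    total = 0.0
    prev = cmath.phase(f(complex(radius, 0.0)))
    for k in range(1, n + 1):
        z = radius * cmath.exp(2j * math.pi * k / n)
        cur = cmath.phase(f(z))
        d = cur - prev
        # unwrap the phase jump into (-pi, pi]
        while d <= -math.pi:
            d += 2.0 * math.pi
        while d > math.pi:
            d -= 2.0 * math.pi
        total += d
        prev = cur
    return round(total / (2.0 * math.pi))
```

For example, z² + 1 has both of its zeros (±i) inside |z| = 2, while z − 5 has none; a practical wavenumber search would apply the same idea to the characteristic equation on subdivided contours, then refine each root.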

  16. Perspectives on the geographic stability and mobility of people in cities

    PubMed Central

    Hanson, Susan

    2005-01-01

    A class of questions in the human environment sciences focuses on the relationship between individual or household behavior and local geographic context. Central to these questions is the nature of people's geographic mobility as well as the duration of their locational stability at varying spatial and temporal scales. The problem for researchers is that the processes of mobility/stability are temporally and spatially dynamic and therefore difficult to measure. Whereas time and space are continuous, analysts must select levels of aggregation for both length of time in place and spatial scale of place that fit with the problem in question. Previous work has emphasized mobility and suppressed stability as an analytic category. I focus here on stability and show how analyzing individuals' stability requires also analyzing their mobility. Through an empirical example centered on the relationship between entrepreneurship and place, I demonstrate how a spotlight on stability illuminates a resolution to the measurement problem by highlighting the interdependence between the time and space dimensions of stability/mobility. PMID:16230616

  17. Probabilistic data integration and computational complexity

    NASA Astrophysics Data System (ADS)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the `forward' problem). This problem can be formulated more generally as a problem of `integration of information'. A probabilistic formulation of data integration is in principle simple: if all available information (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist for solving the data integration problem, either through an analytical description of the combined probability function or by sampling the probability function. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. Because one type of information is too informative (and hence conflicting), this leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem with no biases. In another case, it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases.
Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and can lead to biased results and under-estimation of uncertainty. However, in both examples, one can also analyze the performance of the sampling methods used to solve the data integration problem to detect the existence of biased information. This can be used actively to avoid biases in the available information and, subsequently, in the final uncertainty evaluation.
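The "analytical description of the combined probability function" mentioned above can be illustrated with a one-dimensional grid example; the Gaussian information sources here are our illustration, not the Danish channel data:

```python
import math

def combine_on_grid(densities, lo=-5.0, hi=5.0, n=2001):
    """Grid-based combination of independent probabilistic information:
    the combined density is the normalized pointwise product of the
    individual (unnormalized) densities."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    w = []
    for x in xs:
        p = 1.0
        for d in densities:
            p *= d(x)
        w.append(p)
    Z = sum(w)                       # normalization over the grid
    return xs, [v / Z for v in w]

def gaussian(mu, sigma):
    """Unnormalized Gaussian density (normalization cancels anyway)."""
    return lambda x: math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Combining information N(0, 1) with information N(2, 1):
# the product is proportional to a Gaussian centered at 1.
xs, post = combine_on_grid([gaussian(0.0, 1.0), gaussian(2.0, 1.0)])
mean = sum(x * p for x, p in zip(xs, post))
```

When one source is over-confident (too small a sigma relative to its real accuracy), the product concentrates around it and the combined uncertainty becomes unrealistically small, which is exactly the conflict the abstract describes.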

  18. Validation of a finite element method framework for cardiac mechanics applications

    NASA Astrophysics Data System (ADS)

    Danan, David; Le Rolle, Virginie; Hubert, Arnaud; Galli, Elena; Bernard, Anne; Donal, Erwan; Hernández, Alfredo I.

    2017-11-01

Modeling cardiac mechanics is a particularly challenging task, mainly because of the poor understanding of the underlying physiology, the lack of observability, and the complexity of the mechanical properties of myocardial tissues. The choice of a cardiac mechanics solver, in particular, raises several difficulties, notably the potential instability arising from the nonlinearities inherent to the large-deformation framework. Furthermore, verifying the resulting simulations is difficult because no analytic solutions exist for these kinds of problems. Hence, the objective of this work is to provide a quantitative verification of a cardiac mechanics implementation based on two published benchmark problems. The first problem consists of deforming a bar, whereas the second concerns the inflation of a truncated ellipsoid-shaped ventricle, both in the steady-state case. Simulations were obtained using the finite element software GETFEM++. Results were compared to the consensus solution published by 11 groups, and the proposed solutions were indistinguishable from it. The validation of the proposed mechanical model implementation is an important step toward the proposition of a global model of cardiac electro-mechanical activity.

  19. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which has to date proven analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.

  20. Problem-based learning on quantitative analytical chemistry course

    NASA Astrophysics Data System (ADS)

    Fitri, Noor

    2017-12-01

This research applies a problem-based learning method to a quantitative analytical chemistry course ("Analytical Chemistry II"), with particular emphasis on essential oil analysis. The learning outcomes of the course include understanding of the lecture material, skill in applying it, and the ability to identify, formulate and solve chemical analysis problems. Study groups play an important role in improving students' learning ability and in completing both independent and group tasks. Students thus not only become aware of the basic concepts of Analytical Chemistry II, but are also able to understand and apply the analytical concepts they have studied to solve given analytical chemistry problems, and develop the attitude and ability to work together on those problems. Based on the learning outcomes, it can be concluded that the problem-based learning method in the Analytical Chemistry II course improves students' knowledge, skills, abilities and attitudes. Students become skilled not only at solving problems in analytical chemistry, especially essential oil analysis in accordance with the local genius of the Chemistry Department, Universitas Islam Indonesia, but also at working with computer programs and at understanding material and problems in English.

  1. Modeling of the Global Water Cycle - Analytical Models

    Treesearch

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical...

  2. Fractals and Spatial Methods for Mining Remote Sensing Imagery

    NASA Technical Reports Server (NTRS)

    Lam, Nina; Emerson, Charles; Quattrochi, Dale

    2003-01-01

The rapid increase in digital remote sensing and GIS data raises a critical problem -- how can such an enormous amount of data be handled and analyzed so that useful information can be derived quickly? Efficient handling and analysis of large spatial data sets is central to environmental research, particularly in global change studies that employ time series. Advances in large-scale environmental monitoring and modeling require not only high-quality data, but also reliable tools to analyze the various types of data. A major difficulty facing geographers and environmental scientists in environmental assessment and monitoring is that spatial analytical tools are not easily accessible. Although many spatial techniques have been described recently in the literature, they are typically presented in an analytical form and are difficult to transform to a numerical algorithm. Moreover, these spatial techniques are not necessarily designed for remote sensing and GIS applications, and research must be conducted to examine their applicability and effectiveness in different types of environmental applications. This poses a chicken-and-egg problem: on one hand we need more research to examine the usability of the newer techniques and tools, yet on the other hand, this type of research is difficult to conduct if the tools to be explored are not accessible. Another problem fundamental to environmental research is the set of issues related to spatial scale. The scale issue is especially acute in the context of global change studies because of the need to integrate remote-sensing and other spatial data that are collected at different scales and resolutions. Extrapolation of results across broad spatial scales remains the most difficult problem in global environmental research.
There is a need for basic characterization of the effects of scale on image data, and the techniques used to measure these effects must be developed and implemented to allow for a multiple scale assessment of the data before any useful process-oriented modeling involving scale-dependent data can be conducted. Through the support of research grants from NASA, we have developed a software module called ICAMS (Image Characterization And Modeling System) to address the need to develop innovative spatial techniques and make them available to the broader scientific communities. ICAMS provides new spatial techniques, such as fractal analysis, geostatistical functions, and multiscale analysis that are not easily available in commercial GIS/image processing software. By bundling newer spatial methods in a user-friendly software module, researchers can begin to test and experiment with the new spatial analysis methods and they can gauge scale effects using a variety of remote sensing imagery. In the following, we describe briefly the development of ICAMS and present application examples.

  3. Using soft systems methodology to develop a simulation of out-patient services.

    PubMed

    Lehaney, B; Paul, R J

    1994-10-01

Discrete event simulation is an approach to modelling a system in the form of a set of mathematical equations and logical relationships, usually applied to complex problems that are difficult to address with analytical or numerical methods. Managing out-patient services is such a problem. However, simulation is not in itself a systemic approach, in that it provides no methodology by which system boundaries and system activities may be identified. The investigation considers the use of soft systems methodology as an aid to drawing system boundaries and identifying system activities, for the purpose of simulating the out-patients' department at a local hospital. The long term aims are to examine the effects that the participative nature of soft systems methodology has on the acceptability of the simulation model, and to provide analysts and managers with a process that may assist in planning strategies for health care.
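The kind of discrete-event model described here can be sketched in a few lines. The following is a minimal single-server out-patient queue with exponential inter-arrival and consultation times; the rates and patient count are illustrative assumptions, not data from the study:

```python
import random

def simulate_clinic(n_patients=1000, arrival_rate=0.2, service_rate=0.25, seed=1):
    """Single-server out-patient queue with exponential inter-arrival and
    consultation times; returns the mean patient waiting time."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)   # mean inter-arrival = 1/arrival_rate
        arrivals.append(t)
    server_free = 0.0
    waits = []
    for a in arrivals:
        start = max(a, server_free)          # wait only if the clinician is busy
        waits.append(start - a)
        server_free = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

print(round(simulate_clinic(), 2))
```

A soft-systems pass over the real department would decide what counts as an "arrival" or a "server" before any such model is coded, which is precisely the boundary-drawing step the abstract argues simulation alone does not provide.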

  4. Numerical approach for unstructured quantum key distribution

    PubMed Central

    Coles, Patrick J.; Metodiev, Eric M.; Lütkenhaus, Norbert

    2016-01-01

    Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study ‘unstructured' protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown. PMID:27198739
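For contrast with the numerical approach, the best-known analytic result of the kind the abstract mentions is the asymptotic Shor-Preskill key rate for symmetric BB84, r = 1 - 2h2(Q). A small sketch of that closed-form rate (not the paper's dual-optimization method):

```python
from math import log2

def h2(p):
    """Binary (Shannon) entropy of p."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bb84_key_rate(q):
    """Asymptotic secret-key rate of symmetric BB84 at QBER q: r = 1 - 2*h2(q)."""
    return 1.0 - 2.0 * h2(q)

print(round(bb84_key_rate(0.05), 3))   # positive below the ~11% QBER threshold
```

Once symmetry is broken by imperfections, no formula this simple is available, which is the gap the paper's numerical approach fills.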

  5. Thermal drilling in planetary ices: an analytic solution with application to planetary protection problems of radioisotope power sources.

    PubMed

    Lorenz, Ralph D

    2012-08-01

    Thermal drilling has been applied to studies of glaciers on Earth and proposed for study of the martian ice caps and the crust of Europa. Additionally, inadvertent thermal drilling by radioisotope sources released from the breakup of a space vehicle is of astrobiological concern in that this process may form a downward-propagating "warm little pond" that could convey terrestrial biota to a habitable environment. A simple analytic solution to the asymptotic slow-speed case of thermal drilling is noted and used to show that the high thermal conductivity of the low-temperature ice on Europa and Titan makes thermal drilling qualitatively more difficult than at Mars. It is shown that an isolated General Purpose Heat Source (GPHS) "brick" can drill effectively on Earth or Mars, whereas on Titan or Europa with ice at 100 K, the source would stall and become stuck in the ice with a surface temperature of <200 K.
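In the slow-speed limit, if conductive losses are neglected entirely, the descent speed reduces to a simple energy balance, v ≈ P / (A ρ (c ΔT + L)). A back-of-envelope sketch with illustrative numbers (the power, area, and ice properties below are assumptions for illustration, not values from the paper):

```python
def descent_speed(power_w, area_m2, rho=917.0, c=2100.0, latent=3.34e5, dT=75.0):
    """Conduction-free melt-drilling speed (m/s): all heat warms ice by dT
    and then melts it. rho [kg/m^3], c [J/(kg K)], latent [J/kg]."""
    return power_w / (area_m2 * rho * (c * dT + latent))

# Illustrative source: 250 W over ~0.01 m^2 of contact area (assumed values)
v = descent_speed(250.0, 0.01)
print(f"{v * 3.6e6:.1f} mm/h")
```

Raising ΔT to ~173 K (100 K ice, as on Titan or Europa) lowers the speed for the same power, and the conductive losses this limit ignores sharpen the contrast further, consistent with the abstract's conclusion that drilling there is qualitatively harder than at Mars.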

  6. Variational Principles for Buckling of Microtubules Modeled as Nonlocal Orthotropic Shells

    PubMed Central

    2014-01-01

A variational principle for microtubules subject to a buckling load is derived by the semi-inverse method. The microtubule is modeled as an orthotropic shell with the constitutive equations based on nonlocal elastic theory and the effect of the filament network taken into account as an elastic surrounding. Microtubules can carry large compressive forces by virtue of the mechanical coupling between the microtubules and the surrounding elastic filament network. The equations governing the buckling of the microtubule are given by a system of three partial differential equations. The problem studied in the present work involves the derivation of the variational formulation for microtubule buckling. The Rayleigh quotient for the buckling load, as well as the natural and geometric boundary conditions of the problem, is obtained from this variational formulation. It is observed that the boundary conditions are coupled as a result of the nonlocal formulation. It is noted that the analytic solution of the buckling problem for microtubules is usually a difficult task. The variational formulation of the problem provides the basis for a number of approximate and numerical methods of solution, and furthermore variational principles can provide physical insight into the problem. PMID:25214886

  7. A probabilistic and multi-objective analysis of lexicase selection and ε-lexicase selection.

    PubMed

    Cava, William La; Helmuth, Thomas; Spector, Lee; Moore, Jason H

    2018-05-10

    Lexicase selection is a parent selection method that considers training cases individually, rather than in aggregate, when performing parent selection. Whereas previous work has demonstrated the ability of lexicase selection to solve difficult problems in program synthesis and symbolic regression, the central goal of this paper is to develop the theoretical underpinnings that explain its performance. To this end, we derive an analytical formula that gives the expected probabilities of selection under lexicase selection, given a population and its behavior. In addition, we expand upon the relation of lexicase selection to many-objective optimization methods to describe the behavior of lexicase selection, which is to select individuals on the boundaries of Pareto fronts in high-dimensional space. We show analytically why lexicase selection performs more poorly for certain sizes of population and training cases, and show why it has been shown to perform more poorly in continuous error spaces. To address this last concern, we propose new variants of ε-lexicase selection, a method that modifies the pass condition in lexicase selection to allow near-elite individuals to pass cases, thereby improving selection performance with continuous errors. We show that ε-lexicase outperforms several diversity-maintenance strategies on a number of real-world and synthetic regression problems.
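The selection procedure analyzed here is short enough to state as code. A minimal sketch of standard (non-ε) lexicase selection, assuming discrete per-case errors; the population and error matrix below are illustrative:

```python
import random

def lexicase_select(population, errors, rng=random.Random(0)):
    """Select one parent by lexicase selection.
    errors[i][j] is individual i's error on training case j (discrete errors).
    Cases are considered one at a time in random order; only individuals
    elite on the current case survive to the next case."""
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)
    pool = list(range(len(population)))
    for c in cases:
        best = min(errors[i][c] for i in pool)
        pool = [i for i in pool if errors[i][c] == best]
        if len(pool) == 1:
            break
    return population[rng.choice(pool)]

pop = ["A", "B", "C"]
errs = [[0, 1], [1, 0], [1, 1]]   # C is elite on no case, so it is never chosen
print(lexicase_select(pop, errs))
```

ε-lexicase, the variant the paper proposes for continuous errors, changes only the filter line: individuals within ε of the elite error pass the case instead of requiring exact equality.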

  8. The detection and correction of outlying determinations that may occur during geochemical analysis

    USGS Publications Warehouse

    Harvey, P.K.

    1974-01-01

    'Wild', 'rogue' or outlying determinations occur periodically during geochemical analysis. Existing tests in the literature for the detection of such determinations within a set of replicate measurements are often misleading. This account describes the chances of detecting outliers and the extent to which correction may be made for their presence in sample sizes of three to seven replicate measurements. A systematic procedure for monitoring data for outliers is outlined. The problem of outliers becomes more important as instrumental methods of analysis become faster and more highly automated; a state in which it becomes increasingly difficult for the analyst to examine every determination. The recommended procedure is easily adapted to such analytical systems. ?? 1974.
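As an illustration of the kind of automated screening the author recommends for fast instrumental systems (though not his specific procedure), a simple robust test flags determinations that sit far from the median of the replicate set:

```python
import statistics

def flag_outliers(replicates, k=3.0):
    """Flag determinations more than k scaled-MADs from the median.
    A generic robust screen, illustrative only; not the paper's procedure."""
    med = statistics.median(replicates)
    mad = statistics.median(abs(x - med) for x in replicates)
    scale = 1.4826 * mad or 1e-12          # guard against zero MAD
    return [x for x in replicates if abs(x - med) / scale > k]

print(flag_outliers([10.1, 10.2, 10.0, 10.2, 14.9]))   # → [14.9]
```

With only three to seven replicates, as the abstract notes, the power of any such test is limited, and a flagged value should prompt re-analysis rather than silent deletion.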

  9. A useful approximation for the flat surface impulse response

    NASA Technical Reports Server (NTRS)

    Brown, Gary S.

    1989-01-01

    The flat surface impulse response (FSIR) is a very useful quantity in computing the mean return power for near-nadir-oriented short-pulse radar altimeters. However, for very small antenna beamwidths and relatively large pointing angles, previous analytical descriptions become very difficult to compute accurately. An asymptotic approximation is developed to overcome these computational problems. Since accuracy is of key importance, a condition is developed under which this solution is within 2 percent of the exact answer. The asymptotic solution is shown to be in functional agreement with a conventional clutter power result and gives a 1.25-dB correction to this formula to account properly for the antenna-pattern variation over the illuminated area.

  10. 3-MCPD in food other than soy sauce or hydrolysed vegetable protein (HVP).

    PubMed

    Baer, Ines; de la Calle, Beatriz; Taylor, Philip

    2010-01-01

    This review gives an overview of current knowledge about 3-monochloropropane-1,2-diol (3-MCPD) formation and detection. Although 3-MCPD is often mentioned with regard to soy sauce and acid-hydrolysed vegetable protein (HVP), and much research has been done in that area, the emphasis here is placed on other foods. This contaminant can be found in a great variety of foodstuffs and is difficult to avoid in our daily nutrition. Despite its low concentration in most foods, its carcinogenic properties are of general concern. Its formation is a multivariate problem influenced by factors such as heat, moisture and sugar/lipid content, depending on the type of food and respective processing employed. Understanding the formation of this contaminant in food is fundamental to not only preventing or reducing it, but also developing efficient analytical methods of detecting it. Considering the differences between 3-MCPD-containing foods, and the need to test for the contaminant at different levels of food processing, one would expect a variety of analytical approaches. In this review, an attempt is made to provide an up-to-date list of available analytical methods and to highlight the differences among these techniques. Finally, the emergence of 3-MCPD esters and analytical techniques for them are also discussed here, although they are not the main focus of this review.

  11. Trace element partitioning between plagioclase and melt: An investigation of the impact of experimental and analytical procedures

    NASA Astrophysics Data System (ADS)

    Nielsen, Roger L.; Ustunisik, Gokce; Weinsteiger, Allison B.; Tepley, Frank J.; Johnston, A. Dana; Kent, Adam J. R.

    2017-09-01

    Quantitative models of petrologic processes require accurate partition coefficients. Our ability to obtain accurate partition coefficients is constrained by their dependence on pressure temperature and composition, and on the experimental and analytical techniques we apply. The source and magnitude of error in experimental studies of trace element partitioning may go unrecognized if one examines only the processed published data. The most important sources of error are relict crystals, and analyses of more than one phase in the analytical volume. Because we have typically published averaged data, identification of compromised data is difficult if not impossible. We addressed this problem by examining unprocessed data from plagioclase/melt partitioning experiments, by comparing models based on that data with existing partitioning models, and evaluated the degree to which the partitioning models are dependent on the calibration data. We found that partitioning models are dependent on the calibration data in ways that result in erroneous model values, and that the error will be systematic and dependent on the value of the partition coefficient. In effect, use of different calibration datasets will result in partitioning models whose results are systematically biased, and that one can arrive at different and conflicting conclusions depending on how a model is calibrated, defeating the purpose of applying the models. Ultimately this is an experimental data problem, which can be solved if we publish individual analyses (not averages) or use a projection method wherein we use an independent compositional constraint to identify and estimate the uncontaminated composition of each phase.

  12. The "Forgotten" Pseudomomenta and Gauge Changes in Generalized Landau Level Problems: Spatially Nonuniform Magnetic and Temporally Varying Electric Fields

    NASA Astrophysics Data System (ADS)

    Konstantinou, Georgios; Moulopoulos, Konstantinos

    2017-05-01

By perceiving gauge invariance as an analytical tool in order to get insight into the states of the "generalized Landau problem" (a charged quantum particle moving inside a magnetic, and possibly electric field), and motivated by an early article that correctly warns against a naive use of gauge transformation procedures in the usual Landau problem (i.e. with the magnetic field being static and uniform), we first show how to bypass the complications pointed out in that article by solving the problem in full generality through gauge transformation techniques in a more appropriate manner. Our solution provides in simple and closed analytical forms all Landau Level-wavefunctions without the need to specify a particular vector potential. This we do by proper handling of the so-called pseudomomentum K (or of a quantity that we term pseudo-angular momentum L_z), a method that is crucially different from the old warning argument, but also from standard treatments in textbooks and in research literature (where the usual Landau-wavefunctions are employed - labeled with canonical momenta quantum numbers). Most importantly, we go further by showing that a similar procedure can be followed in the more difficult case of spatially-nonuniform magnetic fields: in such case we define K and L_z as plausible generalizations of the previous ordinary case, namely as appropriate line integrals of the inhomogeneous magnetic field - our method providing closed analytical expressions for all stationary state wavefunctions in an easy manner and in a broad set of geometries and gauges. It can thus be viewed as complementary to the few existing works on inhomogeneous magnetic fields, that have so far mostly focused on determining the energy eigenvalues rather than the corresponding eigenkets (on which they have claimed that, even in the simplest cases, it is not possible to obtain in closed form the associated wavefunctions). The analytical forms derived here for these wavefunctions enable us to also provide explicit Berry's phase calculations and a quick study of their connection to probability currents and to some recent interesting issues in elementary Quantum Mechanics and Condensed Matter Physics. As an added feature, we also show how the possible presence of an additional electric field can be treated through a further generalization of pseudomomenta and their proper handling.

  13. Query Optimization in Distributed Databases.

    DTIC Science & Technology

    1982-10-01

general, the strategy a31 a11 a 3 is more time consuming than the strategy a, a, and usually we do not use it. Since the semijoin of R.XJ> RS requires...analytic behavior of those heuristic algorithms. Although some analytic results of worst case and average case analysis are difficult to obtain, some...

  14. Gender-based generalisations in school nurses' appraisals of and interventions addressing students' mental health.

    PubMed

    Rosvall, Per-Åke; Nilsson, Stefan

    2016-08-30

    There has been an increase of reports describing mental health problems in adolescents, especially girls. School nurses play an important role in supporting young people with health problems. Few studies have considered how the nurses' gender norms may influence their discussions. To investigate this issue, semi-structured interviews focusing on school nurses' work with students who have mental health problems were conducted. Transcripts of interviews with Swedish school nurses (n = 15) from the Help overcoming pain early project (HOPE) were analysed using theories on gender as a theoretical framework and then organised into themes related to the school nurses' provision of contact and intervention. The interviewees were all women, aged between 42-63 years, who had worked as nurses for 13-45 years, and as school nurses for 2-28 years. Five worked in upper secondary schools (for students aged 16-19) and 10 in secondary schools (for students aged 12-16). The results show that school nurses more commonly associated mental health problems with girls. When the school nurses discussed students that were difficult to reach, boys in particular were mentioned. However, very few nurses mentioned specific intervention to address students' mental health problems, and all of the mentioned interventions were focused on girls. Some of the school nurses reported that it was more difficult to initiate a health dialogue with boys, yet none of the nurses had organized interventions for the boys. We conclude that generalisations can sometimes be analytically helpful, facilitating, for instance, the identification of problems in school nurses' work methods and interventions. However, the most important conclusion from our research, which applied a design that is not commonly used, is that more varied approaches, as well as a greater awareness of potential gender stereotype pitfalls, are necessary to meet the needs of diverse student groups.

  15. A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback

    PubMed Central

    Maass, Wolfgang

    2008-01-01

    Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. 
In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics. PMID:18846203
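The rule analyzed in the article is a three-factor rule: spike pairs build an STDP-shaped eligibility trace that decays until a global reward signal arrives and converts it into a weight change. A minimal sketch of that structure; all time constants, amplitudes, and spike times below are illustrative assumptions, not values from the article:

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """STDP kernel: potentiation when pre precedes post (dt = t_post - t_pre > 0),
    depression otherwise. Times in ms."""
    return a_plus * math.exp(-dt / tau) if dt > 0 else -a_minus * math.exp(dt / tau)

def rstdp_update(w, pre_post_pairs, reward, lr=0.01, tau_e=500.0, t_reward=1000.0):
    """Reward-modulated STDP: each spike pair contributes an eligibility trace
    that decays (time constant tau_e) until the delayed reward at t_reward
    gates it into an actual weight change."""
    elig = 0.0
    for t_pre, t_post in pre_post_pairs:
        decay = math.exp(-(t_reward - max(t_pre, t_post)) / tau_e)
        elig += stdp_window(t_post - t_pre) * decay
    return w + lr * reward * elig

w = rstdp_update(0.5, [(10.0, 15.0), (200.0, 190.0)], reward=1.0)
print(round(w, 4))
```

With reward = 0 the trace leaves the weight untouched, which is how the rule solves credit assignment: only synapse-local coincidences that precede a global reward get consolidated.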

  16. Present status of computational tools for maglev development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z.; Chen, S.S.; Rote, D.M.

    1991-10-01

High-speed vehicles that employ magnetic levitation (maglev) have received great attention worldwide as a means of relieving both highway and air-traffic congestion. At this time, Japan and Germany are leading the development of maglev. After fifteen years of inactivity that is attributed to technical policy decisions, the federal government of the United States has reconsidered the possibility of using maglev in the United States. The National Maglev Initiative (NMI) was established in May 1990 to assess the potential of maglev in the United States. One of the tasks of the NMI, which is also the objective of this report, is to determine the status of existing computer software that can be applied to maglev-related problems. The computational problems involved in maglev assessment, research, and development can be classified into two categories: electromagnetic and mechanical. Because most maglev problems are complicated and difficult to solve analytically, proper numerical methods are needed to find solutions. To determine the status of maglev-related software, developers and users of computer codes were surveyed. The results of the survey are described in this report. 25 refs.

17. Numerical solution of a conspicuous consumption model with constant control delay

    PubMed Central

    Huschto, Tony; Feichtinger, Gustav; Hartl, Richard F.; Kort, Peter M.; Sager, Sebastian; Seidl, Andrea

    2011-01-01

    We derive optimal pricing strategies for conspicuous consumption products in periods of recession. To that end, we formulate and investigate a two-stage economic optimal control problem that takes uncertainty of the recession period length and delay effects of the pricing strategy into account. This non-standard optimal control problem is difficult to solve analytically, and solutions depend on the variable model parameters. Therefore, we use a numerical result-driven approach. We propose a structure-exploiting direct method for optimal control to solve this challenging optimization problem. In particular, we discretize the uncertainties in the model formulation by using scenario trees and target the control delays by introduction of slack control functions. Numerical results illustrate the validity of our approach and show the impact of uncertainties and delay effects on optimal economic strategies. During the recession, delayed optimal prices are higher than the non-delayed ones. In the normal economic period, however, this effect is reversed and optimal prices with a delayed impact are smaller compared to the non-delayed case. PMID:22267871

  18. Dynamic optimization case studies in DYNOPT tool

    NASA Astrophysics Data System (ADS)

    Ozana, Stepan; Pies, Martin; Docekal, Tomas

    2016-06-01

Dynamic programming is typically applied to optimization problems. Because analytical solutions are generally very difficult to obtain, software tools are widely used. These software packages are often third-party products bound for standard simulation software tools on the market. As typical examples of such tools, TOMLAB and DYNOPT can be effectively applied to the solution of dynamic programming problems. DYNOPT is presented in this paper due to its licensing policy (free product under GPL) and simplicity of use. DYNOPT is a set of MATLAB functions for determination of an optimal control trajectory, given a description of the process, the cost to be minimized, and equality and inequality constraints, using the method of orthogonal collocation on finite elements. The actual optimal control problem is solved by complete parameterization of both the control and the state profile vectors. It is assumed that the optimized dynamic model may be described by a set of ordinary differential equations (ODEs) or differential-algebraic equations (DAEs). This collection of functions extends the capability of the MATLAB Optimization Toolbox. The paper introduces the use of DYNOPT in the field of dynamic optimization problems by means of case studies regarding chosen laboratory physical educational models.

  19. Comparative study of high-resolution shock-capturing schemes for a real gas

    NASA Technical Reports Server (NTRS)

    Montagne, J.-L.; Yee, H. C.; Vinokur, M.

    1987-01-01

Recently developed second-order explicit shock-capturing methods, in conjunction with generalized flux-vector splittings, and a generalized approximate Riemann solver for a real gas are studied. The comparisons are made on different one-dimensional Riemann (shock-tube) problems for equilibrium air with various ranges of Mach numbers, densities and pressures. Six different Riemann problems are considered. These tests provide a check on the validity of the generalized formulas, since theoretical prediction of their properties appears to be difficult because of the non-analytical form of the state equation. The numerical results in the supersonic and low-hypersonic regimes indicate that these methods provide good shock-capturing capability and that the shock resolution is only slightly affected by the state equation of equilibrium air. The difference in shock resolution between the various methods varies slightly from one Riemann problem to the other, but the overall accuracy is very similar. For the one-dimensional case, the relative efficiency in terms of operation count for the different methods is within 30%. The main difference between the methods lies in their versatility in being extended to multidimensional problems with efficient implicit solution procedures.

  20. Improved online δ18O measurements of nitrogen- and sulfur-bearing organic materials and a proposed analytical protocol

    USGS Publications Warehouse

    Qi, H.; Coplen, T.B.; Wassenaar, L.I.

    2011-01-01

    It is well known that N2 in the ion source of a mass spectrometer interferes with the CO background during the δ18O measurement of carbon monoxide. A similar problem arises with the high-temperature conversion (HTC) analysis of nitrogenous O-bearing samples (e.g. nitrates and keratins) to CO for δ18O measurement, where the sample introduces a significant N2 peak before the CO peak, making determination of accurate oxygen isotope ratios difficult. Although using a gas chromatography (GC) column longer than that commonly provided by manufacturers (0.6 m) can improve the efficiency of separation of CO and N2 and using a valve to divert nitrogen and prevent it from entering the ion source of a mass spectrometer improved measurement results, biased δ18O values could still be obtained. A careful evaluation of the performance of the GC separation column was carried out. With optimal GC columns, the δ18O reproducibility of human hair keratins and other keratin materials was better than ±0.15 ‰ (n = 5; for the internal analytical reproducibility), and better than ±0.10 ‰ (n = 4; for the external analytical reproducibility).

  1. Yeast-based biosensors: design and applications.

    PubMed

    Adeniran, Adebola; Sherer, Michael; Tyo, Keith E J

    2015-02-01

Yeast-based biosensing (YBB) is an exciting research area, as many studies have demonstrated the use of yeasts to accurately detect specific molecules. Biosensors incorporating various yeasts have been reported to detect an incredibly large range of molecules including but not limited to odorants, metals, intracellular metabolites, carcinogens, lactate, alcohols, and sugars. We review the detection strategies available for different types of analytes, as well as the wide range of output methods that have been incorporated with yeast biosensors. We group biosensors into two categories: those that are dependent upon transcription of a gene to report the detection of a desired molecule and those that are independent of this reporting mechanism. Transcription-dependent biosensors frequently depend on heterologous expression of sensing elements from non-yeast organisms, a strategy that has greatly expanded the range of molecules available for detection by YBBs. Transcription-independent biosensors circumvent the problem of sensing difficult-to-detect analytes by instead relying on yeast metabolism to generate easily detected molecules when the analyte is present. The use of yeast as the sensing element in biosensors has proven to be successful and continues to hold great promise for a variety of applications.

  2. A Fixed-point Scheme for the Numerical Construction of Magnetohydrostatic Atmospheres in Three Dimensions

    NASA Astrophysics Data System (ADS)

    Gilchrist, S. A.; Braun, D. C.; Barnes, G.

    2016-12-01

    Magnetohydrostatic models of the solar atmosphere are often based on idealized analytic solutions because the underlying equations are too difficult to solve in full generality. Numerical approaches, too, are often limited in scope and have tended to focus on the two-dimensional problem. In this article we develop a numerical method for solving the nonlinear magnetohydrostatic equations in three dimensions. Our method is a fixed-point iteration scheme that extends the method of Grad and Rubin ( Proc. 2nd Int. Conf. on Peaceful Uses of Atomic Energy 31, 190, 1958) to include a finite gravity force. We apply the method to a test case to demonstrate the method in general and our implementation in code in particular.
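
The core of the scheme described above is a fixed-point (Picard-style) iteration. As an illustration only — not the authors' code, and in one dimension rather than three — the same idea looks like this:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

root = fixed_point(math.cos, 1.0)
print(root)                                 # ~0.7390851 (x solving x = cos x)
print(abs(root - math.cos(root)) < 1e-10)   # True: a genuine fixed point
```

In the Grad-Rubin setting the analogue of `g` is the solution of a linear boundary-value problem whose source terms are frozen at the previous iterate; convergence likewise depends on that map being contractive.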

  3. The inference of atmospheric ozone using satellite nadir measurements in the 1042/cm band

    NASA Technical Reports Server (NTRS)

    Russell, J. M., III; Drayson, S. R.

    1973-01-01

    A description and detailed analysis of a technique for inferring atmospheric ozone information from satellite nadir measurements in the 1042/cm band are presented. A method is formulated for computing the emission from the lower boundary under the satellite which circumvents the difficult analytical problems caused by the presence of atmospheric clouds and the water-vapor continuum absorption. The inversion equations are expanded in terms of the eigenvectors and eigenvalues of a least-squares-solution matrix, and an analysis is performed to determine the information content of the radiance measurements. Under favorable conditions there are only two pieces of independent information available from the measurements: (1) the total ozone and (2) the altitude of the primary maximum in the ozone profile.
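
The information-content argument can be illustrated with a toy kernel (a contrived sketch, not the paper's retrieval code): count the eigenvalues of the least-squares matrix that stand above a mock noise floor.

```python
import numpy as np

n_layers, n_channels = 20, 8
z = np.linspace(0.0, 1.0, n_layers)

# Contrived weighting functions: every channel is a mix of just two smooth
# profiles, so only two directions in profile space are actually resolved.
K = np.array([np.sin(np.pi * z) + 0.01 * i * np.cos(np.pi * z)
              for i in range(n_channels)])

evals = np.linalg.eigvalsh(K.T @ K)[::-1]       # eigenvalues, descending
noise_floor = 1e-6 * evals[0]                   # mock measurement-noise level
n_independent = int(np.sum(evals > noise_floor))
print(n_independent)   # 2, echoing the "two pieces of information" result
```

Eigenvalues below the noise floor correspond to profile components the measurements cannot distinguish from noise, which is exactly why the retrieval yields only total ozone and the altitude of the primary maximum.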

  4. Thin-layer chromatography with stationary phase gradient as a method for separation of water-soluble vitamins.

    PubMed

    Cimpoiu, Claudia; Hosu, Anamaria; Puscas, Anitta

    2012-02-03

    The group of hydrophilic vitamins plays an important role in human health, and their lack or excess produces specific diseases. The analysis of these compounds is therefore indispensable for monitoring their content in pharmaceuticals and food in order to prevent some human diseases. TLC has been successfully applied to the analysis of hydrophilic vitamins, but the most difficult problem in the simultaneous analysis of all these compounds is finding an optimal stationary phase-mobile phase system, because of the differing chemical characteristics of the analytes. Structural analogues are difficult to separate in a single chromatographic run, and this is the case in investigations of hydrophilic vitamins. TLC offers the possibility of two-dimensional separations using a stationary phase gradient, achieving the highest resolution by combining two systems with different selectivity. The goal of this work was to develop a method enabling the separation of hydrophilic vitamins by TLC with an adsorbent gradient. The developed method was used to identify the water-soluble vitamins in alcoholic extracts of Hippophae rhamnoides and of Ribes nigrum. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive for analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimization for the system is a complex problem; and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems with independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) for coordinating the optimization of loosely-coupled sub-problems, each of which may be modularly formulated by different departments and solved by modular analytical services. The results demonstrate that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  6. The Analytic Hierarchy Process and Participatory Decisionmaking

    Treesearch

    Daniel L. Schmoldt; Daniel L. Peterson; Robert L. Smith

    1995-01-01

    Managing natural resource lands requires social, as well as biophysical, considerations. Unfortunately, it is extremely difficult to accurately assess and quantify changing social preferences, and to aggregate conflicting opinions held by diverse social groups. The Analytic Hierarchy Process (AHP) provides a systematic, explicit, rigorous, and robust mechanism for...
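
The AHP weighting step behind the mechanism described above can be sketched as follows (the judgment matrix is hypothetical, not from the paper): priorities are the normalized principal eigenvector of a reciprocal pairwise-comparison matrix.

```python
import numpy as np

# Hypothetical judgments: option A is 3x preferred to B and 5x to C;
# B is 2x preferred to C. The matrix is reciprocal by construction.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

w = np.ones(A.shape[0])
for _ in range(100):          # power iteration toward the principal eigenvector
    w = A @ w
    w = w / w.sum()           # normalize so the weights sum to 1

print(np.round(w, 3))         # ~[0.648, 0.230, 0.122]

# Saaty's consistency index: lambda_max near n means coherent judgments.
lambda_max = float((A @ w / w).mean())
ci = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
print(ci < 0.1)               # True for this nearly consistent matrix
```

The consistency check is what makes AHP "systematic and rigorous" for aggregating conflicting opinions: strongly inconsistent judgment matrices are flagged rather than silently averaged.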

  7. Critical and systematic evaluation of data for estimating human exposures to 2,4-dichlorophenoxyacetic acid (2,4-D) - quality and generalizability.

    PubMed

    LaKind, Judy S; Burns, Carol J; Naiman, Daniel Q; O'Mahony, Cian; Vilone, Giulia; Burns, Annette J; Naiman, Joshua S

    2017-01-01

    The herbicide 2,4-dichlorophenoxyacetic acid (2,4-D) has been commercially available since the 1940s. Despite decades of data on 2,4-D in food, air, soil, and water, as well as in humans, the quality of these data has not been comprehensively evaluated. Using selected elements of the Biomonitoring, Environmental Epidemiology, and Short-lived Chemicals (BEES-C) instrument (temporal variability, avoidance of sample contamination, analyte stability, and urinary methods of matrix adjustment), the quality of 156 publications of environmental- and biomonitoring-based 2,4-D data was examined. Few publications documented the steps taken to avoid sample contamination. Similarly, most studies did not demonstrate the stability of the analyte from sample collection to analysis. Less than half of the biomonitoring publications reported both creatinine-adjusted and unadjusted urine concentrations. The scope and detail of data needed to assess temporal variability and sources of 2,4-D varied widely across the reviewed studies. Exposures to short-lived chemicals such as 2,4-D are impacted by numerous and changing external factors including application practices and formulations. At a minimum, greater transparency in reporting of quality control measures is needed. Perhaps the greatest challenge for the exposure community is the ability to reach consensus on how to address problems specific to short-lived chemical exposures in observational epidemiology investigations. More extensive conversations are needed to advance our understanding of human exposures and enable interpretation of these data to catch up to analytical capabilities. The problems defined in this review remain exquisitely difficult to address for chemicals like 2,4-D, with short and variable environmental and physiological half-lives and with exposures impacted by numerous and changing external factors.

  8. Self-Consistent Field Theory of Gaussian Ring Polymers

    NASA Astrophysics Data System (ADS)

    Kim, Jaeup; Yang, Yong-Biao; Lee, Won Bo

    2012-02-01

    Ring polymers, being free from chain ends, have fundamental importance in understanding polymer statics and dynamics, which are strongly influenced by chain-end effects. At a glance, their theoretical treatment may not seem particularly difficult, but the absence of chain ends and the topological constraints make the problem non-trivial, which has resulted in limited success in the analytical or semi-analytical formulation of ring polymer theory. Here, I present a self-consistent field theory (SCFT) formalism of Gaussian (topologically unconstrained) ring polymers for the first time. The resulting static properties of homogeneous and inhomogeneous ring polymers are compared with random phase approximation (RPA) results. The critical point for the ring homopolymer system is exactly the same as in the linear polymer case, χN = 2, since a critical point does not depend on the local structure of polymers. The critical point for ring diblock copolymer melts is χN ≈ 17.795, which is approximately 1.7 times that of linear diblock copolymer melts, χN ≈ 10.495. The difference is due to the ring structure constraint.

  9. Enabling Analytics on Sensitive Medical Data with Secure Multi-Party Computation.

    PubMed

    Veeningen, Meilof; Chatterjea, Supriyo; Horváth, Anna Zsófia; Spindler, Gerald; Boersma, Eric; van der Spek, Peter; van der Galiën, Onno; Gutteling, Job; Kraaij, Wessel; Veugen, Thijs

    2018-01-01

    While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multi-party computation can enable such data analytics by performing analytics without the need to share the underlying data. We discuss the issue of compliance to European privacy legislation; report on three pilots bringing these techniques closer to practice; and discuss the main challenges ahead to make fully privacy-preserving data analytics in the medical sector commonplace.
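
One standard building block of secure multi-party computation (a generic sketch, not necessarily the protocol used in the pilots) is additive secret sharing, which lets, say, three hospitals compute the sum of their patient counts without any party revealing its own number:

```python
import random
random.seed(42)

P = 2**61 - 1   # all arithmetic is done modulo a public prime

def share(secret, n_parties=3):
    """Split a secret into n random shares that sum to it mod P."""
    parts = [random.randrange(P) for _ in range(n_parties - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

hospital_counts = [120, 45, 310]            # each hospital's private input
all_shares = [share(c) for c in hospital_counts]

# Party k receives the k-th share of every input and sums them locally;
# the partial sums reveal nothing individually but combine to the total.
partials = [sum(col) % P for col in zip(*all_shares)]
total = sum(partials) % P
print(total)   # 475: the joint statistic, computed without sharing inputs
```

Each individual share is uniformly random, so no single party learns anything about another's input; only the agreed aggregate is revealed.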

  10. Functional Assessment of Problem Behavior: Dispelling Myths, Overcoming Implementation Obstacles, and Developing New Lore

    PubMed Central

    2012-01-01

    Hundreds of studies have shown the efficacy of treatments for problem behavior based on an understanding of its function. Assertions regarding the legitimacy of different types of functional assessment vary substantially across published articles, and best practices regarding the functional assessment process are sometimes difficult to cull from the empirical literature or from published discussions of the behavioral assessment process. A number of myths regarding the functional assessment process, which appear to be pervasive within different behavior-analytic research and practice communities, will be reviewed in the context of an attempt to develop new lore regarding the functional assessment process. Frequently described obstacles to implementing a critical aspect of the functional assessment process, the functional analysis, will be reviewed in the context of solutions for overcoming them. Finally, the aspects of the functional assessment process that should be exported to others versus those features that should remain the sole technological property of behavior analysts will be discussed. PMID:23326630

  11. Finite Element Modeling of a Cylindrical Contact Using Hertzian Assumptions

    NASA Technical Reports Server (NTRS)

    Knudsen, Erik

    2003-01-01

    The turbine blades in the high-pressure fuel turbopump/alternate turbopump (HPFTP/AT) are subjected to hot gases rapidly flowing around them. This flow excites vibrations in the blades. Naturally, one has to worry about resonance, so a damping device was added to dissipate some energy from the system. The foundation is now laid for a very complex problem. The damper is in contact with the blade, so now there are contact stresses (both normal and tangential) to contend with. Since these stresses can be very high, it is not all that difficult to yield the material. Friction is another non-linearity and the blade is made out of a Nickel-based single-crystal superalloy that is orthotropic. A few approaches exist to solve such a problem and computer models, using contact elements, have been built with friction, plasticity, etc. These models are quite cumbersome and require many hours to solve just one load case and material orientation. A simpler approach is required. Ideally, the model should be simplified so the analysis can be conducted faster. When working with contact problems determining the contact patch and the stresses in the material are the main concerns. Closed-form solutions for non-conforming bodies, developed by Hertz, made out of isotropic materials are readily available. More involved solutions for 3-D cases using different materials are also available. The question is this: can Hertzian solutions be applied, or superimposed, to more complicated problems, like those involving anisotropic materials? That is the point of the investigation here. If these results agree with the more complicated computer models, then the analytical solutions can be used in lieu of the numerical solutions that take a very long time to process. As time goes on, the analytical solution will eventually have to include things like friction and plasticity.
The models in this report use no contact elements and are essentially an applied load problem using Hertzian assumptions to determine the contact patch dimensions.
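
The closed-form Hertz line-contact solution this kind of analysis leans on can be sketched as follows (the material numbers are illustrative, not the turbopump blade's actual properties):

```python
import math

def hertz_line_contact(load_per_length, radius, E1, nu1, E2, nu2):
    """Return (half_width, peak_pressure) for a cylinder-on-plane contact."""
    # Effective (plane-strain) contact modulus: 1/E* = sum of (1 - nu^2)/E
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
    b = math.sqrt(4 * load_per_length * radius / (math.pi * E_star))
    p0 = 2 * load_per_length / (math.pi * b)
    return b, p0

# Steel-on-steel example: 1 kN/mm line load on a 10 mm cylinder radius.
b, p0 = hertz_line_contact(load_per_length=1e6,   # N/m
                           radius=0.01,           # m
                           E1=200e9, nu1=0.3,
                           E2=200e9, nu2=0.3)
# -> half-width ~340 um, peak pressure ~1.87 GPa
print(f"half-width = {b*1e6:.1f} um, peak pressure = {p0/1e9:.2f} GPa")
```

The contact-patch half-width from this formula is what a Hertzian applied-load model uses in place of explicit contact elements; extending it to anisotropic single-crystal materials is precisely the open question the report investigates.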

  12. Eco-analytical Methodology in Environmental Problems Monitoring

    NASA Astrophysics Data System (ADS)

    Agienko, M. I.; Bondareva, E. P.; Chistyakova, G. V.; Zhironkina, O. V.; Kalinina, O. I.

    2017-01-01

    Among the problems common to all mankind, whose solutions influence the prospects of civilization, the monitoring of the ecological situation takes a very important place. Solving this problem requires a specific methodology based on eco-analytical comprehension of global issues. Eco-analytical methodology should help in searching for the optimal balance between environmental problems and accelerating scientific and technical progress. The focus of governments, corporations, scientists and nations on the production and consumption of material goods causes great damage to the environment. As a result, the activity of environmentalists develops quite spontaneously, as a complement to productive activities. Therefore, the challenge that environmental problems pose for science is the formation of eco-analytical reasoning and the monitoring of global problems common to the whole of humanity. The aim is thus to find the optimal trajectory of industrial development so as to prevent irreversible changes in the biosphere that could halt the progress of civilization.

  13. The role of light microscopy in aerospace analytical laboratories

    NASA Technical Reports Server (NTRS)

    Crutcher, E. R.

    1977-01-01

    Light microscopy has greatly reduced analytical flow time and added new dimensions to laboratory capability. Aerospace analytical laboratories are often confronted with problems involving contamination, wear, or material inhomogeneity. The detection of potential problems and the solution of those that develop necessitate the most sensitive and selective applications of sophisticated analytical techniques and instrumentation. This inevitably involves light microscopy. The microscope can characterize and often identify the cause of a problem in 5-15 minutes, with confirmatory tests generally taking less than one hour. Light microscopy has made, and will continue to make, a very significant contribution to the analytical capabilities of aerospace laboratories.

  14. Contraction of high eccentricity satellite orbits using uniformly regular KS canonical elements with oblate diurnally varying atmosphere.

    NASA Astrophysics Data System (ADS)

    Raj, Xavier James

    2016-07-01

    Accurate orbit prediction of an artificial satellite under the influence of air drag is one of the most difficult and intractable problems in orbital dynamics. The orbital decay of these satellites is controlled mainly by atmospheric drag, whose effects are difficult to determine because the atmospheric density undergoes large fluctuations. The classical Newtonian equations of motion, being nonlinear, are not suitable for long-term integration. Many transformations have emerged in the literature to stabilize the equations of motion, either to reduce the accumulation of local numerical errors, or to allow the use of large integration step sizes, or both, in the transformed space. One such transformation is the KS transformation of Kustaanheimo and Stiefel, who regularized the nonlinear Kepler equations of motion and reduced them to the linear differential equations of a harmonic oscillator of constant frequency. The method of KS total-energy element equations has proven very powerful for obtaining numerical as well as analytical solutions with respect to any type of perturbing force, as the equations are less sensitive to round-off and truncation errors. The uniformly regular KS canonical equations are a particular canonical form of the KS differential equations in which all ten KS canonical elements αi and βi are constant for unperturbed motion. These equations permit a uniform formulation of the basic laws of elliptic, parabolic and hyperbolic motion. Using these equations, analytical solutions were previously developed for short-term orbit prediction with respect to Earth's zonal harmonic terms J2, J3 and J4. They were further utilized, with the canonical forces included, to develop analytical drag theories for low-eccentricity orbits (e < 0.2) with different atmospheric models, and an analytical theory for high-eccentricity (e > 0.2) orbits assuming an oblate atmosphere only. In this paper a new non-singular analytical theory is developed for the motion of high-eccentricity satellite orbits in an oblate, diurnally varying atmosphere in terms of the uniformly regular KS canonical elements. The analytical solutions are generated up to fourth-order terms using a new independent variable and c (a small parameter dependent on the flattening of the atmosphere). Owing to symmetry, only two of the nine equations need to be solved analytically to compute the state vector and the change in energy at the end of each revolution. The theory is developed on the assumption that density is constant on the surfaces of spheroids of fixed ellipticity ɛ (equal to the Earth's ellipticity, 0.00335) whose axes coincide with the Earth's axis. Numerical experimentation with the analytical solution for a wide range of perigee heights, eccentricities and orbital inclinations has been carried out up to 100 revolutions. Comparisons with numerically integrated values show that they match quite well. The effectiveness of the present analytical solution is demonstrated by comparing the results with other analytical solutions in the literature.

  15. Educational Change in Post-Conflict Contexts: Reflections on the South African Experience 20 Years Later

    ERIC Educational Resources Information Center

    Christie, Pam

    2016-01-01

    Reflecting on South African experience, this paper develops an analytical framework using the work of Henri Lefebvre and Nancy Fraser to understand why socially just arrangements may be so difficult to achieve in post-conflict reconstruction. The paper uses Lefebvre's analytic to trace three sets of entangled practices…

  16. The bright side of being blue: Depression as an adaptation for analyzing complex problems

    PubMed Central

    Andrews, Paul W.; Thomson, J. Anderson

    2009-01-01

    Depression ranks as the primary emotional problem for which help is sought. Depressed people often have severe, complex problems, and rumination is a common feature. Depressed people often believe that their ruminations give them insight into their problems, but clinicians often view depressive rumination as pathological because it is difficult to disrupt and interferes with the ability to concentrate on other things. Abundant evidence indicates that depressive rumination involves the analysis of episode-related problems. Because analysis is time consuming and requires sustained processing, disruption would interfere with problem-solving. The analytical rumination (AR) hypothesis proposes that depression is an adaptation that evolved as a response to complex problems and whose function is to minimize disruption of rumination and sustain analysis of complex problems. It accomplishes this by giving episode-related problems priority access to limited processing resources, by reducing the desire to engage in distracting activities (anhedonia), and by producing psychomotor changes that reduce exposure to distracting stimuli. Because processing resources are limited, the inability to concentrate on other things is a tradeoff that must be made to sustain analysis of the triggering problem. The AR hypothesis is supported by evidence from many levels, including genes, neurotransmitters and their receptors, neurophysiology, neuroanatomy, neuroenergetics, pharmacology, cognition and behavior, and the efficacy of treatments. In addition, we address and provide explanations for puzzling findings in the cognitive and behavioral genetics literatures on depression. In the process, we challenge the belief that serotonin transmission is low in depression. Finally, we discuss implications of the hypothesis for understanding and treating depression. PMID:19618990

  17. Difficult relationships between parents and physicians of children with cancer: A qualitative study of parent and physician perspectives.

    PubMed

    Mack, Jennifer W; Ilowite, Maya; Taddei, Sarah

    2017-02-15

    Previous work on difficult relationships between patients and physicians has largely focused on the adult primary care setting and has typically held patients responsible for challenges. Little is known about experiences in pediatrics and more serious illness; therefore, we examined difficult relationships between parents and physicians of children with cancer. This was a cross-sectional, semistructured interview study of parents and physicians of children with cancer at the Dana-Farber Cancer Institute and Boston Children's Hospital (Boston, Mass) in longitudinal primary oncology relationships in which the parent, physician, or both considered the relationship difficult. Interviews were audiotaped, transcribed, and subjected to a content analysis. Dyadic parent and physician interviews were performed for 29 relationships. Twenty were experienced as difficult by both parents and physicians; 1 was experienced as difficult by the parent only; and 8 were experienced as difficult by the physician only. Parent experiences of difficult relationships were characterized by an impaired therapeutic alliance with physicians; physicians experienced difficult relationships as demanding. Core underlying issues included problems of connection and understanding (n = 8), confrontational parental advocacy (n = 16), mental health issues (n = 2), and structural challenges to care (n = 3). Although problems of connection and understanding often improved over time, problems of confrontational advocacy tended to solidify. Parents and physicians both experienced difficult relationships as highly distressing. Although prior conceptions of difficult relationships have held patients responsible for challenges, this study has found that difficult relationships follow several patterns. Some challenges, such as problems of connection and understanding, offer an opportunity for healing. 
However, confrontational advocacy appears especially refractory to repair; special consideration of these relationships and avenues for repairing them are needed. Cancer 2017;123:675-681. © 2016 American Cancer Society.

  18. Improving the trust in results of numerical simulations and scientific data analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappello, Franck; Constantinescu, Emil; Hovland, Paul

    This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation.
We then explore the complexity of this difficult problem, and we sketch complementary general approaches to address it. This paper does not focus on the trust that the execution will actually complete. The product of simulation or of data analytic executions is the final element of a potentially long chain of transformations, where each stage has the potential to introduce harmful corruptions. These corruptions may produce results that deviate from the user-expected accuracy without notifying the user of this deviation. There are many potential sources of corruption before and during the execution; consequently, in this white paper we do not focus on the protection of the end result after the execution.

  19. Using genetic algorithms to determine near-optimal pricing, investment and operating strategies in the electric power industry

    NASA Astrophysics Data System (ADS)

    Wu, Dongjun

    Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment, and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems are studied. Genetic algorithms (GAs, methods based on the idea of natural evolution) are evaluated as a primary means of solving complicated network problems, with respect to pricing as well as to investment and other operating decisions. New constraint-handling techniques for GAs have been studied and tested, and their application to practical non-linear optimization has been demonstrated on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions for small problems found by the proposed GA approach can be verified only by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate for any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising for solving difficult problems that are intractable by traditional analytic methods.
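
A toy sketch of the constraint-handling idea (a generic penalty approach, not the dissertation's specific techniques): the GA maximizes x + y subject to x² + y² ≤ 1, whose optimum is √2 at (1/√2, 1/√2), by penalizing infeasible candidates.

```python
import random
random.seed(1)

def fitness(ind):
    x, y = ind
    penalty = max(0.0, x * x + y * y - 1.0) * 100.0  # cost of constraint violation
    return x + y - penalty

pop = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(60)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                          # truncation selection (elitist)
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        child = tuple(0.5 * (u + v) + random.gauss(0, 0.05)  # blend + mutate
                      for u, v in zip(a, b))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))   # near (0.707, 0.707), fitness close to 1.414
```

The penalty weight trades feasibility against objective value; poor handling of exactly this trade-off is what the abstract blames for the failures of off-the-shelf GA packages.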

  20. Effects of blade-to-blade dissimilarities on rotor-body lead-lag dynamics

    NASA Technical Reports Server (NTRS)

    Mcnulty, M. J.

    1986-01-01

    Small blade-to-blade property differences are investigated to determine their effects on the behavior of a simple rotor-body system. An analytical approach is used which emphasizes the significance of these effects from the experimental point of view. It is found that the primary effect of blade-to-blade dissimilarities is the appearance of additional peaks in the frequency spectrum which are separated from the conventional response modes by multiples of the rotor speed. These additional responses are potential experimental problems because when they occur near a mode of interest they act as contaminant frequencies which can make damping measurements difficult. The effects of increased rotor-body coupling and a rotor shaft degree of freedom act to improve the situation by altering the frequency separation of the modes.

  1. Explicit solutions from eigenfunction symmetry of the Korteweg-de Vries equation.

    PubMed

    Hu, Xiao-Rui; Lou, Sen-Yue; Chen, Yong

    2012-05-01

    In nonlinear science, it is very difficult to find exact interaction solutions among solitons and other kinds of complicated waves such as cnoidal waves and Painlevé waves. Actually, even for the most well-known prototypical models such as the Korteweg-de Vries (KdV) equation and the Kadomtsev-Petviashvili (KP) equation, this kind of problem has not yet been solved. In this paper, the explicit analytic interaction solutions between solitary waves and cnoidal waves are obtained through the localization procedure of nonlocal symmetries which are related to Darboux transformation for the well-known KdV equation. The same approach also yields some other types of interaction solutions among different types of solutions such as solitary waves, rational solutions, Bessel function solutions, and/or general Painlevé II solutions.

  2. On the attenuation of sound by three-dimensionally segmented acoustic liners in a rectangular duct

    NASA Technical Reports Server (NTRS)

    Koch, W.

    1979-01-01

    Axial segmentation of acoustically absorbing liners in rectangular, circular or annular duct configurations is a very useful concept for obtaining higher noise attenuation with respect to the bandwidth of absorption as well as the maximum attenuation. As a consequence, advanced liner concepts are proposed which induce a modal energy transfer in both cross-sectional directions to further reduce the noise radiated from turbofan engines. However, these advanced liner concepts require three-dimensional geometries which are difficult to treat theoretically. A very simple three-dimensional problem is investigated analytically. The results show a strong dependence on the positioning of the liner for some incident source modes, while the effect of three-dimensional segmentation appears to be negligible over the frequency range considered.

  3. Automatic high-throughput screening of colloidal crystals using machine learning

    NASA Astrophysics Data System (ADS)

    Spellings, Matthew; Glotzer, Sharon C.

    Recent improvements in hardware and software have united to pose an interesting problem for computational scientists studying self-assembly of particles into crystal structures: while studies covering large swathes of parameter space can be dispatched at once using modern supercomputers and parallel architectures, identifying the different regions of a phase diagram is often a serial task completed by hand. While analytic methods exist to distinguish some simple structures, they can be difficult to apply, and automatic identification of more complex structures is still lacking. In this talk we describe one method to create numerical "fingerprints" of local order and use them to analyze a study of complex ordered structures. We can use these methods as first steps toward automatic exploration of parameter space and, more broadly, the strategic design of new materials.
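
One simple numerical fingerprint of local order (a standard order parameter, simpler than the method described in the talk) is the hexatic magnitude |ψ6|, which is near 1 for a particle inside a 2D hexagonal crystal and much lower for random points, so thresholding it already separates the two phases automatically:

```python
import cmath, math, random

def psi6(points, i, k=6):
    """|psi_6| computed from the k nearest neighbors of particle i (2D)."""
    x, y = points[i]
    nbrs = sorted((p for j, p in enumerate(points) if j != i),
                  key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[:k]
    s = sum(cmath.exp(6j * math.atan2(py - y, px - x)) for px, py in nbrs)
    return abs(s) / k

# Perfect triangular lattice vs. uniformly random points (144 each).
lattice = [(i + 0.5 * (j % 2), j * math.sqrt(3) / 2)
           for i in range(12) for j in range(12)]
random.seed(0)
gas = [(random.uniform(0, 12), random.uniform(0, 12)) for _ in range(144)]

gas_mean = sum(psi6(gas, i) for i in range(len(gas))) / len(gas)
print(psi6(lattice, 78))   # ~1.0 for a particle deep in the crystal
print(gas_mean < 0.7)      # True: disordered points score much lower on average
```

Richer fingerprints of the kind the talk describes generalize this idea to many structure types, so unsupervised clustering on fingerprint vectors can label phase-diagram regions without hand inspection.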

  4. Cluster randomization and political philosophy.

    PubMed

    Chwang, Eric

    2012-11-01

    In this paper, I will argue that, while the ethical issues raised by cluster randomization can be challenging, they are not new. My thesis divides neatly into two parts. In the first, easier part I argue that many of the ethical challenges posed by cluster randomized human subjects research are clearly present in other types of human subjects research, and so are not novel. In the second, more difficult part I discuss the thorniest ethical challenge for cluster randomized research--cases where consent is genuinely impractical to obtain. I argue that once again these cases require no new analytic insight; instead, we should look to political philosophy for guidance. In other words, the most serious ethical problem that arises in cluster randomized research also arises in political philosophy. © 2011 Blackwell Publishing Ltd.

  5. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    NASA Astrophysics Data System (ADS)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  6. Application of decision science to resilience management in Jamaica Bay

    USGS Publications Warehouse

    Eaton, Mitchell; Fuller, Angela K.; Johnson, Fred A.; Hare, M. P.; Stedman, Richard C.; Sanderson, E.W.; Solecki, W. D.; Waldman, J.R.; Paris, A. S.

    2016-01-01

    This book highlights the growing interest in management interventions designed to enhance the resilience of the Jamaica Bay socio-ecological system. Effective management, whether the focus is on managing biological processes or human behavior or (most likely) both, requires decision makers to anticipate how the managed system will respond to interventions (i.e., via predictions or projections). In systems characterized by many interacting components and high uncertainty, making probabilistic predictions is often difficult and requires careful thinking not only about system dynamics, but also about how management objectives are specified and the analytic method used to select the preferred action(s). Developing a clear statement of the problem(s) and articulation of management objectives is often best achieved by including input from managers, scientists and other stakeholders affected by the decision through a process of joint problem framing (Marcot and others 2012; Keeney and others 1990). Using a deliberate, coherent and transparent framework for deciding among management alternatives to best meet these objectives then ensures a greater likelihood for successful intervention. Decision science provides the theoretical and practical basis for developing this framework and applying decision analysis methods for making complex decisions under uncertainty and risk.

  7. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles.

    PubMed

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city-level planning. The optimal charging-station density is derived by minimizing the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined for various combinations of technological parameters. Daejeon city in South Korea is selected for a case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially in areas that lack information, such as exact candidate sites for charging stations and other data related to electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles.
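The ERDEC formulation itself is not given in the abstract; as a hedged sketch of how an optimal density can emerge from minimizing a total cost, consider a stylized per-area cost with an installation term growing linearly in density d and a driver-access term shrinking as 1/sqrt(d). The coefficients a and b here are invented for illustration:

```python
import math

def total_cost(d, a=100.0, b=50.0):
    """Stylized cost per unit area: station cost a*d plus driver
    access cost b/sqrt(d) (average travel distance ~ 1/sqrt(d))."""
    return a * d + b / math.sqrt(d)

def optimal_density(a=100.0, b=50.0):
    """Closed-form minimizer: setting dC/dd = 0 gives d* = (b/(2a))**(2/3)."""
    return (b / (2.0 * a)) ** (2.0 / 3.0)

d_star = optimal_density()
# the closed-form optimum beats nearby densities
assert total_cost(d_star) <= total_cost(0.9 * d_star)
assert total_cost(d_star) <= total_cost(1.1 * d_star)
```

The real model presumably carries many more regional and technological parameters, but the basic trade-off (installation cost versus access cost) has this shape.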

  8. Approximated analytical solution to an Ebola optimal control problem

    NASA Astrophysics Data System (ADS)

    Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.

    2016-11-01

    An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
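For background, the first-order machinery named here is standard optimal-control theory (generic forms, not the paper's specific Ebola system): Pontryagin's Maximum Principle requires the optimal control to maximize the Hamiltonian pointwise in time,

```latex
H(x,u,\lambda,t) = f_0(x,u,t) + \lambda^{T} f(x,u,t), \qquad
u^{*}(t) = \arg\max_{u} H\bigl(x^{*}(t), u, \lambda^{*}(t), t\bigr),
```

and the associated Euler-Lagrange stationarity condition $\frac{d}{dt}\frac{\partial L}{\partial \dot u} = \frac{\partial L}{\partial u}$ is the condition on which a first-order approximation of this kind is built.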

  9. The Difficult Patron in the Academic Library: Problem Issues or Problem Patrons?

    ERIC Educational Resources Information Center

Simmonds, Patience L.; Ingold, Jane L.

    2002-01-01

Identifies difficult patron issues in academic libraries from the librarians' perspectives and offers solutions to prevent them from becoming problems. Topics include labeling academic library users; eliminating sources of conflict between faculty and library staff; and conflicts between students and library staff. (Author/LRW)

  10. Simulation and statistics: Like rhythm and song

    NASA Astrophysics Data System (ADS)

    Othman, Abdul Rahman

    2013-04-01

Simulation has been introduced to solve problems in the form of systems. By using this technique the following two problems can be overcome. First, a problem that has an analytical solution but for which the cost of running an experiment is high in terms of money and lives. Second, a problem that exists but has no analytical solution. In the field of statistical inference the second problem is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques such as the bootstrap and permutations to form a pseudo sampling distribution that will lead to the solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was and still is being used to verify analytical solutions in inference. This paper also discusses the resampling techniques as simulation techniques. The misunderstandings about these two techniques are examined. The successful usages of both techniques are also explained.
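As a minimal sketch of the resampling idea described above (data values and tolerance are invented for illustration), a bootstrap estimate of the standard error of a mean can be checked against the classical analytical formula:

```python
import random
import statistics

def bootstrap_se(sample, n_resamples=2000, seed=42):
    """Standard error of the mean via resampling with replacement --
    the pseudo sampling distribution the text describes."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.choices(sample, k=len(sample)))
             for _ in range(n_resamples)]
    return statistics.stdev(means)

data = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4]
boot = bootstrap_se(data)
analytic = statistics.stdev(data) / len(data) ** 0.5  # classical SE formula
assert abs(boot - analytic) / analytic < 0.3  # the two estimates agree
```

For a statistic with no analytical standard-error formula (a median, a trimmed mean), the same `bootstrap_se` function applies unchanged, which is precisely the appeal of resampling for the second class of problems.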

  11. Designing for Student-Facing Learning Analytics

    ERIC Educational Resources Information Center

    Kitto, Kirsty; Lupton, Mandy; Davis, Kate; Waters, Zak

    2017-01-01

    Despite a narrative that sees learning analytics (LA) as a field that aims to enhance student learning, few student-facing solutions have emerged. This can make it difficult for educators to imagine how data can be used in the classroom, and in turn diminishes the promise of LA as an enabler for encouraging important skills such as sense-making,…

  12. The role of analytical science in natural resource decision making

    NASA Astrophysics Data System (ADS)

    Miller, Alan

    1993-09-01

    There is a continuing debate about the proper role of analytical (positivist) science in natural resource decision making. Two diametrically opposed views are evident, arguing for and against a more extended role for scientific information. The debate takes on a different complexion if one recognizes that certain kinds of problem, referred to here as “wicked” or “trans-science” problems, may not be amenable to the analytical process. Indeed, the mistaken application of analytical methods to trans-science problems may not only be a waste of time and money but also serve to hinder policy development. Since many environmental issues are trans-science in nature, then it follows that alternatives to analytical science need to be developed. In this article, the issues involved in the debate are clarified by examining the impact of the use of analytical methods in a particular case, the spruce budworm controversy in New Brunswick. The article ends with some suggestions about a “holistic” approach to the problem.

  13. Multiexponential models of (1+1)-dimensional dilaton gravity and Toda-Liouville integrable models

    NASA Astrophysics Data System (ADS)

    de Alfaro, V.; Filippov, A. T.

    2010-01-01

    We study general properties of a class of two-dimensional dilaton gravity (DG) theories with potentials containing several exponential terms. We isolate and thoroughly study a subclass of such theories in which the equations of motion reduce to Toda and Liouville equations. We show that the equation parameters must satisfy a certain constraint, which we find and solve for the most general multiexponential model. It follows from the constraint that integrable Toda equations in DG theories generally cannot appear without accompanying Liouville equations. The most difficult problem in the two-dimensional Toda-Liouville (TL) DG is to solve the energy and momentum constraints. We discuss this problem using the simplest examples and identify the main obstacles to solving it analytically. We then consider a subclass of integrable two-dimensional theories where scalar matter fields satisfy the Toda equations and the two-dimensional metric is trivial. We consider the simplest case in some detail. In this example, we show how to obtain the general solution. We also show how to simply derive wavelike solutions of general TL systems. In the DG theory, these solutions describe nonlinear waves coupled to gravity and also static states and cosmologies. For static states and cosmologies, we propose and study a more general one-dimensional TL model typically emerging in one-dimensional reductions of higher-dimensional gravity and supergravity theories. We especially attend to making the analytic structure of the solutions of the Toda equations as simple and transparent as possible.
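For orientation, the integrable equations named here have the standard light-cone forms (textbook normalizations; conventions and coupling constants vary between papers):

```latex
\partial_u \partial_v \varphi = g\, e^{2\varphi} \quad \text{(Liouville)},
\qquad
\partial_u \partial_v q_i = \exp\Bigl(\textstyle\sum_j a_{ij}\, q_j\Bigr) \quad \text{(Toda)},
```

where $a_{ij}$ is a Cartan-type matrix; the constraint discussed in the abstract governs which combinations of such exponential terms can appear together in an integrable dilaton-gravity potential.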

  14. Troubleshooting in LC-MS/MS method for determining endocannabinoid and endocannabinoid-like molecules in rat brain structures applied to assessing the brain endocannabinoid/endovanilloid system significance.

    PubMed

    Bystrowska, Beata; Smaga, Irena; Tyszka-Czochara, Małgorzata; Filip, Małgorzata

    2014-05-01

In recent years, the potential participation of endocannabinoids (eCBs) and related endocannabinoid-like molecules, including N-acylethanolamines (NAEs), in physiological and pathophysiological processes has been highlighted, whereas measurement of their levels still remains difficult. The aim of this study was to develop a bioanalytical method that would enable researchers to simultaneously quantify eCBs (anandamide - AEA and 2-arachidonoylglycerol - 2-AG) and NAEs (oleoylethanolamide or oleoylethanolamine - OEA, palmitoylethanolamide or palmitoylethanolamine - PEA and linoleoylethanolamide or linoleoylethanolamine - LEA) in the rat brain. The analytical problems encountered during analysis, and their possible solutions, are also described. The methodology for quantifying eCBs/NAEs by means of sensitive and selective liquid chromatography tandem mass spectrometry (LC-MS/MS) with electrospray positive ionization and multiple reaction monitoring (MRM) mode was developed and validated. Analytical problems with the analyzed compounds were assessed. Reasonably high precision and accuracy of the method were demonstrated in the validation process. The method is linear up to 200 ng/g for AEA, OEA, PEA and LEA and up to 100 μg/g for 2-AG, while the quantification limit reaches 0.2 ng/g and 0.8 μg/g, respectively. The simplicity and rapidity of the assay allow many samples to be analyzed on a routine basis. This article presents the new procedure applied to the analysis of brain tissues.

  15. Happy software developers solve problems better: psychological measurements in empirical software engineering

    PubMed Central

Graziotin, Daniel; Wang, Xiaofeng; Abrahamsson, Pekka

    2014-01-01

    For more than thirty years, it has been claimed that a way to improve software developers’ productivity and software quality is to focus on people and to provide incentives to make developers satisfied and happy. This claim has rarely been verified in software engineering research, which faces an additional challenge in comparison to more traditional engineering fields: software development is an intellectual activity and is dominated by often-neglected human factors (called human aspects in software engineering research). Among the many skills required for software development, developers must possess high analytical problem-solving skills and creativity for the software construction process. According to psychology research, affective states—emotions and moods—deeply influence the cognitive processing abilities and performance of workers, including creativity and analytical problem solving. Nonetheless, little research has investigated the correlation between the affective states, creativity, and analytical problem-solving performance of programmers. This article echoes the call to employ psychological measurements in software engineering research. We report a study with 42 participants to investigate the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. The results offer support for the claim that happy developers are indeed better problem solvers in terms of their analytical abilities. The following contributions are made by this study: (1) providing a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, (2) introducing and validating psychological measurements, theories, and concepts of affective states, creativity, and analytical-problem-solving skills in empirical software engineering, and (3) raising the need for studying the human factors of software engineering by employing a multidisciplinary viewpoint. 
PMID:24688866

  16. Happy software developers solve problems better: psychological measurements in empirical software engineering.

    PubMed

    Graziotin, Daniel; Wang, Xiaofeng; Abrahamsson, Pekka

    2014-01-01

    For more than thirty years, it has been claimed that a way to improve software developers' productivity and software quality is to focus on people and to provide incentives to make developers satisfied and happy. This claim has rarely been verified in software engineering research, which faces an additional challenge in comparison to more traditional engineering fields: software development is an intellectual activity and is dominated by often-neglected human factors (called human aspects in software engineering research). Among the many skills required for software development, developers must possess high analytical problem-solving skills and creativity for the software construction process. According to psychology research, affective states-emotions and moods-deeply influence the cognitive processing abilities and performance of workers, including creativity and analytical problem solving. Nonetheless, little research has investigated the correlation between the affective states, creativity, and analytical problem-solving performance of programmers. This article echoes the call to employ psychological measurements in software engineering research. We report a study with 42 participants to investigate the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. The results offer support for the claim that happy developers are indeed better problem solvers in terms of their analytical abilities. The following contributions are made by this study: (1) providing a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, (2) introducing and validating psychological measurements, theories, and concepts of affective states, creativity, and analytical-problem-solving skills in empirical software engineering, and (3) raising the need for studying the human factors of software engineering by employing a multidisciplinary viewpoint.

  17. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    PubMed Central

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-01-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713

  18. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management.

    PubMed

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-12-15

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  19. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention

    PubMed Central

    Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-01-01

    Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology—group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders’ observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of ‘common ground’ among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders’ verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve ‘common ground’ among diverse stakeholders about health data and their implications. PMID:28895928

  20. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention.

    PubMed

    Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-09-12

Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology, group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.

  1. Relationship Between Leaving Children at Home Alone and Their Mental Health: Results From the A-CHILD Study in Japan.

    PubMed

    Doi, Satomi; Fujiwara, Takeo; Isumi, Aya; Ochi, Manami; Kato, Tsuguhiko

    2018-01-01

Leaving children at home alone is considered a form of "neglect" in most developed countries. In Japan, this practice is not prohibited, probably because the country is considered to have relatively safe communities for children. The impact of leaving children at home alone on their mental health is a controversial issue, and few studies have examined it to date. The aim of this study was to examine the impact of leaving children aged 6 or 7 years at home alone on their mental health, focusing on both the positive and negative aspects; that is, resilience, difficult behavior, and prosocial behavior. Data from the Adachi Child Health Impact of Living Difficulty (A-CHILD) study were used. The caregivers of all children in the first grade in Adachi City, Tokyo, were targeted, of whom 80% completed the questionnaire (n = 4,291). Among the analytical sample, comprising those with complete data on both exposure and outcome variables (n = 4,195), 2,190 (52.2%) children had never been left at home alone, 1,581 (37.7%) children were left at home alone less than once a week, and 424 (10.1%) children were left at home alone once a week or more. Child resilience was measured using the Children's Resilient Coping Scale, and difficult behavior (emotional symptoms, conduct problems, hyperactivity/inattention, and peer relationship problems) and prosocial behavior using the Strengths and Difficulties Questionnaire. Multivariate regression analyses were performed to examine the dose-response association between leaving children at home alone and child mental health, followed by propensity-score matching as a pseudo-randomized controlled trial to reduce potential confounding. The results showed that leaving children at home alone once a week or more, but not less than once a week, was associated with total difficulties scores, especially conduct problems, hyperactivity/inattention, and peer relationship problems. These findings indicate that leaving children at home alone should be avoided in Japan, as is recommended in North America.

  2. Why Do Disadvantaged Filipino Children Find Word Problems in English Difficult?

    ERIC Educational Resources Information Center

    Bautista, Debbie; Mulligan, Joanne

    2010-01-01

    Young Filipino students are expected to solve mathematical word problems in English, a language that many encounter only in schools. Using individual interviews of 17 Filipino children, we investigated why word problems in English are difficult and the extent to which the language interferes with performance. Results indicate that children could…

  3. Common statistical and research design problems in manuscripts submitted to high-impact psychiatry journals: what editors and reviewers want authors to know.

    PubMed

    Harris, Alex H S; Reeder, Rachelle; Hyun, Jenny K

    2009-10-01

    Journal editors and statistical reviewers are often in the difficult position of catching serious problems in submitted manuscripts after the research is conducted and data have been analyzed. We sought to learn from editors and reviewers of major psychiatry journals what common statistical and design problems they most often find in submitted manuscripts and what they wished to communicate to authors regarding these issues. Our primary goal was to facilitate communication between journal editors/reviewers and researchers/authors and thereby improve the scientific and statistical quality of research and submitted manuscripts. Editors and statistical reviewers of 54 high-impact psychiatry journals were surveyed to learn what statistical or design problems they encounter most often in submitted manuscripts. Respondents completed the survey online. The authors analyzed survey text responses using content analysis procedures to identify major themes related to commonly encountered statistical or research design problems. Editors and reviewers (n=15) who handle manuscripts from 39 different high-impact psychiatry journals responded to the survey. The most commonly cited problems regarded failure to map statistical models onto research questions, improper handling of missing data, not controlling for multiple comparisons, not understanding the difference between equivalence and difference trials, and poor controls in quasi-experimental designs. The scientific quality of psychiatry research and submitted reports could be greatly improved if researchers became sensitive to, or sought consultation on frequently encountered methodological and analytic issues.

  4. Analytical and numerical investigations of bubble behavior in electric fields

    NASA Astrophysics Data System (ADS)

    Vorreiter, Janelle Orae

The behavior of gas bubbles in liquids is important in a wide range of applications. This study is motivated by a desire to understand the motion of bubbles in the absence of gravity, as in many aerospace applications. Phase-change devices, cryogenic tanks and life-support systems are some of the applications where bubbles exist in space environments. One of the main difficulties in employing devices with bubbles in zero gravity environments is the absence of a buoyancy force. The use of an electric field is found to be an effective means of replacing the buoyancy force, improving the control of bubbles in space environments. In this study, analytical and numerical investigations of bubble behavior under the influence of electric fields are performed. The problem is a difficult one in that the physics of the liquid and the electric field need to be considered simultaneously to model the dynamics of the bubble. Simplifications are required to reduce the problem to a tractable form. In this work, for the liquid and the electric field, assumptions are made which reduce the problem to one requiring only the solution of potentials in the domain of interest. Analytical models are developed using a perturbation analysis applicable for small deviations from a spherical shape. Numerical investigations are performed using a boundary integral code. A number of configurations are found to be successful in promoting bubble motion by varying properties of the electric fields. In one configuration, the natural frequencies of a bubble are excited using time-varying electric and pressure fields. The applied electric field is spatially uniform with frequencies corresponding to shape modes of the bubble. The resulting bubble velocity is related to the strength of the electric field as well as the characteristics of the applied fields. In another configuration, static non-uniform fields are used to encourage bubble motion. The resulting motion is related to the degree of non-uniformity of the applied field. Several geometries are investigated to study the relationship between electrode geometry and bubble behavior.

  5. Practical solution for control of the pre-analytical phase in decentralized clinical laboratories for meeting the requirements of the medical laboratory accreditation standard DIN EN ISO 15189.

    PubMed

    Vacata, Vladimir; Jahns-Streubel, Gerlinde; Baldus, Mirjana; Wood, William Graham

    2007-01-01

    This report was written in response to the article by Wood published recently in this journal. It describes a practical solution to the problems of controlling the pre-analytical phase in the clinical diagnostic laboratory. As an indicator of quality in the pre-analytical phase of sample processing, a target analyte was chosen which is sensitive to delay in centrifugation and/or analysis. The results of analyses of the samples sent by satellite medical practitioners were compared with those from an on-site hospital laboratory with a controllable optimized pre-analytical phase. The aim of the comparison was: (a) to identify those medical practices whose mean/median sample values significantly deviate from those of the control situation in the hospital laboratory due to the possible problems in the pre-analytical phase; (b) to aid these laboratories in the process of rectifying these problems. A Microsoft Excel-based Pre-Analytical Survey tool (PAS tool) has been developed which addresses the above mentioned problems. It has been tested on serum potassium which is known to be sensitive to delay and/or irregularities in sample treatment. The PAS tool has been shown to be one possibility for improving the quality of the analyses by identifying the sources of problems within the pre-analytical phase, thus allowing them to be rectified. Additionally, the PAS tool has an educational value and can also be adopted for use in other decentralized laboratories.
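The core comparison the PAS tool performs can be sketched as follows; the sender names and potassium values below are invented for illustration, and the real tool's statistics are more elaborate:

```python
from statistics import median

# Hypothetical serum potassium results (mmol/L) per sender; names invented.
results = {
    "hospital_lab": [4.0, 4.1, 3.9, 4.2, 4.0, 4.1],   # controlled pre-analytical phase
    "practice_A":   [4.1, 4.0, 4.2, 3.9, 4.1, 4.0],
    "practice_B":   [5.1, 5.3, 4.9, 5.2, 5.0, 5.4],   # elevated: delayed centrifugation?
}

def flag_deviating_senders(results, reference="hospital_lab", tolerance=0.5):
    """Flag senders whose median potassium deviates from the reference median
    by more than `tolerance` mmol/L, a crude stand-in for the PAS tool's
    statistical comparison."""
    ref = median(results[reference])
    return [name for name, vals in results.items()
            if name != reference and abs(median(vals) - ref) > tolerance]

print(flag_deviating_senders(results))  # → ['practice_B']
```

Potassium is a good indicator analyte because haemolysis or delayed centrifugation shifts it upwards, as in the invented `practice_B` series.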

  6. Inorganic trace analysis by mass spectrometry

    NASA Astrophysics Data System (ADS)

    Becker, Johanna Sabine; Dietze, Hans-Joachim

    1998-10-01

Mass spectrometric methods for the trace analysis of inorganic materials with their ability to provide a very sensitive multielemental analysis have been established for the determination of trace and ultratrace elements in high-purity materials (metals, semiconductors and insulators), in different technical samples (e.g. alloys, pure chemicals, ceramics, thin films, ion-implanted semiconductors), in environmental samples (waters, soils, biological and medical materials) and geological samples. Whereas such techniques as spark source mass spectrometry (SSMS), laser ionization mass spectrometry (LIMS), laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), glow discharge mass spectrometry (GDMS), secondary ion mass spectrometry (SIMS) and inductively coupled plasma mass spectrometry (ICP-MS) have multielemental capability, other methods such as thermal ionization mass spectrometry (TIMS), accelerator mass spectrometry (AMS) and resonance ionization mass spectrometry (RIMS) have been used for sensitive mono- or oligoelemental ultratrace analysis (and precise determination of isotopic ratios) in solid samples. The limits of detection for chemical elements using these mass spectrometric techniques are in the low ng g⁻¹ concentration range. The quantification of the analytical results of mass spectrometric methods is sometimes difficult due to a lack of matrix-fitted multielement standard reference materials (SRMs) for many solid samples. Therefore, owing to the simple quantification procedure of the aqueous solution, inductively coupled plasma mass spectrometry (ICP-MS) is being increasingly used for the characterization of solid samples after sample dissolution. ICP-MS is often combined with special sample introduction equipment (e.g.
flow injection, hydride generation, high performance liquid chromatography (HPLC) or electrothermal vaporization), or an off-line matrix separation and enrichment of trace impurities is used (especially for the characterization of high-purity materials and environmental samples) in order to improve the detection limits of trace elements. Furthermore, the determination of chemical elements in the trace and ultratrace concentration range is often difficult and can be disturbed by mass interferences between analyte ions and molecular ions at the same nominal mass. By applying double-focusing sector-field mass spectrometry at the required mass resolution, which separates the molecular ions from the analyte ions, it is often possible to overcome these interference problems. Commercial instrumental equipment, the capability (detection limits, accuracy, precision) and the analytical application fields of mass spectrometric methods for the determination of trace and ultratrace elements and for surface analysis are discussed.
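The interference problem above reduces to a simple figure of merit: the resolution R = m/Δm needed to separate an analyte ion from an isobaric molecular ion at the same nominal mass. The ⁵⁶Fe⁺ / ⁴⁰Ar¹⁶O⁺ pair is a textbook ICP-MS example:

```python
def required_resolution(m_analyte, m_interference):
    """Mass resolution R = m / dm needed to separate an analyte ion from an
    isobaric molecular ion at the same nominal mass."""
    dm = abs(m_analyte - m_interference)
    return m_analyte / dm

# Classic ICP-MS case: 56Fe+ (55.9349 u) interfered by 40Ar16O+ (55.9573 u).
r = required_resolution(55.9349, 55.9573)
print(f"R = {r:.0f}")  # roughly 2500, within reach of sector-field instruments
```

Quadrupole instruments (R of a few hundred) cannot make this separation, which is why the record points to double-focusing sector-field instruments.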

  7. The Students Decision Making in Solving Discount Problem

    ERIC Educational Resources Information Center

    Abdillah; Nusantara, Toto; Subanji; Susanto, Hery; Abadyo

    2016-01-01

This research examines students' decision-making processes from intuitive, analytical, and interactive perspectives. The research was carried out using a discount problem specially created to explore students' intuitive, analytical, and interactive thinking. In solving the discount problems, the researchers explored the students' decisions in determining their attitude, which…

  8. Implementation of a state-to-state analytical framework for the calculation of expansion tube flow properties

    NASA Astrophysics Data System (ADS)

    James, C. M.; Gildfind, D. E.; Lewis, S. W.; Morgan, R. G.; Zander, F.

    2018-03-01

    Expansion tubes are an important type of test facility for the study of planetary entry flow-fields, being the only type of impulse facility capable of simulating the aerothermodynamics of superorbital planetary entry conditions from 10 to 20 km/s. However, the complex flow processes involved in expansion tube operation make it difficult to fully characterise flow conditions, with two-dimensional full facility computational fluid dynamics simulations often requiring tens or hundreds of thousands of computational hours to complete. In an attempt to simplify this problem and provide a rapid flow condition prediction tool, this paper presents a validated and comprehensive analytical framework for the simulation of an expansion tube facility. It identifies central flow processes and models them from state to state through the facility using established compressible and isentropic flow relations, and equilibrium and frozen chemistry. How the model simulates each section of an expansion tube is discussed, as well as how the model can be used to simulate situations where flow conditions diverge from ideal theory. The model is then validated against experimental data from the X2 expansion tube at the University of Queensland.
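A state-to-state framework of this kind chains established compressible-flow relations from one facility state to the next. As a minimal illustration of one such building block (not the paper's full model), the normal-shock ratios for a calorically perfect gas:

```python
def normal_shock(m1, gamma=1.4):
    """Pressure, density and temperature ratios across a normal shock
    (calorically perfect gas), the kind of established compressible-flow
    relation a state-to-state expansion tube model chains together."""
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m1**2 - 1.0)
    rho_ratio = (gamma + 1.0) * m1**2 / ((gamma - 1.0) * m1**2 + 2.0)
    t_ratio = p_ratio / rho_ratio
    return p_ratio, rho_ratio, t_ratio

p, rho, t = normal_shock(10.0)   # strong shock, Mach 10
print(f"p2/p1 = {p:.1f}, rho2/rho1 = {rho:.2f}, T2/T1 = {t:.1f}")
```

The real framework replaces these frozen, perfect-gas relations with equilibrium chemistry where needed, which is where most of its complexity lies.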

  9. Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.

    PubMed

    Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F

    1995-02-01

Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and the Raman spectra of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit the local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters. The CLS method was found to perform well on noisy, shifted, and difference spectra.
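The CLS baseline in this record amounts to solving a linear least-squares problem for the component concentrations, given known pure-component spectra. A minimal sketch with invented spectra:

```python
import numpy as np

# Toy "pure-component spectra" (rows) on a 5-point wavelength grid; invented.
pure = np.array([
    [1.0, 0.8, 0.2, 0.0, 0.0],   # component 1
    [0.0, 0.3, 1.0, 0.3, 0.0],   # component 2
    [0.0, 0.0, 0.2, 0.9, 1.0],   # component 3
])

true_conc = np.array([0.5, 1.5, 0.8])
mixture = true_conc @ pure            # noise-free mixture spectrum

# Classical least squares: solve  pure.T @ c  ≈  mixture  for concentrations c.
conc, *_ = np.linalg.lstsq(pure.T, mixture, rcond=None)
print(np.round(conc, 3))              # recovers [0.5 1.5 0.8]
```

The record's caveat applies here too: CLS assumes the pure spectra are known and additive, whereas the ANNs were trained to cope with spectra that deviate from that model.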

  10. Study of a two-dimension transient heat propagation in cylindrical coordinates by means of two finite difference methods

    NASA Astrophysics Data System (ADS)

    Dumencu, A.; Horbaniuc, B.; Dumitraşcu, G.

    2016-08-01

The analytical approach to unsteady conduction heat transfer under actual conditions represents a very difficult (if not insurmountable) problem due to the issues related to finding analytical solutions for the conduction heat transfer equation. Various techniques have been developed to overcome these difficulties, among them the alternating-directions method and the decomposition method. Both are particularly suited to two-dimension heat propagation. The paper deals with both techniques in order to verify whether the results they provide are in good agreement. The studied case consists of a long hollow cylinder, and considers that the time-dependent temperature field varies in both the radial and the axial directions. The implicit technique is used in both methods and involves simultaneously solving a set of equations for all of the nodes at each time step, successively for each of the two directions. Gauss elimination is used to obtain the solution of the set, representing the nodal temperatures. The results of the two techniques show very good agreement, and since the decomposition method is easier to use in terms of computer code and running time, it appears preferable.
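The implicit sweeps described above reduce to tridiagonal systems, for which the Thomas algorithm is the standard O(n) specialization of Gauss elimination. A sketch with an invented one-dimensional conduction step:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c,
    right-hand side d) by the Thomas algorithm, the O(n) specialization of
    Gauss elimination used for the implicit sweep in each direction."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Invented example: one implicit heat-conduction step (I - r*Laplacian), r = 1
n, r = 5, 1.0
a = [0.0] + [-r] * (n - 1)
b = [1.0 + 2.0 * r] * n
c = [-r] * (n - 1) + [0.0]
t_old = [0.0, 0.0, 1.0, 0.0, 0.0]     # initial heat pulse in the middle node
t_new = thomas(a, b, c, t_old)
print([round(v, 4) for v in t_new])   # pulse diffuses symmetrically
```

In an alternating-directions scheme, one such solve is performed per grid line, first along the radial direction and then along the axial direction.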

  11. MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.

    PubMed

    Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd

    2018-07-01

Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.

  12. U-Access: a web-based system for routing pedestrians of differing abilities

    NASA Astrophysics Data System (ADS)

    Sobek, Adam D.; Miller, Harvey J.

    2006-09-01

    For most people, traveling through urban and built environments is straightforward. However, for people with physical disabilities, even a short trip can be difficult and perhaps impossible. This paper provides the design and implementation of a web-based system for the routing and prescriptive analysis of pedestrians with different physical abilities within built environments. U-Access, as a routing tool, provides pedestrians with the shortest feasible route with respect to one of three differing ability levels, namely, peripatetic (unaided mobility), aided mobility (mobility with the help of a cane, walker or crutches) and wheelchair users. U-Access is also an analytical tool that can help identify obstacles in built environments that create routing discrepancies among pedestrians with different physical abilities. This paper discusses the system design, including database, algorithm and interface specifications, and technologies for efficiently delivering results through the World Wide Web (WWW). This paper also provides an illustrative example of a routing problem and an analytical evaluation of the existing infrastructure which identifies the obstacles that pose the greatest discrepancies between physical ability levels. U-Access was evaluated by wheelchair users and route experts from the Center for Disability Services at The University of Utah, USA.
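Routing by ability level, as U-Access does, can be sketched as Dijkstra's algorithm restricted to the edges traversable at a given level. The sidewalk network below is invented for illustration, not U-Access data:

```python
import heapq

# Hypothetical sidewalk network: edges carry length (m) and the set of
# ability levels that can traverse them.
EDGES = {
    ("A", "B"): (100, {"walk", "aided", "wheelchair"}),  # ramped path
    ("B", "D"): (100, {"walk", "aided", "wheelchair"}),
    ("A", "C"): (60,  {"walk", "aided"}),                # stairs: no wheelchair
    ("C", "D"): (60,  {"walk", "aided"}),
}

def shortest_feasible_route(edges, start, goal, ability):
    """Dijkstra restricted to edges traversable at the given ability level."""
    graph = {}
    for (u, v), (w, levels) in edges.items():
        if ability in levels:
            graph.setdefault(u, []).append((v, w))
            graph.setdefault(v, []).append((u, w))
    dist, pq = {start: 0}, [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

print(shortest_feasible_route(EDGES, "A", "D", "walk"))        # 120 via stairs
print(shortest_feasible_route(EDGES, "A", "D", "wheelchair"))  # 200 via ramp
```

Comparing the two routes per origin-destination pair is exactly the kind of discrepancy analysis the paper uses to locate obstacles in the built environment.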

  13. Difficult Temperament Moderates Links between Maternal Responsiveness and Children’s Compliance and Behavior Problems in Low-Income Families

    PubMed Central

    Kochanska, Grazyna; Kim, Sanghag

    2012-01-01

    Background Research has shown that interactions between young children’s temperament and the quality of care they receive predict the emergence of positive and negative socioemotional developmental outcomes. This multi-method study addresses such interactions, using observed and mother-rated measures of difficult temperament, children’s committed, self-regulated compliance and externalizing problems, and mothers’ responsiveness in a low-income sample. Methods In 186 30-month-old children, difficult temperament was observed in the laboratory (as poor effortful control and high anger proneness), and rated by mothers. Mothers’ responsiveness was observed in lengthy naturalistic interactions at 30 and 33 months. At 40 months, children’s committed compliance and externalizing behavior problems were assessed using observations and several well-established maternal report instruments. Results Parallel significant interactions between child difficult temperament and maternal responsiveness were found across both observed and mother-rated measures of temperament. For difficult children, responsiveness had a significant effect such that those children were more compliant and had fewer externalizing problems when they received responsive care, but were less compliant and had more behavior problems when they received unresponsive care. For children with easy temperaments, maternal responsiveness and developmental outcomes were unrelated. All significant interactions reflected the diathesis-stress model. There was no evidence of differential susceptibility, perhaps due to the pervasive stress present in the ecology of the studied families. Conclusions Those findings add to the growing body of evidence that for temperamentally difficult children, unresponsive parenting exacerbates risks for behavior problems, but responsive parenting can effectively buffer risks conferred by temperament. PMID:23057713

  14. A Research Methodology for Studying What Makes Some Problems Difficult to Solve

    ERIC Educational Resources Information Center

    Gulacar, Ozcan; Fynewever, Herb

    2010-01-01

    We present a quantitative model for predicting the level of difficulty subjects will experience with specific problems. The model explicitly accounts for the number of subproblems a problem can be broken into and the difficultly of each subproblem. Although the model builds on previously published models, it is uniquely suited for blending with…
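One simple way to instantiate such a model (hypothetical parameters, not the authors' calibration) is to treat a problem as solved only when every one of its subproblems is solved, so that more subproblems or harder subproblems both lower the predicted success rate:

```python
# Hedged sketch: predicted success as the product of per-subproblem success
# probabilities. The probabilities here are invented for illustration.
def predicted_success(subproblem_probs):
    p = 1.0
    for q in subproblem_probs:
        p *= q
    return p

easy = predicted_success([0.9, 0.9])          # two easy subproblems
hard = predicted_success([0.9, 0.9, 0.6])     # one extra, harder subproblem
print(round(easy, 3), round(hard, 3))         # → 0.81 0.486
```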

  15. Analytic semigroups: Applications to inverse problems for flexible structures

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Rebnord, D. A.

    1990-01-01

    Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.

  16. Application of an integrated multi-criteria decision making AHP-TOPSIS methodology for ETL software selection.

    PubMed

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

A wide range of ETL software (Extract, Transform and Load) is available today, constituting a major investment market. Each ETL tool uses its own techniques for extracting, transforming and loading data into a data warehouse, which makes evaluating ETL software very difficult. However, choosing the right ETL software is critical to the success or failure of any Business Intelligence project. Because many factors affect the selection of ETL software, the selection process is a complex multi-criteria decision making (MCDM) problem. In this study, a decision-making methodology employing two well-known MCDM techniques, the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), is designed. AHP is used to analyze the structure of the ETL software selection problem and obtain the weights of the selected criteria; TOPSIS is then used to calculate the alternatives' ratings. An example illustrates the proposed methodology. Finally, a software prototype demonstrating both methods is implemented.
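The TOPSIS step of such a methodology can be sketched as follows; the ETL scores and the AHP-style weights below are invented for illustration:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: normalize, weight, measure distance to
    the ideal and anti-ideal solutions, return closeness scores in [0, 1]."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
    v = m * weights                                  # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Three hypothetical ETL tools scored on (functionality, support, cost);
# cost is a cost criterion, the other two are benefit criteria.
scores = np.array([[8.0, 7.0, 40.0],
                   [6.0, 9.0, 25.0],
                   [9.0, 6.0, 60.0]])
weights = np.array([0.5, 0.3, 0.2])   # e.g. derived from AHP pairwise comparisons
closeness = topsis(scores, weights, benefit=np.array([True, True, False]))
print(np.argmax(closeness))  # index of the best-ranked alternative
```

In the integrated methodology, the `weights` vector would come out of the AHP pairwise-comparison step rather than being set by hand.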

  17. Classification without labels: learning from mixed samples in high energy physics

    NASA Astrophysics Data System (ADS)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-01

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
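The key CWoLa claim, that the optimal mixture-vs-mixture classifier induces the same ordering as the optimal signal-vs-background classifier, can be checked numerically on an invented one-dimensional toy (a sketch, not the paper's quark/gluon benchmark):

```python
import numpy as np

# Signal and background densities on a 1-D feature (Gaussians, invented).
def p_sig(x): return np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
def p_bkg(x): return np.exp(-0.5 * (x + 1.0) ** 2) / np.sqrt(2 * np.pi)

def mixture(x, f):  # mixed sample with signal fraction f
    return f * p_sig(x) + (1 - f) * p_bkg(x)

x = np.linspace(-4, 4, 801)
# The optimal classifier between two mixtures (f1 > f2) is their likelihood
# ratio, and it is a monotonically increasing function of the pure
# signal/background likelihood ratio, so both induce the same ordering.
lr_mix = mixture(x, 0.7) / mixture(x, 0.3)
lr_pure = p_sig(x) / p_bkg(x)

order_mix = np.argsort(lr_mix)
order_pure = np.argsort(lr_pure)
print(bool(np.array_equal(order_mix, order_pure)))  # → True
```

This is why neither individual labels nor the exact signal fractions are needed at training time: any classifier that approximates the mixture likelihood ratio already ranks events correctly.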

  18. An introduction to the physics of high energy accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, D.A.; Syphers, J.J.

    1993-01-01

This book is an outgrowth of a course given by the authors at various universities and particle accelerator schools. It starts from the basic physics principles governing particle motion inside an accelerator, and leads to a full description of the complicated phenomena and analytical tools encountered in the design and operation of a working accelerator. The book covers acceleration and longitudinal beam dynamics, transverse motion and nonlinear perturbations, intensity dependent effects, emittance preservation methods and synchrotron radiation. These subjects encompass the core concerns of a high energy synchrotron. The authors apparently do not assume the reader has much previous knowledge about accelerator physics. Hence, they take great care to introduce the physical phenomena encountered and the concepts used to describe them. The mathematical formulae and derivations are deliberately kept at a level suitable for beginners. After mastering this course, any interested reader will not find it difficult to follow subjects of more current interest. Useful homework problems are provided at the end of each chapter. Many of the problems are based on actual activities associated with the design and operation of existing accelerators.

  19. Classification without labels: learning from mixed samples in high energy physics

    DOE PAGES

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-25

Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  20. Classification without labels: learning from mixed samples in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  1. Systematic Error in Seed Plant Phylogenomics

    PubMed Central

    Zhong, Bojian; Deusch, Oliver; Goremykin, Vadim V.; Penny, David; Biggs, Patrick J.; Atherton, Robin A.; Nikiforova, Svetlana V.; Lockhart, Peter James

    2011-01-01

    Resolving the closest relatives of Gnetales has been an enigmatic problem in seed plant phylogeny. The problem is known to be difficult because of the extent of divergence between this diverse group of gymnosperms and their closest phylogenetic relatives. Here, we investigate the evolutionary properties of conifer chloroplast DNA sequences. To improve taxon sampling of Cupressophyta (non-Pinaceae conifers), we report sequences from three new chloroplast (cp) genomes of Southern Hemisphere conifers. We have applied a site pattern sorting criterion to study compositional heterogeneity, heterotachy, and the fit of conifer chloroplast genome sequences to a general time reversible + G substitution model. We show that non-time reversible properties of aligned sequence positions in the chloroplast genomes of Gnetales mislead phylogenetic reconstruction of these seed plants. When 2,250 of the most varied sites in our concatenated alignment are excluded, phylogenetic analyses favor a close evolutionary relationship between the Gnetales and Pinaceae—the Gnepine hypothesis. Our analytical protocol provides a useful approach for evaluating the robustness of phylogenomic inferences. Our findings highlight the importance of goodness of fit between substitution model and data for understanding seed plant phylogeny. PMID:22016337

  2. Synthetic Nano- and Micromachines in Analytical Chemistry: Sensing, Migration, Capture, Delivery, and Separation.

    PubMed

    Duan, Wentao; Wang, Wei; Das, Sambeeta; Yadav, Vinita; Mallouk, Thomas E; Sen, Ayusman

    2015-01-01

    Synthetic nano- and microscale machines move autonomously in solution or drive fluid flows by converting sources of energy into mechanical work. Their sizes are comparable to analytes (sub-nano- to microscale), and they respond to signals from each other and their surroundings, leading to emergent collective behavior. These machines can potentially enable hitherto difficult analytical applications. In this article, we review the development of different classes of synthetic nano- and micromotors and pumps and indicate their possible applications in real-time in situ chemical sensing, on-demand directional transport, cargo capture and delivery, as well as analyte isolation and separation.

  3. Projective identification, self-disclosure, and the patient's view of the object: the need for flexibility.

    PubMed

    Waska, R T

    1999-01-01

    Certain patients, through projective identification and splitting mechanisms, test the boundaries of the analytic situation. These patients are usually experiencing overwhelming paranoid-schizoid anxieties and view the object as ruthless and persecutory. Using a Kleinian perspective, the author advocates greater analytic flexibility with these difficult patients who seem unable to use the standard analytic environment. The concept of self-disclosure is examined, and the author discusses certain technical situations where self-disclosure may be helpful. (The Journal of Psychotherapy Practice and Research 1999; 8:225-233)

  4. Food Adulteration in Switzerland: From 'Ravioli' over 'Springbok' to 'Disco Sushi'.

    PubMed

    Hubner, Philipp

    2016-01-01

The driving force behind food adulteration is monetary profit, and this has remained unchanged for at least the last hundred years. Food adulterations were and still are difficult to uncover because they occur mostly in an unpredictable and unexpected way. Very often food falsifiers take advantage of modern technology in such a way that food adulterations are difficult or sometimes even impossible to detect. Targets for food adulteration were and still are highly priced food items such as spirits, meat, seafood and olive oil. Although difficult to detect, food adulterations were in the past strong driving forces for the development of adequate detection methods in the official food control laboratories and for the enforcement of the food law. A very prominent example in this context is the 'Ravioli scandal' in Switzerland in the late 1970s, which showed that cheap second-class meat could be processed into products without being discovered for a long time. As a consequence, the official food control laboratories in Switzerland were reinforced with more laboratory equipment and technical staff. With the introduction of new detection principles such as DNA-based analytical methods, new kinds of food adulteration could and can be uncovered. Analytical methods have their limits, and in some cases of food fraud there are no analytical means to detect them. In such cases the examination of trade by checking of accounts is the method of choice.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.

The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for fiscal year 1988 (October 1987 through September 1988). The Analytical Chemistry Laboratory is a full-cost recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.

The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1989 (October 1988 through September 1989). The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  7. Analytical-numerical solution of a nonlinear integrodifferential equation in econometrics

    NASA Astrophysics Data System (ADS)

    Kakhktsyan, V. M.; Khachatryan, A. Kh.

    2013-07-01

    A mixed problem for a nonlinear integrodifferential equation arising in econometrics is considered. An analytical-numerical method is proposed for solving the problem. Some numerical results are presented.

  8. Recent Transonic Flutter Investigations for Wings and External Stores

    DTIC Science & Technology

    1983-01-01

and difficult method? In the early days of high-speed aircraft design, the aeroelastician realized that non-compressible aerodynamic theory and... experimental aeroelastic model program that would provide insight into the effects of Reynolds number and angle of attack on various airfoil designs regarding... investigation is carried out both experimentally and analytically. The analytic modelling will be described in a later section. The flutter calculations

  9. Use of a Mathematics Word Problem Strategy to Improve Achievement for Students with Mild Disabilities

    ERIC Educational Resources Information Center

    Taber, Mary R.

    2013-01-01

    Mathematics can be a difficult topic both to teach and to learn. Word problems specifically can be difficult for students with disabilities because they have to conceptualize what the problem is asking for, and they must perform the correct operation accurately. Current trends in mathematics instruction stem from the National Council of Teachers…

  10. New analytical solutions to the two-phase water faucet problem

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-06-17

Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial friction, analytical solutions had been previously obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for the purposes of code assessment, benchmarking and numerical verification. In our previous study, Ransom's solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom's analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the gas phase density's effect on pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the second-order spatial discretization scheme. In addition, extended Ransom's transient solutions for the gas phase velocity and pressure are derived, with the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom's solutions are also presented.
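For reference, the original Ransom transient solution that this record extends can be sketched as follows (the paper's new steady-state solutions additionally account for the gas density's effect on pressure, which this sketch omits):

```python
import math

def ransom_faucet(x, t, v0=10.0, alpha0=0.2, g=9.81):
    """Classical Ransom water-faucet transient solution (massless gas, no wall
    or interfacial friction): liquid velocity and void fraction at height x
    below the faucet at time t, for Ransom's standard initial conditions."""
    if x <= v0 * t + 0.5 * g * t * t:
        v = math.sqrt(v0 * v0 + 2.0 * g * x)     # fluid already accelerated
        alpha = 1.0 - (1.0 - alpha0) * v0 / v    # continuity: the jet thins
    else:
        v = v0 + g * t                           # undisturbed initial column
        alpha = alpha0
    return v, alpha

v, a = ransom_faucet(x=3.0, t=0.4)
print(f"v = {v:.3f} m/s, alpha = {a:.4f}")
```

Below the advancing front the liquid accelerates under gravity and the void fraction grows; above it the initial column is still uniformly accelerating, which is the discontinuity codes are benchmarked against.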

  11. A stochastic method for Brownian-like optical transport calculations in anisotropic biosuspensions and blood

    NASA Astrophysics Data System (ADS)

    Miller, Steven

    1998-03-01

    A generic stochastic method is presented that rapidly evaluates numerical bulk flux solutions to the one-dimensional integrodifferential radiative transport equation, for coherent irradiance of optically anisotropic suspensions of nonspheroidal bioparticles, such as blood. As Fermat rays or geodesics enter the suspension, they evolve into a bundle of random paths or trajectories due to scattering by the suspended bioparticles. Overall, this can be interpreted as a bundle of Markov trajectories traced out by a "gas" of Brownian-like point photons being scattered and absorbed by the homogeneous distribution of uncorrelated cells in suspension. By considering the cumulative vectorial intersections of a statistical bundle of random trajectories through sets of interior data planes in the space containing the medium, the effective equivalent information content and behavior of the (generally unknown) analytical flux solutions of the radiative transfer equation rapidly emerges. The fluxes match the analytical diffuse flux solutions in the diffusion limit, which verifies the accuracy of the algorithm. The method is not constrained by the diffusion limit and gives correct solutions for conditions where diffuse solutions are not viable. Unlike conventional Monte Carlo and numerical techniques adapted from neutron transport or nuclear reactor problems that compute scalar quantities, this vectorial technique is fast, easily implemented, adaptable, and viable for a wide class of biophotonic scenarios. By comparison, other analytical or numerical techniques generally become unwieldy, lack viability, or are more difficult to utilize and adapt. Illustrative calculations are presented for blood media at monochromatic wavelengths in the visible spectrum.
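
The trajectory-bundle idea can be illustrated with a bare-bones 1-D Monte Carlo sketch: photons take exponentially distributed steps in optical depth, survive each collision with probability equal to the single-scattering albedo, and are re-emitted with an isotropically resampled direction cosine. This is a simplification of the vectorial, anisotropic treatment in the paper; all names and parameter values below are illustrative:

```python
import math
import random

random.seed(1)

def transmit_fraction(tau, albedo, n_photons=20000):
    """Monte Carlo estimate of the transmitted flux fraction for a slab of
    optical thickness `tau`. Each photon is a Brownian-like random trajectory:
    exponentially distributed path lengths, survival probability `albedo`
    per collision, isotropic resampling of the direction cosine."""
    transmitted = 0
    for _ in range(n_photons):
        depth, mu = 0.0, 1.0                       # enter at the front face, moving inward
        while True:
            depth += -math.log(1.0 - random.random()) * mu   # next interaction point
            if depth >= tau:                       # escaped through the rear face
                transmitted += 1
                break
            if depth <= 0.0:                       # backscattered out of the front face
                break
            if random.random() > albedo:           # absorbed by a suspended particle
                break
            mu = 2.0 * random.random() - 1.0       # isotropic redistribution
    return transmitted / n_photons
```

For an optically thin, purely scattering slab most photons get through; for a thick absorbing slab almost none do, as expected physically.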

  12. An overview of gamma-hydroxybutyric acid: pharmacodynamics, pharmacokinetics, toxic effects, addiction, analytical methods, and interpretation of results.

    PubMed

    Andresen, H; Aydin, B E; Mueller, A; Iwersen-Bergmann, S

    2011-09-01

    Abuse of gamma-hydroxybutyric acid (GHB) has been known since the early 1990s, but is not as widespread as the consumption of other illegal drugs. However, the number of severe intoxications with fatal outcomes is comparatively high, not least because of the consumption of the currently legal precursor substances gamma-butyrolactone (GBL) and 1,4-butanediol (1,4-BD). Contrary to previous assumptions, addiction to GHB or its analogues can occur, with severe symptoms of withdrawal. Moreover, GHB can be used for drug-facilitated sexual assaults. Its pharmacological effects are generated mainly by interaction with both GABA(B) and GHB receptors, as well as its influence on other transmitter systems in the human brain. Numerous analytical methods for determining GHB using chromatographic techniques have been published in recent years, and an enzymatic screening method has been established. However, the short window of GHB detection in blood or urine due to its rapid metabolism is a challenge. Furthermore, despite several studies addressing this problem, evaluation of analytical results can be difficult: GHB is a metabolite of GABA (gamma-aminobutyric acid), so a differentiation between endogenous and exogenous concentrations has to be made. Apart from this, in samples with a longer storage interval, and especially in postmortem specimens, higher levels can be measured due to GHB generation during the postmortem interval or storage time. Copyright © 2011 John Wiley & Sons, Ltd.

  13. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230
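
The two-layer structure of the T-optimality criterion, an inner least-squares fit of the rival model nested inside an outer maximization over designs, can be sketched for the textbook case of discriminating a quadratic from a straight line on [-1, 1]. The design putting weights 1/4, 1/2, 1/4 on the points -1, 0, 1 is the classical T-optimal design for this pair (Atkinson and Fedorov's example); the code below illustrates the criterion only, not the SIP algorithm of the paper:

```python
import numpy as np

def t_criterion(points, weights, eta1, rival_basis):
    """Inner layer of T-optimality: fit the rival model by weighted least
    squares and return the weighted lack-of-fit sum of squares Delta(xi)."""
    x = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    y = eta1(x)
    F = rival_basis(x)                     # n x p basis matrix of the rival model
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(sw[:, None] * F, sw * y, rcond=None)
    resid = y - F @ theta                  # least favorable rival fit
    return float(np.sum(w * resid**2))

eta1 = lambda x: 1.0 + x + x**2                            # "true" model
rival = lambda x: np.stack([np.ones_like(x), x], axis=1)   # rival straight line

# Classical example: the T-optimal design for quadratic-vs-line on [-1, 1]
# puts weights 1/4, 1/2, 1/4 on the points -1, 0, 1.
d_opt = t_criterion([-1.0, 0.0, 1.0], [0.25, 0.5, 0.25], eta1, rival)
d_unif = t_criterion(np.linspace(-1.0, 1.0, 5), [0.2] * 5, eta1, rival)
```

The outer layer, which the paper attacks with semi-infinite programming, would maximize this criterion over all designs; here the classical optimal design visibly beats a uniform one.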

  14. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C

    2015-03-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way for finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method where lower and upper bounds produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization.

  15. From environmental to ecological ethics: toward a practical ethics for ecologists and conservationists.

    PubMed

    Minteer, Ben A; Collins, James P

    2008-12-01

    Ecological research and conservation practice frequently raise difficult and varied ethical questions for scientific investigators and managers, including duties to public welfare, nonhuman individuals (i.e., animals and plants), populations, and ecosystems. The field of environmental ethics has contributed much to the understanding of general duties and values to nature, but it has not developed the resources to address the diverse and often unique practical concerns of ecological researchers and managers in the field, lab, and conservation facility. The emerging field of "ecological ethics" is a practical or scientific ethics that offers a superior approach to the ethical dilemmas of the ecologist and conservation manager. Even though ecological ethics necessarily draws from the principles and commitments of mainstream environmental ethics, it is normatively pluralistic, including as well the frameworks of animal, research, and professional ethics. It is also methodologically pragmatic, focused on the practical problems of researchers and managers and informed by these problems in turn. The ecological ethics model offers environmental scientists and practitioners a useful analytical tool for identifying, clarifying, and harmonizing values and positions in challenging ecological research and management situations. Just as bioethics provides a critical intellectual and problem-solving service to the biomedical community, ecological ethics can help inform and improve ethical decision making in the ecology and conservation communities.

  16. The Efficacy of Problem-Based Learning in an Analytical Laboratory Course for Pre-Service Chemistry Teachers

    ERIC Educational Resources Information Center

    Yoon, Heojeong; Woo, Ae Ja; Treagust, David; Chandrasegaran, A. L.

    2014-01-01

    The efficacy of problem-based learning (PBL) in an analytical chemistry laboratory course was studied using a programme that was designed and implemented with 20 students in a treatment group over 10 weeks. Data from 26 students in a traditional analytical chemistry laboratory course were used for comparison. Differences in the creative thinking…

  17. An Eye Tracking Study of High- and Low-Performing Students in Solving Interactive and Analytical Problems

    ERIC Educational Resources Information Center

    Hu, Yiling; Wu, Bian; Gu, Xiaoqing

    2017-01-01

    Test results from the Program for International Student Assessment (PISA) reveal that Shanghai students performed less well in solving interactive problems (those that require uncovering necessary information) than in solving analytical problems (those having all information disclosed at the outset). Accordingly, this study investigates…

  18. Unifying Approach to Analytical Chemistry and Chemical Analysis: Problem-Oriented Role of Chemical Analysis.

    ERIC Educational Resources Information Center

    Pardue, Harry L.; Woo, Jannie

    1984-01-01

    Proposes an approach to teaching analytical chemistry and chemical analysis in which a problem to be resolved is the focus of a course. Indicates that this problem-oriented approach is intended to complement detailed discussions of fundamental and applied aspects of chemical determinations and not replace such discussions. (JN)

  19. Analytical Derivation: An Epistemic Game for Solving Mathematically Based Physics Problems

    ERIC Educational Resources Information Center

    Bajracharya, Rabindra R.; Thompson, John R.

    2016-01-01

    Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the "analytical derivation" game. This game involves deriving an…

  20. Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul

    2015-12-01

    In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is governed only by the converged second-harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm depends on the mesh size, but not on the total problem size, which is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.
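
The problem class addressed here, a neutron diffusion eigenvalue problem solved by fission-source (power) iteration, can be illustrated with a one-group bare-slab sketch using plain finite differences rather than the paper's CMFD2N scheme; the cross-section values are illustrative, chosen so the result can be checked against the analytic bare-slab eigenvalue:

```python
import numpy as np

def keff_slab(D=1.0, siga=0.1, nusigf=0.12, L=100.0, n=100):
    """Power iteration for the one-group diffusion eigenvalue problem
    -D phi'' + Sigma_a phi = (1/k) nu-Sigma_f phi on a bare slab with
    zero-flux boundaries, discretized by 3-point finite differences."""
    h = L / (n + 1)
    A = (np.diag([2.0 * D / h**2 + siga] * n)
         + np.diag([-D / h**2] * (n - 1), 1)
         + np.diag([-D / h**2] * (n - 1), -1))
    phi = np.ones(n)
    k = 1.0
    for _ in range(2000):
        phi_new = np.linalg.solve(A, nusigf * phi / k)   # fission source iteration
        k_new = k * phi_new.sum() / phi.sum()
        converged = abs(k_new - k) < 1e-12
        phi, k = phi_new, k_new
        if converged:
            break
    return k

# Analytic eigenvalue of the bare slab: k = nu-Sigma_f / (Sigma_a + D * (pi/L)^2)
k_exact = 0.12 / (0.1 + 1.0 * (np.pi / 100.0) ** 2)
```

The plain power iteration's convergence is governed by the dominance ratio of the problem; accelerating it, and proving how fast the accelerated iteration converges, is what the CMFD-type analysis in the paper is about.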

  1. New trends in astrodynamics and applications: optimal trajectories for space guidance.

    PubMed

    Azimov, Dilmurat; Bishop, Robert

    2005-12-01

    This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing analytically integrated trajectories in mission design is discussed. It is assumed that the spacecraft is equipped with power-limited propulsion and moves in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and the corresponding classes of analytical solutions are classified based on the propulsion system parameters and the performance index of the problem. The solutions are presented in a form convenient for application to escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.

  2. Difficult incidents and tutor interventions in problem-based learning tutorials.

    PubMed

    Kindler, Pawel; Grant, Christopher; Kulla, Steven; Poole, Gary; Godolphin, William

    2009-09-01

    Tutors report difficult incidents and distressing conflicts that adversely affect learning in their problem-based learning (PBL) groups. Faculty development (training) and peer support should help them to manage this. Yet our understanding of these problems and how to deal with them often seems inadequate to help tutors. The aim of this study was to categorise difficult incidents and the interventions that skilled tutors used in response, and to determine the effectiveness of those responses. Thirty experienced and highly rated tutors in our Year 1 and 2 medical curriculum took part in semi-structured interviews to: identify and describe difficult incidents; describe how they responded, and assess the success of each response. Recorded and transcribed data were analysed thematically to develop typologies of difficult incidents and interventions and compare reported success or failure. The 94 reported difficult incidents belonged to the broad categories 'individual student' or 'group dynamics'. Tutors described 142 interventions in response to these difficult incidents, categorised as: (i) tutor intervenes during tutorial; (ii) tutor gives feedback outside tutorial, or (iii) student or group intervenes. Incidents in the 'individual student' category were addressed relatively unsuccessfully (effective < 50% of the time) by response (i), but with moderate success by response (ii) and successfully (> 75% of the time) by response (iii). None of the interventions worked well when used in response to problems related to 'group dynamics'. Overall, 59% of the difficult incidents were dealt with successfully. Dysfunctional PBL groups can be highly challenging, even for experienced and skilled tutors. Within-tutorial feedback, the treatment that tutors are most frequently advised to apply, was often not effective. Our study suggests that the collective responsibility of the group, rather than of the tutor, to deal with these difficulties should be emphasised.

  3. Approximate Solution to the Angular Speeds of a Nearly-Symmetric Mass-Varying Cylindrical Body

    NASA Astrophysics Data System (ADS)

    Nanjangud, Angadh; Eke, Fidelis

    2017-06-01

    This paper examines the rotational motion of a nearly axisymmetric rocket type system with uniform burn of its propellant. The asymmetry comes from a slight difference in the transverse principal moments of inertia of the system, which then results in a set of nonlinear equations of motion even when no external torque is applied to the system. It is often difficult, or even impossible, to generate analytic solutions for such equations; closed form solutions are even more difficult to obtain. In this paper, a perturbation-based approach is employed to linearize the equations of motion and generate analytic solutions. The solutions for the variables of transverse motion are analytic and a closed-form solution to the spin rate is suggested. The solutions are presented in a compact form that permits rapid computation. The approximate solutions are then applied to the torque-free motion of a typical solid rocket system and the results are found to agree with those obtained from the numerical solution of the full non-linear equations of motion of the mass varying system.
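
In the exactly axisymmetric, constant-inertia limit the epicyclic analytic solution is elementary, and a numerical integration of Euler's equations can be checked against it, mirroring the paper's comparison of approximate and numerical solutions. The inertia values and rates below are illustrative, and the mass-varying case is not modeled:

```python
import math

def euler_rhs(w, I):
    """Torque-free Euler equations in body axes: I1 w1' = (I2 - I3) w2 w3, etc."""
    I1, I2, I3 = I
    w1, w2, w3 = w
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def integrate_rk4(w, I, dt, steps):
    """Fixed-step RK4 integration of the body-frame angular velocity."""
    for _ in range(steps):
        k1 = euler_rhs(w, I)
        k2 = euler_rhs(tuple(wi + 0.5 * dt * ki for wi, ki in zip(w, k1)), I)
        k3 = euler_rhs(tuple(wi + 0.5 * dt * ki for wi, ki in zip(w, k2)), I)
        k4 = euler_rhs(tuple(wi + dt * ki for wi, ki in zip(w, k3)), I)
        w = tuple(wi + dt * (a + 2 * b + 2 * c + d) / 6.0
                  for wi, a, b, c, d in zip(w, k1, k2, k3, k4))
    return w

# Axisymmetric case I1 = I2: the spin rate w3 is constant and the transverse
# rates trace a circle, w1 = A cos(lam t), w2 = A sin(lam t),
# with lam = (I3 - I1) / I1 * w3.
I = (1.0, 1.0, 0.6)          # transverse, transverse, spin-axis inertias
w0 = (0.1, 0.0, 2.0)         # small transverse rate, dominant spin
lam = (I[2] - I[0]) / I[0] * w0[2]
t_end = 1.0
w_num = integrate_rk4(w0, I, 1e-3, 1000)
w_exact = (0.1 * math.cos(lam * t_end), 0.1 * math.sin(lam * t_end), 2.0)
```

A slight difference between I1 and I2, as in the paper, makes the equations nonlinear and is precisely what motivates the perturbation treatment.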

  4. Does Incubation Enhance Problem Solving? A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Sio, Ut Na; Ormerod, Thomas C.

    2009-01-01

    A meta-analytic review of empirical studies that have investigated incubation effects on problem solving is reported. Although some researchers have reported increased solution rates after an incubation period (i.e., a period of time in which a problem is set aside prior to further attempts to solve), others have failed to find effects. The…

  5. Modeling visual problem solving as analogical reasoning.

    PubMed

    Lovett, Andrew; Forbus, Kenneth

    2017-01-01

    We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Projective Identification, Self-Disclosure, and the Patient's View of the Object: The Need for Flexibility

    PubMed Central

    Waska, Robert T.

    1999-01-01

    Certain patients, through projective identification and splitting mechanisms, test the boundaries of the analytic situation. These patients are usually experiencing overwhelming paranoid-schizoid anxieties and view the object as ruthless and persecutory. Using a Kleinian perspective, the author advocates greater analytic flexibility with these difficult patients who seem unable to use the standard analytic environment. The concept of self-disclosure is examined, and the author discusses certain technical situations where self-disclosure may be helpful. (The Journal of Psychotherapy Practice and Research 1999; 8:225–233) PMID:10413442

  7. Analytical derivation: An epistemic game for solving mathematically based physics problems

    NASA Astrophysics Data System (ADS)

    Bajracharya, Rabindra R.; Thompson, John R.

    2016-06-01

    Problem solving, which often involves multiple steps, is an integral part of physics learning and teaching. Using the perspective of the epistemic game, we documented a specific game that is commonly pursued by students while solving mathematically based physics problems: the analytical derivation game. This game involves deriving an equation through symbolic manipulations and routine mathematical operations, usually without any physical interpretation of the processes. This game often creates cognitive obstacles in students, preventing them from using alternative resources or better approaches during problem solving. We conducted hour-long, semi-structured, individual interviews with fourteen introductory physics students. Students were asked to solve four "pseudophysics" problems containing algebraic and graphical representations. The problems required the application of the fundamental theorem of calculus (FTC), which is one of the most frequently used mathematical concepts in physics problem solving. We show that the analytical derivation game is necessary, but not sufficient, to solve mathematically based physics problems, specifically those involving graphical representations.

  8. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.
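
As a point of reference for such discrimination problems, the optimal global measurement under the Bayes (minimum-error) criterion has the closed-form Helstrom value, which upper-bounds what any restricted measurement, e.g. a sequential one, can achieve. A small sketch of that baseline (the sequential optimization itself is not implemented here):

```python
import numpy as np

def pure(psi):
    """Density matrix of a normalized pure state."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def helstrom_success(rho0, rho1, p0=0.5):
    """Optimal (global-measurement) success probability for discriminating
    rho0 and rho1 with priors p0, 1 - p0:
    P = 1/2 * (1 + || p0*rho0 - (1-p0)*rho1 ||_1)."""
    gamma = p0 * rho0 - (1.0 - p0) * rho1
    return 0.5 * (1.0 + np.abs(np.linalg.eigvalsh(gamma)).sum())

# Two pure qubit states with overlap cos(theta): for equal priors the
# optimum is 1/2 * (1 + sin(theta)).
theta = np.pi / 6
p_succ = helstrom_success(pure([1.0, 0.0]),
                          pure([np.cos(theta), np.sin(theta)]))
```

Comparing a candidate sequential (LOCC, one-way) strategy's success probability against this global optimum quantifies the cost of the measurement restriction studied in the paper.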

  9. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    NASA Astrophysics Data System (ADS)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, supports learning through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges, chief among them a dense curriculum that leaves little room for a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to examine how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with another course. These results complement studies on integrating numerical computation into physics learning using spreadsheets.
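
The kind of computation such a course targets is exactly what a spreadsheet does: each row advances the solution by one explicit step. A Python sketch of the same row-by-row explicit Euler scheme, checked against the closed-form answer (the physical example and parameter values are illustrative):

```python
import math

def euler_cooling(T0=90.0, T_env=20.0, k=0.1, dt=0.5, steps=20):
    """Explicit Euler integration of Newton's cooling law dT/dt = -k (T - T_env).
    Each loop iteration plays the role of one spreadsheet row:
    next cell = current cell + dt * rate."""
    rows = [(0.0, T0)]
    T = T0
    for i in range(1, steps + 1):
        T += dt * (-k * (T - T_env))
        rows.append((i * dt, T))
    return rows

rows = euler_cooling()
t_end, T_end = rows[-1]
# closed-form solution for comparison: T(t) = T_env + (T0 - T_env) * exp(-k t)
T_exact = 20.0 + 70.0 * math.exp(-0.1 * t_end)
```

In a spreadsheet, the update line becomes a single cell formula copied down a column, which is why spreadsheets lower the programming barrier the abstract describes.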

  10. An Updated Review of Meat Authenticity Methods and Applications.

    PubMed

    Vlachos, Antonios; Arvanitoyannis, Ioannis S; Tserkezou, Persefoni

    2016-05-18

    Adulteration of foods is a serious economic problem concerning most foodstuffs, and in particular meat products. Since high-priced meats command premium prices, producers of meat-based products might be tempted to blend them with lower-cost meat. Moreover, the labeled meat content may not be met. Both types of adulteration are difficult to detect and lead to deterioration of product quality. For the consumer, it is of utmost importance to guarantee both authenticity and compliance with product labeling. The purpose of this article is to review the state of the art of meat authenticity testing with analytical and immunochemical methods, with a focus on geographic origin and sensory characteristics. This review is also intended to provide an overview of the various currently applied statistical analyses (multivariate analysis (MVA), such as principal component analysis, discriminant analysis, cluster analysis, etc.) and their effectiveness for meat authenticity.

  11. From the physics of interacting polymers to optimizing routes on the London Underground

    PubMed Central

    Yeung, Chi Ho; Saad, David; Wong, K. Y. Michael

    2013-01-01

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise. PMID:23898198

  12. From the physics of interacting polymers to optimizing routes on the London Underground.

    PubMed

    Yeung, Chi Ho; Saad, David; Wong, K Y Michael

    2013-08-20

    Optimizing paths on networks is crucial for many applications, ranging from subway traffic to Internet communication. Because global path optimization that takes account of all path choices simultaneously is computationally hard, most existing routing algorithms optimize paths individually, thus providing suboptimal solutions. We use the physics of interacting polymers and disordered systems to analyze macroscopic properties of generic path optimization problems and derive a simple, principled, generic, and distributed routing algorithm capable of considering all individual path choices simultaneously. We demonstrate the efficacy of the algorithm by applying it to: (i) random graphs resembling Internet overlay networks, (ii) travel on the London Underground network based on Oyster card data, and (iii) the global airport network. Analytically derived macroscopic properties give rise to insightful new routing phenomena, including phase transitions and scaling laws, that facilitate better understanding of the appropriate operational regimes and their limitations, which are difficult to obtain otherwise.

  13. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations in the aspect of analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons have become an important instrument connected to the problem of processing and modelling processes. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity to arbitrarily define the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH type network, used for modelling deformations of the geometrical axis of a steel chimney during its operation.
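
A single GMDH layer can be sketched directly from the description above: fit the standard quadratic "partial description" to every pair of inputs and rank the candidate neurons by validation error, so the network structure emerges from the data rather than being fixed in advance. The data here are synthetic, not the chimney deformation measurements of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_basis(xi, xj):
    """Basis of the classical GMDH partial description
    y ~ a + b*xi + c*xj + d*xi*xj + e*xi^2 + f*xj^2."""
    return np.stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2], axis=1)

def gmdh_layer(X, y, X_val, y_val):
    """One GMDH layer: fit the quadratic partial description for every pair
    of inputs by least squares and rank candidate neurons by validation MSE."""
    ranked = []
    for i in range(X.shape[1]):
        for j in range(i + 1, X.shape[1]):
            coef, *_ = np.linalg.lstsq(pair_basis(X[:, i], X[:, j]), y, rcond=None)
            pred = pair_basis(X_val[:, i], X_val[:, j]) @ coef
            ranked.append(((i, j), float(np.mean((y_val - pred) ** 2))))
    return sorted(ranked, key=lambda r: r[1])

# Synthetic target that depends only on inputs 0 and 1
X = rng.normal(size=(200, 4))
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
Xv = rng.normal(size=(100, 4))
yv = 1.0 + 2.0 * Xv[:, 0] - Xv[:, 1] + 0.5 * Xv[:, 0] * Xv[:, 1]
ranked = gmdh_layer(X, y, Xv, yv)
```

A full GMDH network would feed the best surviving neurons' outputs into the next layer and stop when validation error stops improving; the external validation set doing the selection is the evolutionary element the abstract refers to.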

  14. Progress toward openness, transparency, and reproducibility in cognitive neuroscience.

    PubMed

    Gilmore, Rick O; Diaz, Michele T; Wyble, Brad A; Yarkoni, Tal

    2017-05-01

    Accumulating evidence suggests that many findings in psychological science and cognitive neuroscience may prove difficult to reproduce; statistical power in brain imaging studies is low and has not improved recently; software errors in analysis tools are common and can go undetected for many years; and, a few large-scale studies notwithstanding, open sharing of data, code, and materials remain the rare exception. At the same time, there is a renewed focus on reproducibility, transparency, and openness as essential core values in cognitive neuroscience. The emergence and rapid growth of data archives, meta-analytic tools, software pipelines, and research groups devoted to improved methodology reflect this new sensibility. We review evidence that the field has begun to embrace new open research practices and illustrate how these can begin to address problems of reproducibility, statistical power, and transparency in ways that will ultimately accelerate discovery. © 2017 New York Academy of Sciences.

  15. Computational efficiency of parallel combinatorial OR-tree searches

    NASA Technical Reports Server (NTRS)

    Li, Guo-Jie; Wah, Benjamin W.

    1990-01-01

    The performance of parallel combinatorial OR-tree searches is analytically evaluated. This performance depends on the complexity of the problem to be solved, the error allowance function, the dominance relation, and the search strategies. The exact performance may be difficult to predict due to the nondeterminism and anomalies of parallelism. The authors derive the performance bounds of parallel OR-tree searches with respect to the best-first, depth-first, and breadth-first strategies, and verify these bounds by simulation. They show that a near-linear speedup can be achieved with respect to a large number of processors for parallel OR-tree searches. Using the bounds developed, the authors derive sufficient conditions for assuring that parallelism will not degrade performance and necessary conditions for allowing parallelism to have a speedup greater than the ratio of the numbers of processors. These bounds and conditions provide the theoretical foundation for determining the number of processors required to assure a near-linear speedup.

  16. Application of statistical classification methods for predicting the acceptability of well-water quality

    NASA Astrophysics Data System (ADS)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-06-01

    The application of statistical classification methods is investigated—in comparison also to spatial interpolation methods—for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict if the chloride concentration in a water well will exceed the allowable concentration so that the water is unfit for the intended use. A statistical classification algorithm achieved the best predictive performances and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems concerning hydrogeological systems that are too difficult to describe analytically or to simulate effectively.
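
A minimal sketch of the classification approach: a logistic model predicting whether the chloride limit is exceeded from two spatial predictors, trained by plain gradient descent, with synthetic data standing in for the well measurements. The paper does not specify this particular classifier, and everything below is illustrative:

```python
import math
import random

random.seed(0)

def train_logistic(data, lr=0.5, epochs=2000):
    """Batch gradient descent for logistic regression: model the probability
    that the chloride limit is exceeded given two predictors."""
    w = [0.0, 0.0, 0.0]                        # bias, weight 1, weight 2
    for _ in range(epochs):
        g = [0.0, 0.0, 0.0]
        for (x1, x2), label in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))
            err = p - label
            g[0] += err
            g[1] += err * x1
            g[2] += err * x2
        for j in range(3):
            w[j] -= lr * g[j] / len(data)
    return w

def exceeds(w, x1, x2):
    """Classify a location as exceeding (True) or not exceeding (False)."""
    return w[0] + w[1] * x1 + w[2] * x2 > 0.0

# Synthetic wells: exceedance occupies one side of the area (illustrative only)
data = []
for _ in range(120):
    x1, x2 = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    data.append(((x1, x2), 1 if x1 + x2 > 0.0 else 0))

w = train_logistic(data)
```

The appeal of this route, as the abstract argues, is that the classifier needs only data, not a quantitative hydrogeological model of how saline water intrudes into the aquifer.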

  17. Suppressing spectral diffusion of emitted photons with optical pulses

    DOE PAGES

    Fotso, H. F.; Feiguin, A. E.; Awschalom, D. D.; ...

    2016-01-22

    In many quantum architectures the solid-state qubits, such as quantum dots or color centers, are interfaced via emitted photons. However, the frequency of photons emitted by solid-state systems exhibits slow uncontrollable fluctuations over time (spectral diffusion), creating a serious problem for implementation of photon-mediated protocols. Here we show that a sequence of optical pulses applied to the solid-state emitter can stabilize the emission line at the desired frequency. We demonstrate the efficiency, robustness, and feasibility of the method analytically and numerically. Taking the nitrogen-vacancy center in diamond as an example, we show that only a few pulses, with a width of 1 ns and separated by a few ns (which is not difficult to achieve), can suppress spectral diffusion. As a result, our method provides a simple and robust way to greatly improve the efficiency of photon-mediated entanglement and/or coupling to photonic cavities for solid-state qubits.
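    A crude numerical caricature (not the paper's pulse dynamics) of why periodic corrective pulses narrow an emission line: model the detuning as a random walk and let each "pulse" re-center it, then compare the RMS spread:

```python
import random

def detuning_trace(steps, pulse_every=None, seed=1):
    """Random-walk detuning of an emitter line; an optional periodic
    'pulse' resets the detuning to the target frequency.  A toy stand-in
    for the pulse-stabilization idea, not the actual emitter physics."""
    random.seed(seed)
    f, trace = 0.0, []
    for t in range(1, steps + 1):
        f += random.gauss(0.0, 1.0)      # one spectral-diffusion step
        if pulse_every and t % pulse_every == 0:
            f = 0.0                      # pulse re-centers the line
        trace.append(f)
    return trace

rms = lambda xs: (sum(x * x for x in xs) / len(xs)) ** 0.5
free = rms(detuning_trace(1000))                   # line wanders freely
pulsed = rms(detuning_trace(1000, pulse_every=5))  # line stays near target
```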

  18. On nonlinear thermo-electro-elasticity.

    PubMed

    Mehnert, Markus; Hossain, Mokarram; Steinmann, Paul

    2016-06-01

    Electro-active polymers (EAPs) for large actuations are nowadays well-known and promising candidates for producing sensors, actuators and generators. In general, polymeric materials are sensitive to differential temperature histories. During experimental characterizations of EAPs under electro-mechanically coupled loads, it is difficult to maintain constant temperature not only because of an external differential temperature history but also because of the changes in internal temperature caused by the application of high electric loads. In this contribution, a thermo-electro-mechanically coupled constitutive framework is proposed based on the total energy approach. Departing from relevant laws of thermodynamics, thermodynamically consistent constitutive equations are formulated. To demonstrate the performance of the proposed thermo-electro-mechanically coupled framework, a frequently used non-homogeneous boundary-value problem, i.e. the extension and inflation of a cylindrical tube, is solved analytically. The results illustrate the influence of various thermo-electro-mechanical couplings.

  19. On nonlinear thermo-electro-elasticity

    PubMed Central

    Mehnert, Markus; Hossain, Mokarram

    2016-01-01

    Electro-active polymers (EAPs) for large actuations are nowadays well-known and promising candidates for producing sensors, actuators and generators. In general, polymeric materials are sensitive to differential temperature histories. During experimental characterizations of EAPs under electro-mechanically coupled loads, it is difficult to maintain constant temperature not only because of an external differential temperature history but also because of the changes in internal temperature caused by the application of high electric loads. In this contribution, a thermo-electro-mechanically coupled constitutive framework is proposed based on the total energy approach. Departing from relevant laws of thermodynamics, thermodynamically consistent constitutive equations are formulated. To demonstrate the performance of the proposed thermo-electro-mechanically coupled framework, a frequently used non-homogeneous boundary-value problem, i.e. the extension and inflation of a cylindrical tube, is solved analytically. The results illustrate the influence of various thermo-electro-mechanical couplings. PMID:27436985

  20. High molecular weight non-polar hydrocarbons as pure model substances and in motor oil samples can be ionized without fragmentation by atmospheric pressure chemical ionization mass spectrometry.

    PubMed

    Hourani, Nadim; Kuhnert, Nikolai

    2012-10-15

    High molecular weight non-polar hydrocarbons are still difficult to detect by mass spectrometry. Although several studies have targeted this problem, the lack of good self-ionization has limited the ability of mass spectrometry to examine these hydrocarbons. Failure to control ion generation in the atmospheric pressure chemical ionization (APCI) source hampers the detection of intact stable gas-phase ions of non-polar hydrocarbons in mass spectrometry. Seventeen non-volatile non-polar hydrocarbons, reported to be difficult to ionize, were examined by an optimized APCI methodology using nitrogen as the reagent gas. All these analytes were successfully ionized as abundant and intact stable [M-H](+) ions without the use of any derivatization or adduct chemistry and without significant fragmentation. Application of the method to real-life hydrocarbon mixtures like light shredder waste and car motor oil was demonstrated. Despite numerous reports to the contrary, it is possible to ionize high molecular weight non-polar hydrocarbons by APCI, omitting the use of additives. This finding represents a significant step towards extending the applicability of mass spectrometry to non-polar hydrocarbon analyses in crude oil, petrochemical products, waste or food. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Striving toward useful interpretation while managing countertransference enactments: encounters with a thick-skinned narcissistic person.

    PubMed

    Waska, Robert

    2011-09-01

    Narcissistic patients tend to push the analyst to work harder than usual to contain, understand, translate, and utilize their countertransference states. This is because of the unusually extreme reliance on denial, devaluation, projective identification, and control that these individuals exhibit. Defenses against loss, envy, greed, and dependence create difficult transference states in which symbolic or creative material is flattened, stripped, and neutralized. Feelings are out of the question. This clinical paper explores the narcissistic lack of connection to self and other that endures in the transference as well as in all aspects of these patients' lives. With thick-skinned narcissistic patients, there is a subtle lack of engagement, an underbelly of control, and a complete uncoupling of feeling or link between self and object. Envy is often a cornerstone of such difficult clinical problems and is part of an internal desolation that fuels an emotional firebombing of any awareness of interest in self or other. Detailed case material is used to show how confusing, alarming, and demanding such narcissistic patients can be, trying the very essence of the analytic process. They enter treatment looking for help, wanting a quick fix to their suffering, but resist the deeper understanding, learning, and change that psychoanalytic treatment offers.

  2. Microplastic pollution in China's inland water systems: A review of findings, methods, characteristics, effects, and management.

    PubMed

    Zhang, Kai; Shi, Huahong; Peng, Jinping; Wang, Yinghui; Xiong, Xiong; Wu, Chenxi; Lam, Paul K S

    2018-07-15

    The pollution of marine environments and inland waters by plastic debris has raised increasing concerns worldwide in recent years. China is the world's largest developing country and the largest plastic producer. In this review, we gather available information on microplastic pollution in China's inland water systems. The results show that microplastics are ubiquitous in the investigated inland water systems, and high microplastic abundances were observed in developed areas. Although similar sampling and analytical methods were used for microplastic research in inland water and marine systems, methods of investigation should be standardized in the future. The characteristics of the detected microplastics suggest secondary sources as their major sources. The biological and ecological effects of microplastics have been demonstrated, but their risks are difficult to determine at this stage due to the discrepancy between the field-collected microplastics and microplastics used in ecotoxicological studies. Although many laws and regulations have already been established to manage and control plastic waste in China, the implementation of these laws and regulations has been ineffective and sometimes difficult. Several research priorities are identified, and we suggest that the Chinese government should be more proactive in tackling plastic pollution problems to protect the environment and fulfill international responsibilities. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Graczyk, D.G.

    The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1991 (October 1990 through September 1991). This is the eighth annual report for the ACL. The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of analytical chemistry support services to the scientific and engineering programs at ANL. In addition, the ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques.

  4. Re-thinking barriers to organizational change in public hospitals.

    PubMed

    Edwards, Nigel; Saltman, Richard B

    2017-01-01

    Public hospitals are well known to be difficult to reform. This paper provides a comprehensive six-part analytic framework that can help policymakers and managers better shape their organizational and institutional behavior. The paper first describes three separate structural characteristics which, together, inhibit effective problem description and policy design for public hospitals. These three structural constraints are i) the dysfunctional characteristics found in most organizations, ii) the particular dysfunctions of professional health sector organizations, and iii) the additional dysfunctional dimensions of politically managed organizations. While the problems in each of these three dimensions of public hospital organization are well-known, and the first two dimensions clearly affect private as well as publicly run hospitals, insufficient attention has been paid to the combined impact of all three factors in making public hospitals particularly difficult to manage and steer. Further, these three structural dimensions interact in an institutional environment defined by three restrictive context limitations, again two of which also affect private hospitals but all three of which compound the management dilemmas in public hospitals. The first contextual limitation is the inherent complexity of delivering high quality, safe, and affordable modern inpatient care in a hospital setting. The second contextual limitation is a set of specific market failures in public hospitals, which limit the scope of the standard financial incentives and reform measures. The third and last contextual limitation is the unique problem of generalized and localized anxiety, which accompanies the delivery of medical services, and which suffuses decision-making on the part of patients, medical staff, hospital management, and political actors alike. This combination of six institutional characteristics - three structural dimensions and three contextual dimensions - can help explain why public hospitals are different in character from other parts of the public sector, and the scale of the challenge they present to political decision-makers.

  5. Rosetta Structure Prediction as a Tool for Solving Difficult Molecular Replacement Problems.

    PubMed

    DiMaio, Frank

    2017-01-01

    Molecular replacement (MR), a method for solving the crystallographic phase problem using phases derived from a model of the target structure, has proven extremely valuable, accounting for the vast majority of structures solved by X-ray crystallography. However, when the resolution of data is low, or the starting model is very dissimilar to the target protein, solving structures via molecular replacement may be very challenging. In recent years, protein structure prediction methodology has emerged as a powerful tool in model building and model refinement for difficult molecular replacement problems. This chapter describes some of the tools available in Rosetta for model building and model refinement specifically geared toward difficult molecular replacement cases.

  6. Quasi-linear diffusion coefficients for highly oblique whistler mode waves

    NASA Astrophysics Data System (ADS)

    Albert, J. M.

    2017-05-01

    Quasi-linear diffusion coefficients are considered for highly oblique whistler mode waves, which exhibit a singular "resonance cone" in cold plasma theory. The refractive index becomes both very large and rapidly varying as a function of wave parameters, making the diffusion coefficients difficult to calculate and to characterize. Since such waves have been repeatedly observed both outside and inside the plasmasphere, this problem has received renewed attention. Here the diffusion equations are analytically treated in the limit of large refractive index μ. It is shown that a common approximation to the refractive index allows the associated "normalization integral" to be evaluated in closed form and that this can be exploited in the numerical evaluation of the exact expression. The overall diffusion coefficient formulas for large μ are then reduced to a very simple form, and the remaining integral and sum over resonances are approximated analytically. These formulas are typically written for a modeled distribution of wave magnetic field intensity, but this may not be appropriate for highly oblique whistlers, which become quasi-electrostatic. Thus, the analysis is also presented in terms of wave electric field intensity. The final results depend strongly on the maximum μ (or μ∥) used to model the wave distribution, so realistic determination of these limiting values becomes paramount.

  7. Clustervision: Visual Supervision of Unsupervised Clustering.

    PubMed

    Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam

    2018-01-01

    Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice, it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large number of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.
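    The core loop Clustervision automates — run several clusterings, score each with quality metrics, and rank them — can be sketched in toy form (one-dimensional data, plain k-means, and a single silhouette metric; the real system uses many algorithms and five metrics):

```python
# Hypothetical 1-D data with three well-separated groups.
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7, 9.0, 9.2, 8.8]

def kmeans(xs, k, iters=25):
    """Plain k-means with a deterministic init (evenly spaced picks from
    the sorted data) so this toy example is reproducible."""
    s = sorted(xs)
    cents = [s[i * (len(s) - 1) // (k - 1)] for i in range(k)]
    groups = [list(xs)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - cents[j]))].append(x)
        cents = [sum(g) / len(g) if g else cents[j]
                 for j, g in enumerate(groups)]
    return groups

def silhouette(groups):
    """Mean silhouette score; singleton clusters contribute 0."""
    pts = [(x, gi) for gi, g in enumerate(groups) for x in g]
    score = 0.0
    for x, gi in pts:
        own = groups[gi]
        if len(own) == 1:
            continue
        a = sum(abs(x - y) for y in own) / (len(own) - 1)
        b = min(sum(abs(x - y) for y in g) / len(g)
                for gj, g in enumerate(groups) if gj != gi and g)
        score += (b - a) / max(a, b)
    return score / len(pts)

# Rank the candidate clusterings by quality, as the tool does at scale.
scores = {k: silhouette(kmeans(data, k)) for k in (2, 3, 4)}
best_k = max(scores, key=scores.get)
```

    On this data the three-cluster partition scores highest, so it would be ranked first; the visual interface then lets the analyst inspect and override that ranking.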

  8. Reliable quantification of phthalates in environmental matrices (air, water, sludge, sediment and soil): a review.

    PubMed

    Net, Sopheak; Delmont, Anne; Sempéré, Richard; Paluselli, Andrea; Ouddane, Baghdad

    2015-05-15

    Because of their widespread application, phthalates or phthalic acid esters (PAEs) are ubiquitous in the environment. Their presence has attracted considerable attention due to their potential impacts on ecosystem functioning and on public health, so their quantification has become a necessity. Various extraction procedures as well as gas/liquid chromatography and mass spectrometry detection techniques have been found suitable for reliable detection of such compounds. However, PAEs are ubiquitous in the laboratory environment, including ambient air, reagents, sampling equipment, and various analytical devices, which makes the analysis of real samples with a low PAE background difficult. Therefore, accurate PAE analysis in environmental matrices is a challenging task. This paper reviews the extensive literature data on the techniques for PAE quantification in natural media. Sampling, sample extraction/pretreatment and detection for quantifying PAEs in different environmental matrices (air, water, sludge, sediment and soil) have been reviewed and compared. The concept of "green analytical chemistry" for PAE determination is also discussed. Moreover, useful information about the material preparation and the procedures of quality control and quality assurance are presented to overcome the problem of sample contamination and those encountered due to matrix effects, in order to avoid overestimating PAE concentrations in the environment. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Others' Anger Makes People Work Harder Not Smarter: The Effect of Observing Anger and Sarcasm on Creative and Analytic Thinking

    ERIC Educational Resources Information Center

    Miron-Spektor, Ella; Efrat-Treister, Dorit; Rafaeli, Anat; Schwarz-Cohen, Orit

    2011-01-01

    The authors examine whether and how observing anger influences thinking processes and problem-solving ability. In 3 studies, the authors show that participants who listened to an angry customer were more successful in solving analytic problems, but less successful in solving creative problems compared with participants who listened to an…

  10. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover we prove that it belongs to the class C^1.
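    The interior solution follows from the Lagrange conditions; a box constraint, when binding, clamps that input, and the other input is recovered from the production constraint. The sketch below illustrates this for two inputs and an upper bound on x1 only (a simplification of the paper's general result; all parameter values are hypothetical):

```python
def cobb_douglas_cost(w1, w2, a, b, Q, x1_max=float('inf')):
    """Minimize w1*x1 + w2*x2 subject to x1**a * x2**b == Q and x1 <= x1_max.
    Interior optimum from the first-order conditions:
        x2/x1 = (b*w1)/(a*w2)  =>  x1 = (Q * (a*w2/(b*w1))**b)**(1/(a+b))
    If the bound binds, clamp x1 and solve x2 from the constraint."""
    x1 = (Q * (a * w2 / (b * w1)) ** b) ** (1.0 / (a + b))
    if x1 > x1_max:
        x1 = x1_max
    x2 = (Q / x1 ** a) ** (1.0 / b)
    return w1 * x1 + w2 * x2, x1, x2

# Unconstrained: for a = b = 1/2 the minimal cost is 2*sqrt(w1*w2)*Q.
cost, x1, x2 = cobb_douglas_cost(2.0, 1.0, 0.5, 0.5, 4.0)
# Binding bound: x1 clamped at 2, x2 rises to keep output at Q.
cost_bound, x1_bound, _ = cobb_douglas_cost(2.0, 1.0, 0.5, 0.5, 4.0, x1_max=2.0)
```

    The clamped solution is necessarily at least as costly as the interior one, which is the qualitative content of the box constraint.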

  11. Urban Mathematics Teacher Retention

    ERIC Educational Resources Information Center

    Hamdan, Kamal

    2010-01-01

    Mathematics teachers are both more difficult to attract and more difficult to retain than social sciences teachers. This fact is not unique to the United States; it is reported as being a problem in Europe as well (Howson, 2002). In the United States, however, the problem is particularly preoccupying. Because of the chronic teacher shortages and…

  12. Baccalaureate Student Perceptions of Challenging Family Problems: Building Bridges to Acceptance

    ERIC Educational Resources Information Center

    Floyd, Melissa; Gruber, Kenneth J.

    2011-01-01

    This study explored the attitudes of 147 undergraduate social work majors to working with difficult families. Students indicated which problems (from a list of 42, including hot topics such as homosexuality, transgender issues, abortion, and substance abuse) they believed they would find most difficult to work with and provided information…

  13. An Analytical Planning Model to Estimate the Optimal Density of Charging Stations for Electric Vehicles

    PubMed Central

    Ahn, Yongjun; Yeo, Hwasoo

    2015-01-01

    The charging infrastructure location problem is becoming more significant due to the extensive adoption of electric vehicles. Efficient charging station planning can solve deeply rooted problems, such as driving-range anxiety and the stagnation of new electric vehicle consumers. In the initial stage of introducing electric vehicles, the allocation of charging stations is difficult to determine due to the uncertainty of candidate sites and unidentified charging demands, which are determined by diverse variables. This paper introduces the Estimating the Required Density of EV Charging (ERDEC) stations model, which is an analytical approach to estimating the optimal density of charging stations for certain urban areas, which are subsequently aggregated to city level planning. The optimal charging-station density is derived to minimize the total cost. A numerical study is conducted to obtain the correlations among the various parameters in the proposed model, such as regional parameters, technological parameters and coefficient factors. To investigate the effect of technological advances, the corresponding changes in the optimal density and total cost are also examined by various combinations of technological parameters. Daejeon city in South Korea is selected for the case study to examine the applicability of the model to real-world problems. With real taxi trajectory data, the optimal density map of charging stations is generated. These results can provide the optimal number of chargers for driving without driving-range anxiety. In the initial planning phase of installing charging infrastructure, the proposed model can be applied to a relatively extensive area to encourage the usage of electric vehicles, especially areas that lack information, such as exact candidate sites for charging stations and other data related with electric vehicles. The methods and results of this paper can serve as a planning guideline to facilitate the extensive adoption of electric vehicles. PMID:26575845
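    The density-versus-cost trade-off such a model formalizes can be illustrated with a deliberately simplified cost function (hypothetical cost terms, not the ERDEC model's): building more stations per unit area costs more, while drivers' access cost falls with density, so the optimum balances the two.

```python
import math

def total_cost(d, c_station, c_access):
    """Toy per-area cost: construction grows with density d, while the
    access cost of reaching a charger shrinks with it.  Hypothetical terms."""
    return c_station * d + c_access / d

def optimal_density(c_station, c_access):
    """Closed-form minimizer of the toy cost: d* = sqrt(c_access/c_station)."""
    return math.sqrt(c_access / c_station)

d_star = optimal_density(100.0, 400.0)   # 2.0 stations per unit area
```

    Setting the derivative c_station - c_access/d**2 to zero gives the square-root rule; the paper's model adds regional and technological parameters on top of this basic balance.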

  14. Working with the 'difficult' patient: the use of a contextual cognitive-analytic therapy based training in improving team function in a routine psychiatry service setting.

    PubMed

    Caruso, Rosangela; Biancosino, Bruno; Borghi, Cristiana; Marmai, Luciana; Kerr, Ian B; Grassi, Luigi

    2013-12-01

    The clinical management of 'difficult' patients is a major challenge which exposes mental health teams to an increased risk of frustration and stress and may lead to professional burnout. The aim of the present study was to investigate whether a cognitive-analytic therapy (CAT) based training undertaken by a mental health team working with 'difficult' patients reduced professional burnout symptoms, improved patients' service engagement and increased the levels of team cohesion. Twelve mental health staff members from different professional and educational backgrounds took part in five 2-hour sessions providing a basic CAT training intervention, an integrative and relational model of psychotherapy for the treatment of borderline personality disorders. Participants were administered the Maslach Burnout Inventory (MBI), the Service Engagement Scale (SES) and the Group Environment Questionnaire (GEQ) before (T0) and after (T1) CAT training, and at 1-month follow-up (T2). Significant decreases were found at T2 in the MBI Emotional Exhaustion scores, the SES Availability subscale, and the GEQ Attraction to Group-Social and Group Integration-Social scores, while the MBI Personal Accomplishment scores increased from baseline. The results of this study suggest that a CAT-based training can facilitate team cohesion and patient engagement with a service and reduce burnout levels among mental health team members dealing with 'difficult' patients.

  15. One-dimensional QCD in thimble regularization

    NASA Astrophysics Data System (ADS)

    Di Renzo, F.; Eruzzi, G.

    2018-01-01

    QCD in 0+1 dimensions is numerically solved via thimble regularization. In the context of this toy model, a general formalism is presented for SU(N) theories. The sign problem that the theory displays is a genuine one, stemming from a (quark) chemical potential. Three stationary points are present in the original (real) domain of integration, so that contributions from all the thimbles associated with them are to be taken into account: we show how semiclassical computations can provide hints on the regions of parameter space where this is absolutely crucial. Known analytical results for the chiral condensate and the Polyakov loop are correctly reproduced: this is in particular trivial at high values of the number of flavors Nf. In this regime we notice that the single-thimble dominance scenario takes place (the dominant thimble is the one associated with the identity). At low values of Nf computations can be more difficult. It is important to stress that this is not at all a consequence of the original sign problem (not even via the residual phase). The latter is always under control, while accidental, delicate cancellations of contributions coming from different thimbles can occur in (restricted) regions of the parameter space.

  16. A simple method for finding explicit analytic transition densities of diffusion processes with general diploid selection.

    PubMed

    Song, Yun S; Steinrücken, Matthias

    2012-03-01

    The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.
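    The "standard linear algebra" at the heart of such spectral methods can be caricatured on a drastically simplified, discrete two-allele mutation chain (this toy has no selection and is not the Wright-Fisher diffusion; it only illustrates extracting long-run behavior from a transition operator):

```python
# Row-stochastic transition matrix of a hypothetical two-allele mutation
# chain: each row gives the probabilities of staying or mutating.
P = [[0.9, 0.1],
     [0.2, 0.8]]

# Power iteration: repeated application of the transition operator
# projects any start vector onto the dominant (stationary) eigenvector.
v = [0.5, 0.5]
for _ in range(200):
    v = [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]
    total = sum(v)
    v = [x / total for x in v]
```

    For this P the stationary distribution solves v·P = v, giving (2/3, 1/3); the paper's method goes much further, computing the full eigensystem of the diffusion generator to obtain transition densities at all times, not just the stationary limit.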

  17. The metal enrichment of passive galaxies in cosmological simulations of galaxy formation

    NASA Astrophysics Data System (ADS)

    Okamoto, Takashi; Nagashima, Masahiro; Lacey, Cedric G.; Frenk, Carlos S.

    2017-02-01

    Massive early-type galaxies have higher metallicities and higher ratios of α elements to iron than their less massive counterparts. Reproducing these correlations has long been a problem for hierarchical galaxy formation theory, both in semi-analytic models and cosmological hydrodynamic simulations. We show that a simulation in which gas cooling in massive dark haloes is quenched by radio-mode active galactic nuclei (AGNs) feedback naturally reproduces the observed trend between α/Fe and the velocity dispersion of galaxies, σ. The quenching occurs earlier for more massive galaxies. Consequently, these galaxies complete their star formation before α/Fe is diluted by the contribution from Type Ia supernovae. For galaxies more massive than ~10^11 M⊙, whose α/Fe correlates positively with stellar mass, we find an inversely correlated mass-metallicity relation. This is a common problem in simulations in which star formation in massive galaxies is quenched either by quasar- or radio-mode AGN feedback. The early suppression of gas cooling in progenitors of massive galaxies prevents them from recapturing enriched gas ejected as winds. Simultaneously reproducing the [α/Fe]-σ relation and the mass-metallicity relation is, thus, difficult in the current framework of galaxy formation.

  18. Odor compounds in waste gas emissions from agricultural operations and food industries.

    PubMed

    Rappert, S; Müller, R

    2005-01-01

    In the last decades, large-scale agricultural operations and food industries have increased. These operations generate numerous types of odors. The reduction of land areas available for isolation of agricultural and food processing industrial operations from the public area and the increase in sensitivity and demand of the general public for a clean and pleasant environment have forced all of these industries to control odor emissions and toxic air pollutants. To develop environmentally sound, sustainable agricultural and food industrial operations, it is necessary to integrate research that focuses on modern analytical techniques and latest sensory technology of measurement and evaluation of odor and pollution, together with a fundamental knowledge of factors that are the basic units contributing to the production of odor and pollutants. Without a clear understanding of what odor is, how to measure it, and where it originates, it will be difficult to control the odor. The present paper reviews the available information regarding odor emissions from agricultural operations and food industries by giving an overview about odor problems, odor detection and quantification, and identifying the sources and the mechanisms that contribute to the odor emissions. Finally, ways of reducing or controlling the odor problem are discussed.

  19. A Simple Method for Finding Explicit Analytic Transition Densities of Diffusion Processes with General Diploid Selection

    PubMed Central

    Song, Yun S.; Steinrücken, Matthias

    2012-01-01

    The transition density function of the Wright–Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright–Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation–selection balance. PMID:22209899

  20. Differential invariants in nonclassical models of hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bublik, Vasily V.

    2017-10-01

    In this paper, differential invariants are used to construct solutions for equations of the dynamics of a viscous heat-conducting gas and the dynamics of a viscous incompressible fluid modified by nanopowder inoculators. To describe the dynamics of a viscous heat-conducting gas, we use the complete system of Navier-Stokes equations with allowance for heat fluxes. The mathematical description of the dynamics of liquid metals under high-energy external influences (laser radiation or plasma flow) includes, in addition to the Navier-Stokes system for an incompressible viscous fluid, heat fluxes and processes of nonequilibrium crystallization of a deformable fluid. Differentially invariant solutions are a generalization of partially invariant solutions, and their active study for various models of continuum mechanics is just beginning. Differentially invariant solutions can also be considered as solutions with differential constraints; therefore, when developing them, the approaches and methods developed by the science schools of academicians N. N. Yanenko and A. F. Sidorov will be actively used. In the construction of partially invariant and differentially invariant solutions, overdetermined systems of differential equations arise that require a compatibility analysis. The algorithms for reducing such systems to involution in a finite number of steps are described by Cartan, Finikov, Kuranishi, and other authors. However, the difficult-to-foresee volume of intermediate calculations complicates their practical application. Therefore, the methods of computer algebra are actively used here, which largely helps in solving this difficult problem. It is proposed to use the constructed exact solutions as tests for formulas, algorithms and their software implementations when developing and creating numerical methods and computational program complexes. This combination of effective numerical methods, capable of solving a wide class of problems, with analytical methods makes it possible to make the results of mathematical modeling more accurate and reliable.

  1. Artificial evolution by viability rather than competition.

    PubMed

    Maesani, Andrea; Fernando, Pradeep Ruben; Floreano, Dario

    2014-01-01

Evolutionary algorithms are widespread heuristic methods inspired by natural evolution to solve difficult problems for which analytical approaches are not suitable. In many domains experimenters are not only interested in discovering optimal solutions, but also in finding the largest number of different solutions satisfying minimal requirements. However, the formulation of an effective performance measure describing these requirements, also known as fitness function, represents a major challenge. The difficulty of combining and weighting multiple problem objectives and constraints of possibly varying nature and scale into a single fitness function often leads to unsatisfactory solutions. Furthermore, selective reproduction of the fittest solutions, which is inspired by competition-based selection in nature, leads to loss of diversity within the evolving population and premature convergence of the algorithm, hindering the discovery of many different solutions. Here we present an alternative abstraction of artificial evolution, which does not require the formulation of a composite fitness function. Inspired by viability theory in dynamical systems, natural evolution and ethology, the proposed method puts emphasis on the elimination of individuals that do not meet a set of changing criteria, which are defined on the problem objectives and constraints. Experimental results show that the proposed method maintains higher diversity in the evolving population and generates more unique solutions when compared to classical competition-based evolutionary algorithms. Our findings suggest that incorporating viability principles into evolutionary algorithms can significantly improve the applicability and effectiveness of evolutionary methods to numerous complex problems of science and engineering, ranging from protein structure prediction to aircraft wing design.
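The selection scheme described above can be sketched in a few lines. The following is an illustrative toy (real-vector individuals, a single shrinking viability envelope, hypothetical parameter values), not the authors' algorithm:

```python
import random

def viability_evolution(pop_size=40, dim=2, generations=60, seed=0):
    """Toy sketch of viability-driven selection (illustrative only, not the
    paper's implementation). Individuals are real vectors; rather than ranking
    them by a composite fitness, a viability envelope |x_i| <= bound is
    tightened every generation, non-viable individuals are eliminated, and the
    population is refilled with mutated copies of survivors."""
    rng = random.Random(seed)
    population = [[rng.uniform(-10, 10) for _ in range(dim)]
                  for _ in range(pop_size)]
    bound = 10.5
    for gen in range(generations):
        bound = 10.0 * (1.0 - gen / generations) + 0.5  # shrinking envelope
        survivors = [x for x in population
                     if all(abs(v) <= bound for v in x)]
        if not survivors:            # relax rather than let the lineage die out
            survivors = population
        # refill from survivors; no scalar fitness is ever computed
        children = [[v + rng.gauss(0.0, 0.3) for v in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return population, bound
```

With these hypothetical settings the population should end up concentrated inside the final envelope, while many distinct viable solutions coexist because no single fitness ranking forces convergence to one optimum.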

  2. Review of analytical models to stream depletion induced by pumping: Guide to model selection

    NASA Astrophysics Data System (ADS)

    Huang, Ching-Sheng; Yang, Tao; Yeh, Hund-Der

    2018-06-01

Stream depletion due to groundwater extraction by wells may harm aquatic ecosystems in streams, provoke conflict over water rights, and contaminate water drawn from irrigation wells near polluted streams. A variety of studies have addressed the issue of stream depletion, but a fundamental framework for analytical modeling developed from the aquifer viewpoint has not yet been established. This review highlights the key differences among existing models of the stream depletion problem and provides guidelines for choosing a proper analytical model for the problem of concern. We introduce commonly used models composed of flow equations, boundary conditions, well representations, and stream treatments for confined, unconfined, and leaky aquifers. They are briefly evaluated and classified according to six categories: aquifer type, flow dimension, aquifer domain, stream representation, stream channel geometry, and well type. Finally, we recommend promising analytical approaches for solving realistic stream depletion problems involving aquifer heterogeneity and irregular stream channel geometry, and point out several stream depletion problems that remain unsolved.
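As a concrete taste of the kind of analytical model being reviewed, the classical Glover-Balmer (1954) solution gives the stream depletion fraction for an idealized homogeneous aquifer with a fully penetrating stream. The parameter values below are hypothetical and purely illustrative:

```python
from math import erfc, sqrt

def glover_balmer_depletion_fraction(d, S, T, t):
    """Classical Glover-Balmer (1954) solution: the fraction of the pumping
    rate supplied by stream depletion, for a fully penetrating stream and a
    homogeneous, isotropic aquifer (an idealized textbook case).

    d : distance from well to stream [m]
    S : storage coefficient [-]
    T : transmissivity [m^2/day]
    t : time since pumping began [days]
    """
    return erfc(d * sqrt(S / (4.0 * T * t)))

# Hypothetical values: depletion grows toward 1 as pumping continues
early = glover_balmer_depletion_fraction(d=100.0, S=0.2, T=500.0, t=0.1)
late = glover_balmer_depletion_fraction(d=100.0, S=0.2, T=500.0, t=100.0)
```

The limiting behavior (near zero at early time, approaching one at late time) is what distinguishes this family of solutions; the review's categories (aquifer type, stream representation, and so on) generalize exactly this kind of closed form.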

  3. The limited relevance of analytical ethics to the problems of bioethics.

    PubMed

    Holmes, R L

    1990-04-01

Philosophical ethics comprises metaethics, normative ethics, and applied ethics. These have characteristically received analytic treatment in twentieth-century Anglo-American philosophy, but there has been disagreement over their relationships to one another and over the relationship of analytical ethics to substantive morality--the making of moral judgments. I contend that the expertise philosophers have in either theoretical or applied ethics does not equip them to make sounder moral judgments on the problems of bioethics than nonphilosophers. One cannot "apply" theories like Kantianism or consequentialism to get solutions to practical moral problems unless one knows which theory is correct, and that is a metaethical question over which there is no consensus. On the other hand, to presume to be able to reach solutions through neutral analysis of problems is unavoidably to beg controversial theoretical issues in the process. Thus, while analytical ethics can play an important clarificatory role in bioethics, it can neither provide, nor substitute for, moral wisdom.

  4. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma creates a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which analytically describes the three-dimensional sound field radiated from a sphere with a missing octant. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to that of a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.

  5. Analytical progress in the theory of vesicles under linear flow

    NASA Astrophysics Data System (ADS)

    Farutin, Alexander; Biben, Thierry; Misbah, Chaouqi

    2010-06-01

Vesicles are becoming quite a popular model for the study of red blood cells. This is a free boundary problem which is rather difficult to handle theoretically, and quantitative computational approaches also constitute a challenge. In addition, with numerical studies it is not easy to scan the whole parameter space within a reasonable time. Quantitative analytical results are therefore an essential advance: they provide deeper understanding of observed features and can accompany and possibly guide further numerical development. In this paper, shape evolution equations for a vesicle in a shear flow are derived analytically to cubic precision (previous theories were quadratic) in the deformation of the vesicle relative to a spherical shape. The phase diagram distinguishing the regions of parameter space where the different types of motion (tank treading, tumbling, and vacillating breathing) are manifested is presented. This theory reveals an unsuspected feature: including higher-order terms and harmonics (even if they are not directly excited by the shear flow) is necessary, however close the shape is to a sphere. Not only does this theory cure a quite large quantitative discrepancy between previous theories and recent experiments and numerical studies, but it also reveals a new phenomenon: the vacillating-breathing (VB) mode band in parameter space, which was believed to saturate above a moderate shear rate, exhibits a striking widening beyond a critical shear rate. The widening results from excitation of the fourth-order harmonic. The obtained phase diagram is in remarkably good agreement with recent three-dimensional numerical simulations based on the boundary integral formulation, and a systematic comparison of our results with experiments is made.

  6. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating that no single optimal parameterization exists. Hence, many experts prefer a manual approach to calibration, where the inherently multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive, and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in parameter estimation: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selecting one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address these challenges. HAMS employs a three-stage framework for parameter estimation. Stage 1 uses an efficient surrogate multi-objective algorithm, GOMORS, to identify numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework serves as a decision support tool for choosing a single calibration from the numerous alternatives identified in Stage 1. Stage 2 provides a goodness-of-fit, metric-based interactive framework for identifying a small (typically fewer than 10), meaningful, and diverse subset of the calibration alternatives obtained in Stage 1. 
Stage 3 uses an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to calibrate the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of automatic and manual strategies for parameter estimation of computationally expensive watershed models.
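The winnowing from many calibration alternatives begins with the non-dominated (Pareto-optimal) set of objective vectors. A minimal Pareto filter, shown here only to illustrate the idea (this is not GOMORS or the HAMS tooling), looks like:

```python
def pareto_front(points):
    """Return the non-dominated subset of objective-vector tuples, assuming
    all objectives are minimized (e.g., error metrics on flow). A point p is
    dominated if some other point q is <= p in every objective and differs
    from p (hence strictly better in at least one). Illustrative O(n^2)
    sketch, fine for the modest alternative counts discussed above."""
    return [p for p in points
            if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                       for q in points)]

# Hypothetical two-metric calibration errors (lower is better for both)
alternatives = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.5, 2.5)]
front = pareto_front(alternatives)
```

In this toy example, (3.0, 3.0) and (2.5, 2.5) are dominated by (2.0, 2.0) and drop out; interactive analytics of the kind described would then operate on the surviving trade-off set.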

  7. Reaction Workup Planning: A Structured Flowchart Approach, Exemplified in Difficult Aqueous Workup of Hydrophilic Products

    ERIC Educational Resources Information Center

    Hill, George B.; Sweeney, Joseph B.

    2015-01-01

Reaction workup can be a complex problem for those facing the novel synthesis of difficult compounds for the first time. Systematic thinking about workup problem solving should be inculcated by the mid-graduate level. A structured approach is proposed, building decision-tree flowcharts to analyze challenges, and an exemplar flowchart is presented…

  8. Distraction during learning with hypermedia: difficult tasks help to keep task goals on track

    PubMed Central

    Scheiter, Katharina; Gerjets, Peter; Heise, Elke

    2014-01-01

    In educational hypermedia environments, students are often confronted with potential sources of distraction arising from additional information that, albeit interesting, is unrelated to their current task goal. The paper investigates the conditions under which distraction occurs and hampers performance. Based on theories of volitional action control it was hypothesized that interesting information, especially if related to a pending goal, would interfere with task performance only when working on easy, but not on difficult tasks. In Experiment 1, 66 students learned about probability theory using worked examples and solved corresponding test problems, whose task difficulty was manipulated. As a second factor, the presence of interesting information unrelated to the primary task was varied. Results showed that students solved more easy than difficult probability problems correctly. However, the presence of interesting, but task-irrelevant information did not interfere with performance. In Experiment 2, 68 students again engaged in example-based learning and problem solving in the presence of task-irrelevant information. Problem-solving difficulty was varied as a first factor. Additionally, the presence of a pending goal related to the task-irrelevant information was manipulated. As expected, problem-solving performance declined when a pending goal was present during working on easy problems, whereas no interference was observed for difficult problems. Moreover, the presence of a pending goal reduced the time on task-relevant information and increased the time on task-irrelevant information while working on easy tasks. However, as revealed by mediation analyses these changes in overt information processing behavior did not explain the decline in problem-solving performance. As an alternative explanation it is suggested that goal conflicts resulting from pending goals claim cognitive resources, which are then no longer available for learning and problem solving. 
PMID:24723907

  9. Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Castellano, Timothy

    1991-01-01

The nonlinear behavior of many practical systems and the unavailability of quantitative data regarding the input-output relations make the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers, which do not require analytical models, have demonstrated a number of successful applications, such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy logic control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set: crisp sets allow only full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.
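Partial membership, mentioned above, is easiest to see with a triangular membership function, a standard textbook form (used here only to illustrate the concept, not tied to any controller in the abstract):

```python
def triangular_membership(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set: 0 outside the
    support [a, c], rising linearly to 1 at the peak b, then falling back
    to 0. A crisp set would return only 0 or 1; a fuzzy set returns any
    value in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

For instance, with a fuzzy set "warm" spanning 0 to 10 and peaking at 5, the value 2.5 belongs to the set with degree 0.5, capturing the graded, linguistic character of fuzzy control rules.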

  10. The expected results method for data verification

    NASA Astrophysics Data System (ADS)

    Monday, Paul

    2016-05-01

The credibility of United States Army analytical experiments using distributed simulation depends on the quality of the simulation, the pedigree of the input data, and the appropriateness of the simulation system to the problem. The second of these factors is best met by using classified performance data from the Army Materiel Systems Analysis Activity (AMSAA) for essential battlefield behaviors, like sensors, weapon fire, and damage assessment. Until recently, using classified data has been a time-consuming and expensive endeavor: it requires significant technical expertise to load, and it is difficult to verify that it works correctly. Fortunately, new capabilities, tools, and processes are available that greatly reduce these costs. This paper will discuss these developments, a new method to verify that all of the components are configured and operate properly, and the application to recent Army Capabilities Integration Center (ARCIC) experiments. Recent developments have focused on improving the process of loading the data. OneSAF has redesigned its input data file formats and structures so that they correspond exactly with the Standard File Format (SFF) defined by AMSAA, ARCIC has developed a library of supporting configurations that correlate directly to the AMSAA nomenclature, and the Entity Validation Tool was designed to quickly execute the essential models with a test-jig approach to identify problems with the loaded data. The missing part of the process is provided by the new Expected Results Method. Instead of the usual subjective assessment of quality, e.g., "It looks about right to me", this new approach compares the performance of a combat model with authoritative expectations to quickly verify that the model, data, and simulation are all working correctly. 
Integrated together, these developments now make it possible to use AMSAA classified performance data with minimal time and maximum assurance that the experiment's analytical results will be of the highest quality possible.

  11. Assessment regarding the use of the computer aided analytical models in the calculus of the general strength of a ship hull

    NASA Astrophysics Data System (ADS)

    Hreniuc, V.; Hreniuc, A.; Pescaru, A.

    2017-08-01

Solving a general strength problem of a ship hull may be done using analytical approaches, which are useful for deducing the buoyancy force distribution and the weight force distribution along the hull, as well as the geometrical characteristics of the sections. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of calculation; it is therefore interesting how a computer may be used to solve them. Using computer programming, an engineer may build software instruments based on analytical approaches. However, before developing the computer code, the research topic must be thoroughly analysed, thereby reaching a meta-level of understanding of the problem. The following stage is to conceive an appropriate development strategy for the original software instruments useful for the rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates on 'simple' geometrical shapes. By 'simple' we mean shapes for which direct calculation formulas exist. The set of 'simple' shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support to solve general strength ship hull problems using analytical methods.
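One 'simple' shape primitive of the kind described, a polygon with a direct calculation formula for its section properties, can be sketched with the shoelace formula. This is an illustrative sketch of the idea, not the authors' implementation:

```python
def polygon_area_centroid(pts):
    """Area and centroid of a simple (non-self-intersecting) polygon given
    as a list of (x, y) vertices, via the shoelace formula. This is the kind
    of 'simple' primitive an analytical hull-strength code might combine
    with a Boolean algebra of shapes to get section characteristics."""
    area = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]          # wrap around to close the polygon
        cross = x0 * y1 - x1 * y0
        area += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    area *= 0.5
    return area, cx / (6.0 * area), cy / (6.0 * area)
```

Composite sections would then be handled by summing (or subtracting, for cut-outs) the contributions of such primitives, which is exactly the Boolean-algebra-of-shapes idea in the abstract.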

  12. Science teacher's perception about science learning experiences as a foundation for teacher training program

    NASA Astrophysics Data System (ADS)

    Tapilouw, Marisa Christina; Firman, Harry; Redjeki, Sri; Chandra, Didi Teguh

    2017-05-01

Teacher training is one form of continuous professional development. Before organizing teacher training (material, time frame), a survey of teachers' needs has to be done. Science teachers' perceptions about science learning in the classroom, the most difficult learning model, and difficulties with lesson plans would be good input for a teacher training program. This survey was conducted in June 2016, with 23 science teachers filling in the questionnaire. The core questions concerned training participation, the most difficult science subject matter, the most difficult learning model, difficulties in making lesson plans, and knowledge of integrated science and problem-based learning. Mostly, experienced teachers participated in training once a year. Science training is very important for enhancing professional competency and improving the way of teaching. Which subject matter is difficult depends on the teacher's educational background. The physics subject matter in classes VIII and IX is difficult to teach for most respondents because of its many formulas and abstract character. Respondents found difficulties in making lesson plans, particularly in choosing the right learning model for some subject matter. Based on the results, inquiry, cooperative, and practice models are frequently used in science classes. Integrated science is understood as a mix of biology, physics, and chemistry concepts. On the other hand, respondents argued that problem-based learning was difficult, especially in finding contextual problems. All the questionnaire results can be used as input for a teacher training program in order to enhance teachers' competency: difficult concepts, integrated science, teaching plans, and problem-based learning can be shared in teacher training.

  13. Analytical study of sandwich structures using Euler-Bernoulli beam equation

    NASA Astrophysics Data System (ADS)

    Xue, Hui; Khawaja, H.

    2017-01-01

This paper presents an analytical study of sandwich structures. In this study, the Euler-Bernoulli beam equation is solved analytically for a four-point bending problem. Appropriate initial and boundary conditions are specified to close the problem. In addition, the balance coefficient is calculated and the Rule of Mixtures is applied. The focus of this study is to determine the effective material properties and geometric features, such as the moment of inertia, of a sandwich beam. The effective parameters help in the development of a generic analytical correlation for complex sandwich structures from the perspective of four-point bending calculations. The main outcomes of these analytical calculations are the lateral displacements and longitudinal stresses for each particular material in the sandwich structure.
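The two ingredients described, an effective flexural rigidity for the sandwich cross-section and the Euler-Bernoulli deflection under four-point bending, can be sketched with standard textbook formulas. The dimensions below are hypothetical, and this is not the paper's exact correlation:

```python
def sandwich_flexural_rigidity(E_f, t_f, E_c, t_c, b):
    """Effective flexural rigidity (EI)_eff of a symmetric sandwich beam
    (two identical face sheets of modulus E_f and thickness t_f on a core
    of modulus E_c and thickness t_c, width b), via the parallel-axis
    theorem -- classical sandwich beam theory."""
    d = t_c + t_f                              # distance between face centroids
    I_faces = b * t_f**3 / 6.0 + b * t_f * d**2 / 2.0
    I_core = b * t_c**3 / 12.0
    return E_f * I_faces + E_c * I_core

def four_point_midspan_deflection(F, a, L, EI):
    """Euler-Bernoulli midspan deflection for four-point bending: two equal
    loads F applied at distance a from each support, span L."""
    return F * a * (3.0 * L**2 - 4.0 * a**2) / (24.0 * EI)
```

A quick consistency check: letting a = L/2 merges the two loads at midspan, and the formula collapses to the familiar central-load result 2F*L^3/(48*EI).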

  14. Approximate analytical description of the elastic strain field due to an inclusion in a continuous medium with cubic anisotropy

    NASA Astrophysics Data System (ADS)

    Nenashev, A. V.; Koshkarev, A. A.; Dvurechenskii, A. V.

    2018-03-01

    We suggest an approach to the analytical calculation of the strain distribution due to an inclusion in elastically anisotropic media for the case of cubic anisotropy. The idea consists in the approximate reduction of the anisotropic problem to a (simpler) isotropic problem. This gives, for typical semiconductors, an improvement in accuracy by an order of magnitude, compared to the isotropic approximation. Our method allows using, in the case of elastically anisotropic media, analytical solutions obtained for isotropic media only, such as analytical formulas for the strain due to polyhedral inclusions. The present work substantially extends the applicability of analytical results, making them more suitable for describing real systems, such as epitaxial quantum dots.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Heinrich, R.R.; Jensen, K.J.

The Analytical Chemistry Laboratory is a full-cost-recovery service center, with the primary mission of providing a broad range of technical support services to the scientific and engineering programs at ANL. In addition, ACL conducts a research program in analytical chemistry, works on instrumental and methods development, and provides analytical services for governmental, educational, and industrial organizations. The ACL handles a wide range of analytical problems, from routine standard analyses to unique problems that require significant development of methods and techniques. The purpose of this report is to summarize the technical and administrative activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year 1985 (October 1984 through September 1985). This is the second annual report for the ACL. 4 figs., 1 tab.

  16. Analytical Chemistry Laboratory. Progress report for FY 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1996. This annual report is the thirteenth for the ACL. It describes effort on continuing and new projects and contributions of the ACL staff to various programs at ANL. The ACL operates in the ANL system as a full-cost-recovery service center, but has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients -- Argonne National Laboratory, the Department of Energy, and others -- and will conduct world-class research and development in analytical chemistry and its applications. Because of the diversity of research and development work at ANL, the ACL handles a wide range of analytical chemistry problems. Some routine or standard analyses are done, but the ACL usually works with commercial laboratories if our clients require high-volume, production-type analyses. It is common for ANL programs to generate unique problems that require significant development of methods and adaptation of techniques to obtain useful analytical data. Thus, much of the support work done by the ACL is very similar to our applied analytical chemistry research.

  17. AN ACCURATE AND EFFICIENT ALGORITHM FOR NUMERICAL SIMULATION OF CONDUCTION-TYPE PROBLEMS. (R824801)

    EPA Science Inventory

    Abstract

    A modification of the finite analytic numerical method for conduction-type (diffusion) problems is presented. The finite analytic discretization scheme is derived by means of the Fourier series expansion for the most general case of nonuniform grid and variabl...

  18. Using the Competent Small Group Communicator Instrument to Assess Group Performance in the Classroom.

    ERIC Educational Resources Information Center

    Albert, Lawrence S.

    If being a competent small group problem solver is difficult, it is even more difficult to impart those competencies to others. Unlike athletic coaches who are near their players during the real game, teachers of small group communication are not typically present for on-the-spot coaching when their students are doing their problem solving. That…

  19. Examining problem solving in physics-intensive Ph.D. research

    NASA Astrophysics Data System (ADS)

    Leak, Anne E.; Rothwell, Susan L.; Olivera, Javier; Zwickl, Benjamin; Vosburg, Jarrett; Martin, Kelly Norris

    2017-12-01

    Problem-solving strategies learned by physics undergraduates should prepare them for real-world contexts as they transition from students to professionals. Yet, graduate students in physics-intensive research face problems that go beyond problem sets they experienced as undergraduates and are solved by different strategies than are typically learned in undergraduate coursework. This paper expands the notion of problem solving by characterizing the breadth of problems and problem-solving processes carried out by graduate students in physics-intensive research. We conducted semi-structured interviews with ten graduate students to determine the routine, difficult, and important problems they engage in and problem-solving strategies they found useful in their research. A qualitative typological analysis resulted in the creation of a three-dimensional framework: context, activity, and feature (that made the problem challenging). Problem contexts extended beyond theory and mathematics to include interactions with lab equipment, data, software, and people. Important and difficult contexts blended social and technical skills. Routine problem activities were typically well defined (e.g., troubleshooting), while difficult and important ones were more open ended and had multiple solution paths (e.g., evaluating options). In addition to broadening our understanding of problems faced by graduate students, our findings explore problem-solving strategies (e.g., breaking down problems, evaluating options, using test cases or approximations) and characteristics of successful problem solvers (e.g., initiative, persistence, and motivation). Our research provides evidence of the influence that problems students are exposed to have on the strategies they use and learn. Using this evidence, we have developed a preliminary framework for exploring problems from the solver's perspective. This framework will be examined and refined in future work. 
Understanding problems graduate students face and the strategies they use has implications for improving how we approach problem solving in undergraduate physics and physics education research.

  20. Exact analytical solution of a classical Josephson tunnel junction problem

    NASA Astrophysics Data System (ADS)

    Kuplevakhsky, S. V.; Glukhov, A. M.

    2010-10-01

We give an exact and complete analytical solution of the classical problem of a Josephson tunnel junction of arbitrary length W ∈ (0, ∞) in the presence of external magnetic fields and transport currents. Contrary to a widespread belief, the exact analytical solution unambiguously proves that there is no qualitative difference between so-called "small" (W≪1) and "large" (W≫1) junctions. Another unexpected physical implication of the exact analytical solution is the existence (in the current-carrying state) of unquantized Josephson vortices carrying fractional flux and located near one of the edges of the junction. We also refine the mathematical definition of the critical transport current.

  1. Construction Method of Analytical Solutions to the Mathematical Physics Boundary Problems for Non-Canonical Domains

    NASA Astrophysics Data System (ADS)

    Mobarakeh, Pouyan Shakeri; Grinchenko, Victor T.

    2015-06-01

The majority of practical acoustics problems require solving boundary problems in non-canonical domains. The construction of analytical solutions of mathematical physics boundary problems for non-canonical domains is therefore both valuable from the academic viewpoint and very instrumental for elaborating efficient algorithms for quantitative estimation of the field characteristics under study. One of the main solution ideologies for such problems is based on the superposition method, which allows one to analyze a wide class of specific problems whose domains can be constructed as unions of canonically shaped subdomains. It is also assumed that an analytical solution (or quasi-solution) can be constructed for each subdomain in one form or another. However, this approach entails some difficulties in the construction of calculation algorithms, insofar as the boundary conditions are incompletely defined on the intervals where the functions appearing in the general solution are orthogonal to each other. We discuss several typical examples of problems with such difficulties, study their nature, and identify the optimal methods to overcome them.

  2. Hill Problem Analytical Theory to the Order Four. Application to the Computation of Frozen Orbits around Planetary Satellites

    NASA Technical Reports Server (NTRS)

    Lara, Martin; Palacian, Jesus F.

    2007-01-01

Frozen orbits of the Hill problem are determined in the doubly averaged problem, where short- and long-period terms are removed by means of Lie transforms. The computation of initial conditions of the corresponding quasi-periodic solutions in the non-averaged problem is straightforward, because the perturbation method used provides the explicit equations of the transformation that connects the averaged and non-averaged models. A fourth-order analytical theory proves necessary for the accurate computation of the quasi-periodic, frozen orbits.

  3. An analytic performance model of disk arrays and its application

    NASA Technical Reports Server (NTRS)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
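
    The abstract does not reproduce the paper's equations, but the flavor of such an analytic model can be sketched. The toy model below (an assumption, not the authors' derivation) treats each of N disks as an M/M/1 queue and approximates the fork-join synchronization of a striped request by the expected maximum of N i.i.d. exponential service times, which equals H_N/mu for the N-th harmonic number H_N.

```python
# Hedged sketch of an analytic disk-array model in the spirit described
# above (NOT the paper's actual equations): each of N disks is an M/M/1
# queue, and the fork-join wait for a striped request is approximated by
# the mean maximum of N i.i.d. exponential service times, H_N / mu.

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def disk_array_response_time(arrival_rate, service_rate, n_disks):
    """Approximate mean response time of a striped request (seconds)."""
    rho = arrival_rate / service_rate          # per-disk utilization
    if rho >= 1.0:
        raise ValueError("unstable: utilization >= 1")
    wait = rho / (service_rate * (1.0 - rho))  # M/M/1 queueing delay
    # The request completes only when the slowest of the N sub-requests
    # finishes; approximate that by the mean max of N exponentials.
    service = harmonic(n_disks) / service_rate
    return wait + service

# Example: 100 req/s arrivals, 200 req/s per-disk service, 8-disk stripe.
t = disk_array_response_time(100.0, 200.0, 8)
```

    The fork-join term grows like ln N, which captures the qualitative cost of synchronization the abstract highlights.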

  4. Analytical investigation of a three-dimensional FRP-retrofitted reinforced concrete structure's behaviour under earthquake load effect in ANSYS program

    NASA Astrophysics Data System (ADS)

    Altun, F.; Birdal, F.

    2012-12-01

    In this study, a 1:3-scale, three-storey, FRP (Fiber Reinforced Polymer) retrofitted reinforced concrete model structure, whose behaviour and crack development had been identified experimentally in the laboratory, was investigated analytically. Determining structural behaviour under earthquake load experimentally is only possible at a specific scale in a laboratory environment, because structural experiments involve a large number of parameters and require an expensive laboratory setup. In the analytical study, the structure was modelled with the ANSYS Finite Element Package Program (2007), and its behaviour and crack development were obtained. When the experimental difficulties are taken into consideration, analytical investigation of structural behaviour is more economical and much faster. At the end of the study, the experimental results for structural behaviour and crack development were compared with the analytical data. It was concluded that, for a model structure retrofitted with FRP, the behaviour and cracking pattern can be determined without testing, by identifying the reasons why the analytical results do not converge to the experimental data at certain points. The study thus enables a better analytical understanding of structural behaviour.

  5. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable hybrid evolutionary-cum-local-search algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to the 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche for evolutionary algorithms in solving such difficult problems of practical importance, compared with their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming, and we hope it will motivate EMO and other researchers to pay more attention to this important and difficult problem-solving activity.
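
    The nested structure the abstract contrasts against can be made concrete with a deliberately tiny example (my own illustration, not the paper's algorithm): for every candidate upper-level decision x, the lower-level problem is solved exactly, and only then is the upper level optimized.

```python
# Minimal illustration of the "computationally expensive nested
# procedure" for a bilevel problem (single-objective, one variable per
# level, brute-force grids -- a sketch, not the paper's method).
# Upper level:  minimize F(x, y) = x^2 + (y - 2)^2
# Lower level:  y(x) = argmin_y f(x, y) = (y - x)^2   ->  y(x) = x

def lower_level(x, grid):
    # inner optimization solved to optimality for each upper-level x
    return min(grid, key=lambda y: (y - x) ** 2)

def solve_bilevel(grid):
    best = None
    for x in grid:
        y = lower_level(x, grid)       # nested inner solve per x
        F = x ** 2 + (y - 2) ** 2      # upper-level objective
        if best is None or F < best[2]:
            best = (x, y, F)
    return best

grid = [i / 100.0 for i in range(-300, 301)]  # step 0.01 on [-3, 3]
x_opt, y_opt, F_opt = solve_bilevel(grid)     # optimum at x = y = 1
```

    Even in one dimension the cost is the product of the two grid sizes, which is why replacing the inner solve with an evolutionary surrogate pays off.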

  6. Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research

    ERIC Educational Resources Information Center

    He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne

    2018-01-01

    In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…

  7. Intimacy Is a Transdiagnostic Problem for Cognitive Behavior Therapy: Functional Analytical Psychotherapy Is a Solution

    ERIC Educational Resources Information Center

    Wetterneck, Chad T.; Hart, John M.

    2012-01-01

    Problems with intimacy and interpersonal issues are exhibited across most psychiatric disorders. However, most of the targets in Cognitive Behavioral Therapy are primarily intrapersonal in nature, with few directly involved in interpersonal functioning and effective intimacy. Functional Analytic Psychotherapy (FAP) provides a behavioral basis for…

  8. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.
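
    The canonical recursive query such Datalog systems must support is transitive closure. The sketch below (plain Python, no Spark; my own illustration rather than BigDatalog's implementation) shows a semi-naive evaluation, the optimization in which each round joins only the facts derived in the previous round.

```python
# Transitive closure in Datalog:
#   tc(X, Y) <- edge(X, Y).
#   tc(X, Y) <- tc(X, Z), edge(Z, Y).
# Semi-naive evaluation: join only the delta (new facts) each round.

def transitive_closure(edges):
    tc = set(edges)
    delta = set(edges)                  # facts new in the last round
    while delta:
        new = {(x, w) for (x, y) in delta
                      for (z, w) in edges if y == z} - tc
        tc |= new
        delta = new                     # next round joins only these
    return tc

edges = {(1, 2), (2, 3), (3, 4)}
result = transitive_closure(edges)
# reachability pairs: the three edges plus (1,3), (2,4), (1,4)
```

    Mapping the delta-join onto Spark's distributed datasets is, in essence, the recursion-support problem the paper addresses.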

  9. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial friction, analytical solutions had previously been obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for code assessment, benchmarking, and numerical verification. In our previous study, the Ransom solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when the results were compared to the existing Ransom analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the effect of the gas phase density on the pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the second-order spatial discretization scheme. In addition, extended Ransom transient solutions for the gas phase velocity and pressure are derived under the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom solutions are also presented.
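
    For reference, Ransom's classical transient solution for the liquid phase can be written down directly under the simplifications noted above. The code below assumes the standard benchmark settings (inlet velocity 10 m/s, inlet liquid fraction 0.8, g = 9.81 m/s^2); treat the parameter values as assumptions rather than the paper's exact configuration.

```python
import math

# Ransom water faucet, transient liquid-phase solution (massless gas,
# no wall or interfacial friction). x is measured downward from the
# inlet; standard benchmark parameters are assumed.

V0, ALPHA_L0, G = 10.0, 0.8, 9.81

def liquid_velocity(x, t):
    # Fluid below the advancing front is in free fall from the inlet;
    # fluid ahead of the front carries the uniformly accelerated
    # initial column.
    front = V0 * t + 0.5 * G * t * t
    if x <= front:
        return math.sqrt(V0 ** 2 + 2.0 * G * x)
    return V0 + G * t

def liquid_fraction(x, t):
    # Mass conservation thins the accelerated column: alpha * v = const.
    front = V0 * t + 0.5 * G * t * t
    if x <= front:
        return ALPHA_L0 * V0 / liquid_velocity(x, t)
    return ALPHA_L0

v = liquid_velocity(3.0, 0.4)   # a point below the front at t = 0.4 s
a = liquid_fraction(3.0, 0.4)
```

    At the steady state the front has swept the whole pipe and the first branch applies everywhere, which is the profile the mesh convergence study compares against.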

  11. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.

    2008-01-01

    Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient is a key quantity for imposing the boundary condition in acoustic scattering problems. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation and has a form in which the observer-time differentiation remains outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This avoids numerical differentiation with respect to observer time and is therefore computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and exact solutions is excellent. The formulations are applied to rotor noise problems for two model rotors. A purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.

  12. Numerical Polynomial Homotopy Continuation Method and String Vacua

    DOE PAGES

    Mehta, Dhagash

    2011-01-01

    Finding vacua for the four-dimensional effective theories for supergravity which descend from flux compactifications, and analyzing them according to their stability, is one of the central problems in string phenomenology. Except for some simple toy models, it is, however, difficult to find all the vacua analytically. Recently developed algorithmic methods based on symbolic computer algebra can be of great help in the more realistic models. However, they suffer from serious algorithmic complexities and are limited to small system sizes. In this paper, we review a numerical method called the numerical polynomial homotopy continuation (NPHC) method, first used in the area of lattice field theories, which by construction finds all of the vacua of a given potential that is known to have only isolated solutions. The NPHC method is known to suffer from no major algorithmic complexities and is embarrassingly parallelizable, and hence its applicability goes far beyond the existing symbolic methods. We first solve a simple toy model as a warm-up example to demonstrate the NPHC method at work. We then show that all the vacua of a more complicated model of a compactified M theory, which has an SU(3) structure, can be obtained by using a desktop machine in just about an hour, a feat which was reported to be prohibitively difficult by the existing symbolic methods. Finally, we compare the various technicalities between the two methods.
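
    The full NPHC machinery is well beyond a short sketch, but its defining goal, finding *all* isolated stationary points of a potential, can be illustrated on a one-variable toy model. The bracketing-and-bisection root finder below is my own stand-in for the homotopy path tracking, not the NPHC algorithm itself.

```python
# Toy "find all vacua" example: for V(x) = x^4/4 - x^2/2 the vacua
# solve V'(x) = x^3 - x = 0. Every real root is located by a
# sign-change scan plus bisection (a stand-in for homotopy tracking).

def dV(x):
    return x ** 3 - x

def all_real_roots(f, lo=-5.0, hi=5.0, steps=1000, tol=1e-12):
    roots = []
    h = (hi - lo) / steps
    for i in range(steps):
        a, b = lo + i * h, lo + (i + 1) * h
        fa, fb = f(a), f(b)
        if fa == 0.0:
            roots.append(a)              # root exactly on a grid point
        elif fa * fb < 0.0:              # bracketed root: bisect
            while b - a > tol:
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

vacua = all_real_roots(dV)   # the three stationary points of V
```

    The point of NPHC is that this exhaustive guarantee extends to multivariate polynomial systems, where naive scanning is hopeless.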

  13. DROMO formulation for planar motions: solution to the Tsien problem

    NASA Astrophysics Data System (ADS)

    Urrutxua, Hodei; Morante, David; Sanjurjo-Rivo, Manuel; Peláez, Jesús

    2015-06-01

    The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo-Stiefel methods.

  14. Exact solution for an optimal impermeable parachute problem

    NASA Astrophysics Data System (ADS)

    Lupu, Mircea; Scheiber, Ernest

    2002-10-01

    In this paper, direct and inverse boundary problems are solved and analytical solutions are obtained for optimization problems involving certain nonlinear integral operators. We model the plane potential flow of an inviscid, incompressible, unbounded fluid jet that encounters a symmetrical curvilinear obstacle: the deflector of maximal drag. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. The optimization problem is then solved analytically. The optimal airfoil is designed, and finally numerical computations of the drag coefficient and other geometrical and aerodynamic parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.

  15. TEMPERAMENT, FAMILY ENVIRONMENT, AND BEHAVIOR PROBLEMS IN CHILDREN WITH NEW-ONSET SEIZURES

    PubMed Central

    Baum, Katherine T.; Byars, Anna W.; deGrauw, Ton J.; Johnson, Cynthia S.; Perkins, Susan M.; Dunn, David W.; Bates, John E.; Austin, Joan K.

    2007-01-01

    Children with epilepsy, even those with new-onset seizures, exhibit relatively high rates of behavior problems. The purpose of this study was to explore the relationships among early temperament, family adaptive resources, and behavior problems in children with new-onset seizures. Our major goal was to test whether family adaptive resources moderated the relationship between early temperament dimensions and current behavior problems in 287 children with new-onset seizures. Two of the three temperament dimensions (difficultness and resistance to control) were positively correlated with total, internalizing, and externalizing behavior problems (all p < 0.0001). The third temperament dimension, unadaptability, was positively correlated with total and internalizing problems (p < 0.01). Family adaptive resources moderated the relationships between temperament and internalizing and externalizing behavior problems at school. Children with a difficult early temperament who live in a family environment with low family mastery are at the greatest risk for behavior problems. PMID:17267291

  16. An analytically iterative method for solving problems of cosmic-ray modulation

    NASA Astrophysics Data System (ADS)

    Kolesnyk, Yuriy L.; Bobik, Pavol; Shakhov, Boris A.; Putis, Marian

    2017-09-01

    The development of an analytically iterative method for solving steady-state as well as unsteady-state problems of cosmic-ray (CR) modulation is proposed. Iterations for obtaining the solutions are constructed for the spherically symmetric form of the CR propagation equation. The main solution of the considered problem consists of the zero-order solution, obtained in the initial iteration, and amendments that may be obtained by subsequent iterations. The zero-order solution is found by assuming CR isotropy during propagation in space, whereas the anisotropy is taken into account when finding the subsequent amendments. To begin with, the method is applied to the problem of CR modulation in which the diffusion coefficient κ and the solar wind speed u are constant, with a given local interstellar spectrum (LIS). The solution obtained with two iterations was compared with an analytical solution and with numerical solutions. Finally, solutions involving only one iteration were obtained for two problems of CR modulation with u = constant and the same form of LIS, and were tested against numerical solutions. For the first problem, κ is proportional to the momentum of the particle p, so it has the form κ = k0η, where η = p/(m_0 c). For the second problem, the diffusion coefficient is given in the form κ = k0βη, where β = v/c is the particle speed relative to the speed of light. The obtained solutions matched the numerical solutions well, as well as the analytical solution for the problem where κ = constant.

  17. The NIH analytical methods and reference materials program for dietary supplements.

    PubMed

    Betz, Joseph M; Fisher, Kenneth D; Saldanha, Leila G; Coates, Paul M

    2007-09-01

    Quality of botanical products is a great uncertainty that consumers, clinicians, regulators, and researchers face. Definitions of quality abound, and include specifications for sanitation, adventitious agents (pesticides, metals, weeds), and content of natural chemicals. Because dietary supplements (DS) are often complex mixtures, they pose analytical challenges and method validation may be difficult. In response to product quality concerns and the need for validated and publicly available methods for DS analysis, the US Congress directed the Office of Dietary Supplements (ODS) at the National Institutes of Health (NIH) to accelerate an ongoing methods validation process, and the Dietary Supplements Methods and Reference Materials Program was created. The program was constructed from stakeholder input and incorporates several federal procurement and granting mechanisms in a coordinated and interlocking framework. The framework facilitates validation of analytical methods, analytical standards, and reference materials.

  18. Numerical and analytical approaches to an advection-diffusion problem at small Reynolds number and large Péclet number

    NASA Astrophysics Data System (ADS)

    Fuller, Nathaniel J.; Licata, Nicholas A.

    2018-05-01

    Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.

  19. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have shown experimentally that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order-statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners that are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order-statistics-based non-linear combiners, we derive expressions that indicate how much the median, the maximum, and in general the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public-domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
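
    The factor-of-N claim for unbiased, uncorrelated classifiers is easy to check numerically. The simulation below (my own sanity check, not the chapter's experiments) models each classifier's boundary estimate as the true location plus independent Gaussian noise and measures the variance of the averaged estimate.

```python
import random

# Numerical check of the averaging result: N unbiased, uncorrelated
# boundary estimates, each true location + N(0, sigma^2) noise.
# Averaging should cut the boundary variance by a factor of N.

def boundary_variance(n_classifiers, n_trials=20000, sigma=1.0, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        estimates = [rng.gauss(0.0, sigma) for _ in range(n_classifiers)]
        avg = sum(estimates) / n_classifiers   # combined boundary offset
        total += avg * avg
    return total / n_trials                    # variance around truth (0)

v1 = boundary_variance(1)    # ~ sigma^2
v8 = boundary_variance(8)    # ~ sigma^2 / 8
```

    Since the added error is proportional to this variance, the simulated ratio v1/v8 of about 8 mirrors the analytical reduction factor.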

  20. Compensation of matrix effects in gas chromatography-mass spectrometry analysis of pesticides using a combination of matrix matching and multiple isotopically labeled internal standards.

    PubMed

    Tsuchiyama, Tomoyuki; Katsuhara, Miki; Nakajima, Masahiro

    2017-11-17

    In the multi-residue analysis of pesticides using GC-MS, the quantitative results are adversely affected by a phenomenon known as the matrix effect. Although the use of matrix-matched standards is considered to be one of the most practical solutions to this problem, complete removal of the matrix effect is difficult in complex food matrices owing to their inconsistency. As a result, residual matrix effects can introduce analytical errors. To compensate for residual matrix effects, we have developed a novel method that employs multiple isotopically labeled internal standards (ILIS). The matrix effects of ILIS and pesticides were evaluated in spiked matrix extracts of various agricultural commodities, and the obtained data were subjected to simple statistical analysis. Based on the similarities between the patterns of variation in the analytical response, a total of 32 isotopically labeled compounds were assigned to 338 pesticides as internal standards. It was found that by utilizing multiple ILIS, residual matrix effects could be effectively compensated. The developed method exhibited superior quantitative performance compared with the common single-internal-standard method. The proposed method is more feasible for regulatory purposes than that using only predetermined correction factors and is considered to be promising for practical applications. Copyright © 2017 Elsevier B.V. All rights reserved.
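
    The "simple statistical analysis" of response-variation patterns can be sketched as a similarity-based assignment. The code below is a hedged illustration of the idea, not the authors' exact procedure: every compound carries a vector of matrix-effect values measured across spiked commodity extracts, and each pesticide is paired with the ILIS whose pattern correlates best. All names and numbers are fabricated for the example.

```python
# Assign each pesticide to the isotopically labeled internal standard
# (ILIS) whose matrix-effect pattern across spiked matrices is most
# similar (highest Pearson correlation). Illustrative sketch only.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def assign_ilis(pesticides, standards):
    """Both arguments: dict name -> matrix-effect vector."""
    return {p: max(standards, key=lambda s: pearson(v, standards[s]))
            for p, v in pesticides.items()}

# Fabricated matrix-effect vectors over four commodity extracts:
pesticides = {"pest_A": [1.10, 1.30, 0.90, 1.20],
              "pest_B": [0.95, 0.80, 1.05, 0.85]}
standards  = {"ILIS_1": [1.12, 1.28, 0.93, 1.18],
              "ILIS_2": [0.90, 0.82, 1.08, 0.88]}
mapping = assign_ilis(pesticides, standards)
```

    Quantifying against the best-matched ILIS then cancels whatever residual matrix effect both compounds experience in a given extract.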

  1. Two-dimensional wavelet transform feature extraction for porous silicon chemical sensors.

    PubMed

    Murguía, José S; Vergara, Alexander; Vargas-Olmos, Cecilia; Wong, Travis J; Fonollosa, Jordi; Huerta, Ramón

    2013-06-27

    Designing reliable, fast-responding, highly sensitive, and low-power-consuming chemo-sensory systems has long been a major goal in chemo-sensing. This goal, however, presents a difficult challenge, because chemo-sensory detectors exhibiting all of these ideal characteristics remain largely unrealizable to date. This paper presents a unique perspective on capturing more in-depth insight into the physicochemical interactions of two distinct, selectively chemically modified porous silicon (pSi) film-based optical gas sensors by implementing an innovative signal-processing methodology, namely the two-dimensional discrete wavelet transform. Specifically, the method uses the two-dimensional discrete wavelet transform as a feature extraction method to capture the non-stationary behavior of the two-dimensional pSi rugate sensor response. Utilizing a comprehensive set of measurements collected from each of the aforementioned optically based chemical sensors, we evaluate the significance of our approach on a complex, six-dimensional chemical analyte discrimination/quantification task. Because two-dimensional structure naturally governs the optical sensor response to chemical analytes, our findings provide evidence that the proposed feature extraction strategy may be a valuable tool for deepening our understanding of the performance of optically based chemical sensors, as well as an important step toward their implementation in more realistic chemo-sensing applications. Copyright © 2013 Elsevier B.V. All rights reserved.
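
    One level of the 2-D discrete wavelet transform can be written out in a few lines for the Haar wavelet. This is only an illustration of the feature extraction idea; a real sensor pipeline would use a wavelet library with deeper decompositions and a smoother wavelet.

```python
# One-level 2-D Haar DWT: transform along rows, then along columns,
# yielding four subbands (LL = coarse averages, LH/HL/HH = details).

def haar_1d(row):
    avg = [(row[i] + row[i + 1]) / 2.0 for i in range(0, len(row), 2)]
    det = [(row[i] - row[i + 1]) / 2.0 for i in range(0, len(row), 2)]
    return avg, det

def haar_2d(image):
    """image: 2-D list with even dimensions -> (LL, LH, HL, HH)."""
    lo, hi = [], []
    for row in image:                      # transform along rows
        a, d = haar_1d(row)
        lo.append(a)
        hi.append(d)
    def cols(mat):                         # then along columns
        out_a, out_d = [], []
        for c in zip(*mat):
            a, d = haar_1d(list(c))
            out_a.append(a)
            out_d.append(d)
        return ([list(r) for r in zip(*out_a)],
                [list(r) for r in zip(*out_d)])
    LL, LH = cols(lo)
    HL, HH = cols(hi)
    return LL, LH, HL, HH

img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
LL, LH, HL, HH = haar_2d(img)   # LL holds the 2x2 block averages
```

    In the paper's setting, statistics of such subbands would serve as the features fed to the discrimination/quantification stage.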

  2. Impulsive-Analytic Disposition in Mathematical Problem Solving: A Survey and a Mathematics Test

    ERIC Educational Resources Information Center

    Lim, Kien H.; Wagler, Amy

    2012-01-01

    The Likelihood-to-Act (LtA) survey and a mathematics test were used in this study to assess students' impulsive-analytic disposition in the context of mathematical problem solving. The results obtained from these two instruments were compared to those obtained using two widely-used scales: Need for Cognition (NFC) and Barratt Impulsivity Scale…

  3. Student Learning and Evaluation in Analytical Chemistry Using a Problem-Oriented Approach and Portfolio Assessment

    ERIC Educational Resources Information Center

    Boyce, Mary C.; Singh, Kuki

    2008-01-01

    This paper describes a student-focused activity that promotes effective learning in analytical chemistry. Providing an environment where students were responsible for their own learning allowed them to participate at all levels from designing the problem to be addressed, planning the laboratory work to support their learning, to providing evidence…

  4. Similarity solution of the Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Lockington, D. A.; Parlange, J.-Y.; Parlange, M. B.; Selker, J.

    Similarity transforms of the Boussinesq equation in a semi-infinite medium are available when the boundary conditions are a power of time. The Boussinesq equation is reduced from a partial differential equation to a boundary-value problem. Chen et al. [Trans Porous Media 1995;18:15-36] use a hodograph method to derive an integral equation formulation of the new differential equation which they solve by numerical iteration. In the present paper, the convergence of their scheme is improved such that numerical iteration can be avoided for all practical purposes. However, a simpler analytical approach is also presented which is based on Shampine's transformation of the boundary value problem to an initial value problem. This analytical approximation is remarkably simple and yet more accurate than the analytical hodograph approximations.
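
    Shampine-type transformations exploit a scaling invariance of the ODE to replace the boundary-value problem with a single initial-value integration, with no shooting iteration. As a hedged stand-in for the Boussinesq similarity equation, the same trick is shown below on the classical Blasius equation 2f''' + f f'' = 0 with f(0) = f'(0) = 0, f'(inf) = 1: if f is a solution, so is a f(a x), so one integrates once with the convenient guess f''(0) = 1 and rescales afterwards.

```python
# Scaling trick (Shampine-style BVP -> IVP) demonstrated on Blasius.
# Integrate 2 f''' + f f'' = 0 once with f(0)=f'(0)=0, f''(0)=1 by RK4,
# then rescale: for g(x) = a f(a x), g''(0) = a^3 f''(0) and
# g'(inf) = a^2 f'(inf), so the true f''(0) is c**(-3/2) with
# c = f'(inf) of the trial solution.

def rk4_blasius(fpp0, x_end=12.0, h=0.01):
    def deriv(s):
        f, fp, fpp = s
        return (fp, fpp, -0.5 * f * fpp)
    s = (0.0, 0.0, fpp0)
    for _ in range(int(x_end / h)):
        k1 = deriv(s)
        k2 = deriv(tuple(v + 0.5 * h * k for v, k in zip(s, k1)))
        k3 = deriv(tuple(v + 0.5 * h * k for v, k in zip(s, k2)))
        k4 = deriv(tuple(v + h * k for v, k in zip(s, k3)))
        s = tuple(v + h / 6.0 * (p + 2 * q + 2 * r + w)
                  for v, p, q, r, w in zip(s, k1, k2, k3, k4))
    return s  # (f, f', f'') at x_end

_, c, _ = rk4_blasius(1.0)   # c approximates f'(inf) of the trial run
fpp0 = c ** -1.5             # recovered true f''(0), about 0.33206
```

    The single integration plus algebraic rescaling is exactly the kind of iteration-free solution the abstract attributes to Shampine's transformation.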

  5. Dealing with Institutional Racism on Campus: Initiating Difficult Dialogues and Social Justice Advocacy Interventions

    ERIC Educational Resources Information Center

    D'Andrea, Michael; Daniels, Judy

    2007-01-01

    The authors describe social justice advocacy interventions to initiate difficult discussions at the university where they are employed. They emphasize the need to foster difficult dialogues about the problem of institutional racism among students, faculty members, and administrators where they work. The Privileged Identity Exploration (PIE) model…

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, D.W.; Boparai, A.S.; Bowers, D.L.

    This report summarizes the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 2000 (October 1999 through September 2000). This annual progress report, which is the seventeenth in this series for the ACL, describes effort on continuing projects, work on new projects, and contributions of the ACL staff to various programs at ANL. The ACL operates within the ANL system as a full-cost-recovery service center, but it has a mission that includes a complementary research and development component: The Analytical Chemistry Laboratory will provide high-quality, cost-effective chemical analysis and related technical support to solve research problems of our clients--Argonne National Laboratory, the Department of Energy, and others--and will conduct world-class research and development in analytical chemistry and its applications. The ACL handles a wide range of analytical problems that reflects the diversity of research and development (R&D) work at ANL. Some routine or standard analyses are done, but the ACL operates more typically in a problem-solving mode in which development of methods is required or adaptation of techniques is needed to obtain useful analytical data. The ACL works with clients and commercial laboratories if a large number of routine analyses are required. Much of the support work done by the ACL is very similar to applied analytical chemistry research work.

  7. Quaternion Regularization of the Equations of the Perturbed Spatial Restricted Three-Body Problem: I

    NASA Astrophysics Data System (ADS)

    Chelnokov, Yu. N.

    2017-11-01

    We develop a quaternion method for regularizing the differential equations of the perturbed spatial restricted three-body problem by using the Kustaanheimo-Stiefel variables, which is methodologically closely related to the quaternion method for regularizing the differential equations of perturbed spatial two-body problem, which was proposed by the author of the present paper. A survey of papers related to the regularization of the differential equations of the two- and threebody problems is given. The original Newtonian equations of perturbed spatial restricted three-body problem are considered, and the problem of their regularization is posed; the energy relations and the differential equations describing the variations in the energies of the system in the perturbed spatial restricted three-body problem are given, as well as the first integrals of the differential equations of the unperturbed spatial restricted circular three-body problem (Jacobi integrals); the equations of perturbed spatial restricted three-body problem written in terms of rotating coordinate systems whose angular motion is described by the rotation quaternions (Euler (Rodrigues-Hamilton) parameters) are considered; and the differential equations for angular momenta in the restricted three-body problem are given. Local regular quaternion differential equations of perturbed spatial restricted three-body problem in the Kustaanheimo-Stiefel variables, i.e., equations regular in a neighborhood of the first and second body of finite mass, are obtained. The equations are systems of nonlinear nonstationary eleventhorder differential equations. These equations employ, as additional dependent variables, the energy characteristics of motion of the body under study (a body of a negligibly small mass) and the time whose derivative with respect to a new independent variable is equal to the distance from the body of negligibly small mass to the first or second body of finite mass. 
The equations obtained in the paper permit developing regular methods for determining solutions, in analytical or numerical form, of problems that are difficult for classical methods, such as the motion of a body of negligibly small mass in a neighborhood of the other two bodies of finite masses.

  8. Data mining to support simulation modeling of patient flow in hospitals.

    PubMed

    Isken, Mark W; Rajagopalan, Balaji

    2002-04-01

    Spiraling health care costs in the United States are driving institutions to continually address the challenge of optimizing the use of scarce resources. One of the first steps towards optimizing resources is to utilize capacity effectively. For hospital capacity planning problems such as the allocation of inpatient beds, computer simulation is often the method of choice. One of the more difficult aspects of using simulation models for such studies is the creation of a manageable set of patient types to include in the model. The objective of this paper is to demonstrate the potential of using data mining techniques, specifically clustering techniques such as K-means, to help guide the development of patient type definitions for purposes of building computer simulation or analytical models of patient flow in hospitals. Using data from a hospital in the Midwest, this study brings forth several important issues that researchers need to address when applying clustering techniques in general and specifically to hospital data.
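The clustering step this record describes can be sketched with a plain Lloyd's-algorithm K-means on hypothetical patient records. The two features below (length of stay in days, admission hour) and their values are invented for illustration, not the paper's data; a real study would standardize features and validate the number of clusters.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each record to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids as cluster means (keep old centroid if empty).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Hypothetical patient records: [length of stay (days), admission hour].
X = np.array([[1.0, 9], [1.2, 10], [1.1, 8],      # short-stay, daytime
              [6.5, 22], [7.0, 23], [6.8, 21]])   # long-stay, night
cents, labels = kmeans(X, k=2)
```

The resulting labels separate the short-stay daytime admissions from the long-stay night admissions, which is the kind of data-driven patient-type definition the paper feeds into a simulation model.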

  9. Fully distributed absolute blood flow velocity measurement for middle cerebral arteries using Doppler optical coherence tomography

    PubMed Central

    Qi, Li; Zhu, Jiang; Hancock, Aneeka M.; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D.; Chen, Zhongping

    2016-01-01

    Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, the measurement of the total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach can achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement. PMID:26977365

  10. Fully distributed absolute blood flow velocity measurement for middle cerebral arteries using Doppler optical coherence tomography.

    PubMed

    Qi, Li; Zhu, Jiang; Hancock, Aneeka M; Dai, Cuixia; Zhang, Xuping; Frostig, Ron D; Chen, Zhongping

    2016-02-01

    Doppler optical coherence tomography (DOCT) is considered one of the most promising functional imaging modalities for neurobiology research and has demonstrated the ability to quantify cerebral blood flow velocity with high accuracy. However, the measurement of the total absolute blood flow velocity (BFV) of major cerebral arteries is still a difficult problem since it is related to vessel geometry. In this paper, we present a volumetric vessel reconstruction approach that is capable of measuring the absolute BFV distributed along the entire middle cerebral artery (MCA) within a large field-of-view. The Doppler angle at each point of the MCA, representing the vessel geometry, is derived analytically by localizing the artery from pure DOCT images through vessel segmentation and skeletonization. Our approach can achieve automatic quantification of the fully distributed absolute BFV across different vessel branches. Experiments on rodents using swept-source optical coherence tomography showed that our approach was able to reveal the consequences of permanent MCA occlusion with absolute BFV measurement.
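The Doppler-angle correction at the heart of the two records above can be sketched as follows. The vessel tangent here is an assumed example direction, not one derived from a real DOCT skeleton, and the beam is taken along the z axis.

```python
import numpy as np

def doppler_angle(tangent, beam=np.array([0.0, 0.0, 1.0])):
    """Angle between the local vessel direction and the OCT beam axis."""
    t = tangent / np.linalg.norm(tangent)
    return np.arccos(abs(np.dot(t, beam)))

def absolute_velocity(v_axial, tangent):
    """Correct the measured axial (projected) velocity by the Doppler angle."""
    theta = doppler_angle(tangent)
    return v_axial / np.cos(theta)

# A vessel segment running at 60 degrees to the beam: the axial component is
# half the true speed, so the angle correction doubles the measurement.
tangent = np.array([np.sin(np.pi / 3), 0.0, np.cos(np.pi / 3)])
v_abs = absolute_velocity(v_axial=2.0, tangent=tangent)   # -> 4.0
```

In the paper the tangent at each point comes from skeletonizing the segmented artery, which is what makes the correction available along the whole MCA rather than at a single site.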

  11. Prevalence of traumatic brain injury in the general adult population: a meta-analysis.

    PubMed

    Frost, R Brock; Farrer, Thomas J; Primosch, Mark; Hedges, Dawson W

    2013-01-01

    Traumatic brain injury (TBI) is a significant public-health concern. To understand the extent of TBI, it is important to assess the prevalence of TBI in the general population. However, the prevalence of TBI in the general population can be difficult to measure because of differing definitions of TBI, differing TBI severity levels, and underreporting of sport-related TBI. Additionally, prevalence reports vary from study to study. In the present study, we used meta-analytic methods to estimate the prevalence of TBI in the adult general population. Across 15 studies, all originating from developed countries, which included 25,134 adults, 12% had a history of TBI. Men had more than twice the odds of having had a TBI than did women, suggesting that male gender is a risk factor for TBI. The adverse behavioral, cognitive and psychiatric effects associated with TBI coupled with the high prevalence of TBI identified in this study indicate that TBI is a considerable public and personal-health problem. Copyright © 2012 S. Karger AG, Basel.
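The simplest form of the pooling used in such a meta-analysis can be sketched with hypothetical per-study counts. The four studies below are invented for illustration, not the 15 studies pooled in the record above, and real meta-analyses typically also fit random-effects models to handle between-study heterogeneity.

```python
# Hypothetical per-study counts: (TBI cases, sample size).
studies = [(30, 250), (55, 400), (12, 120), (48, 380)]

cases = sum(c for c, n in studies)
total = sum(n for c, n in studies)
pooled = cases / total   # size-weighted (fixed-effect) pooled prevalence
```

Here the pooled prevalence is simply the total case count over the total sample, which weights each study by its size.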

  12. Governing Influence of Thermodynamic and Chemical Equilibria on the Interfacial Properties in Complex Fluids.

    PubMed

    Harikrishnan, A R; Dhar, Purbarun; Gedupudi, Sateesh; Das, Sarit K

    2018-04-12

    We propose a comprehensive analysis and a quasi-analytical mathematical formalism to predict the surface tension and contact angles of complex surfactant-infused nanocolloids. The model rests on the foundations of the interaction potentials for the interfacial adsorption-desorption dynamics in complex multicomponent colloids. Surfactant-infused nanoparticle-laden interface problems are difficult to deal with because of the many-body interactions and interfaces involved at the meso-nanoscales. The model is based on the governing role of thermodynamic and chemical equilibrium parameters in modulating the interfacial energies. The influence of parameters such as the presence of surfactants, nanoparticles, and surfactant-capped nanoparticles on interfacial dynamics is revealed by the analysis. Solely based on the knowledge of the interfacial properties of independent surfactant solutions and nanocolloids, the same can be deduced for complex surfactant-based nanocolloids through the proposed approach. The model accurately predicts the equilibrium surface tension and contact angle of complex nanocolloids reported in the existing literature and in the present experimental findings.

  13. Coalmine: an experience in building a system for social media analytics

    NASA Astrophysics Data System (ADS)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Social media networks make up a large percentage of the content available on the Internet, and interacting with them accounts for most of the time users spend online today. All of the seemingly small pieces of information added by billions of people result in an enormous, rapidly changing dataset. Searching, correlating, and understanding billions of individual posts is a significant technical problem; even the data from a single site such as Twitter can be difficult to manage. In this paper, we present Coalmine, a social network data-mining system. We describe the overall architecture of Coalmine, including the capture, storage and search components. We also describe our experience with pulling 150-350 GB of Twitter data per day through their REST API. Specifically, we discuss our experience with the evolution of the Twitter data APIs from 2011 to 2012 and present strategies for maximizing the amount of data collected. Finally, we describe our experiences looking for evidence of botnet command and control channels and examining patterns of SPAM in the Twitter dataset.

  14. Science deficiency in conservation practice: the monitoring of tiger populations in India

    USGS Publications Warehouse

    Karanth, K.U.; Nichols, J.D.; Seidensticker, J.; Dinerstein, Eric; Smith, J.L.D.; McDougal, C.; Johnsingh, A.J.T.; Chundawat, Raghunandan S.; Thapar, V.

    2003-01-01

    Conservation practices are supposed to be refined by advancing scientific knowledge. We study this phenomenon in the context of monitoring tiger populations in India by evaluating the 'pugmark census method' employed by wildlife managers for three decades. We use an analytical framework of modern animal population sampling to test the efficacy of the pugmark censuses using scientific data on tigers and our field observations. We identify three critical goals for monitoring tiger populations, in order of increasing sophistication: (1) distribution mapping, (2) tracking relative abundance, and (3) estimation of absolute abundance. We demonstrate that the present census-based paradigm does not work because it ignores the first two simpler goals and targets, but fails to achieve, the most difficult third goal. We point out the utility and ready availability of alternative monitoring paradigms that deal with the central problems of spatial sampling and observability. We propose an alternative sampling-based approach that can be tailored to meet the practical needs of tiger monitoring at different levels of refinement.

  15. A Comparison of Reduced Order Modeling Techniques Used in Dynamic Substructuring.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roettgen, Dan; Seegar, Ben; Tai, Wei Che

    Experimental dynamic substructuring is a means whereby a mathematical model for a substructure can be obtained experimentally and then coupled to a model for the rest of the assembly to predict the response. Recently, various methods have been proposed that use a transmission simulator to overcome sensitivity to measurement errors and to exercise the interface between the substructures, including the Craig-Bampton, Dual Craig-Bampton, and Craig-Mayes methods. This work compares the advantages and disadvantages of these reduced order modeling strategies for two dynamic substructuring problems. The methods are first used on an analytical beam model to validate the methodologies. Then they are used to obtain an experimental model for a structure consisting of a cylinder with several components inside connected to the outside case by foam with uncertain properties. This represents an exceedingly difficult structure to model, and so experimental substructuring could be an attractive way to obtain a model of the system.
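A minimal numeric sketch of the Craig-Bampton reduction named above, on a toy fixed-free spring-mass chain. The unit masses and stiffnesses are assumptions for illustration, and the paper's transmission-simulator variants add further machinery on top of this basic fixed-interface reduction.

```python
import numpy as np

# Full model: 6 unit masses in a fixed-free spring chain with unit stiffness.
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0          # free tip
M = np.eye(n)            # lumped unit masses

i = np.arange(n - 1)     # interior DOFs
b = np.array([n - 1])    # interface (boundary) DOF

Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]

# Constraint mode: static interior deflection for a unit interface motion.
psi = -np.linalg.solve(Kii, Kib)

# Fixed-interface normal modes of the interior (Mii = I); keep the lowest two.
w2, phi = np.linalg.eigh(Kii)
phi = phi[:, :2]

# Craig-Bampton transformation maps (modal, interface) DOFs to physical DOFs.
T = np.zeros((n, 3))
T[:-1, :2] = phi
T[:-1, 2:] = psi
T[-1, 2] = 1.0

Kr = T.T @ K @ T
Mr = T.T @ M @ T

# Lowest eigenvalue of the reduced model vs. the full model.
red = np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real.min()
full = np.linalg.eigvalsh(K).min()
```

Three generalized coordinates (two fixed-interface modes plus one constraint mode) reproduce the lowest eigenvalue of the six-DOF model closely, and by the Rayleigh-Ritz property the reduced eigenvalue bounds the full one from above.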

  16. A Comparison of Reduced Order Modeling Techniques Used in Dynamic Substructuring [PowerPoint]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roettgen, Dan; Seeger, Benjamin; Tai, Wei Che

    Experimental dynamic substructuring is a means whereby a mathematical model for a substructure can be obtained experimentally and then coupled to a model for the rest of the assembly to predict the response. Recently, various methods have been proposed that use a transmission simulator to overcome sensitivity to measurement errors and to exercise the interface between the substructures, including the Craig-Bampton, Dual Craig-Bampton, and Craig-Mayes methods. This work compares the advantages and disadvantages of these reduced order modeling strategies for two dynamic substructuring problems. The methods are first used on an analytical beam model to validate the methodologies. Then they are used to obtain an experimental model for a structure consisting of a cylinder with several components inside connected to the outside case by foam with uncertain properties. This represents an exceedingly difficult structure to model, and so experimental substructuring could be an attractive way to obtain a model of the system.

  17. The use of precession modulation for nutation control in spin-stabilized spacecraft

    NASA Technical Reports Server (NTRS)

    Taylor, J. M.; Donner, R. J.; Tasar, V.

    1974-01-01

    The relations which determine the nutation effects induced in a spinning spacecraft by periodic precession thrust pulses are derived analytically. By utilizing the idea that nutation need only be observed just before each precession thrust pulse, a difficult continuous-time derivation is replaced by a simple discrete-time derivation using z-transforms. The analytic results obtained are used to develop two types of modulated precession control laws which use the precession maneuver to concurrently control nutation. Results are illustrated by digital simulation of an actual spacecraft configuration.

  18. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    NASA Astrophysics Data System (ADS)

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for big data, a pair of integral and differential equations is considered, which are related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. After substituting them into the PSWF differential equation, a much smaller matrix eigenvalue problem is obtained than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, with the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.

  19. A SIMPLE COLORIMETRIC METHOD TO DETECT BIOLOGICAL EVIDENCE OF HUMAN EXPOSURE TO MICROCYSTINS

    EPA Science Inventory

    Toxic cyanobacteria are contaminants of surface waters worldwide. Microcystins are some of the most commonly detected toxins. Biological evidence of human exposure may be difficult to obtain due to limitations associated with cost, laboratory capacity, analytic support, and exp...

  20. The National energy modeling system

    NASA Astrophysics Data System (ADS)

    The DOE uses a variety of energy and economic models to forecast energy supply and demand. It also uses a variety of more narrowly focused analytical tools to examine energy policy options. For the purposes of this work, this set of models and analytical tools is called the National Energy Modeling System (NEMS). The NEMS is the result of many years of development of energy modeling and analysis tools, many of which were developed for different applications and under different assumptions. As such, NEMS is believed to be less than satisfactory in certain areas. For example, NEMS is difficult to keep updated and expensive to use. Various outputs are often difficult to reconcile. Products were not required to interface, but were designed to stand alone. Because different developers were involved, the inner workings of the NEMS are often not easily or fully understood. Even with these difficulties, however, NEMS comprises the best tools currently identified to deal with our global, national and regional energy modeling and energy analysis needs.

  1. Growing geometric reasoning in solving problems of analytical geometry through the mathematical communication problems to state Islamic university students

    NASA Astrophysics Data System (ADS)

    Mujiasih; Waluya, S. B.; Kartono; Mariani

    2018-03-01

    Skill in working geometry problems depends greatly on competence in geometric reasoning. As teacher candidates, State Islamic University (UIN) students need this geometric reasoning competence. When geometric reasoning in solving geometry problems has developed well, students are expected to be able to write their ideas communicatively for the reader. A student's mathematical communication ability can thus serve as a marker of the growth of their geometric reasoning. The search for the growth of geometric reasoning in solving analytic geometry problems will therefore be characterized by the growth of mathematical communication abilities whose written work is complete, correct and sequential. This article reports a qualitative study exploring the question: can the growth of geometric reasoning in solving analytic geometry problems be characterized by the growth of mathematical communication abilities? The main activities in this research were carried out through a series of steps: (1) the lecturer trained the students on analytic geometry problems that were not routine or algorithmic but instead required high-level, divergent/open-ended reasoning; (2) students were asked to solve the problems independently, in detail, completely, in order, and correctly; (3) student answers were then corrected at each stage; (4) six students were taken as the subjects of this research; (5) the research subjects were interviewed and the researchers conducted triangulation. The results of this research were: (1) Mathematics Education students of UIN Semarang had adequate mathematical communication ability; (2) this mathematical communication ability could serve as a marker of geometric reasoning in problem solving; and (3) the geometric reasoning of the UIN students had grown into a category tending toward good.

  2. Simultaneous determination of carotenoids, tocopherols, retinol and cholesterol in ovine lyophilised samples of milk, meat, and liver and in unprocessed/raw samples of fat.

    PubMed

    Bertolín, J R; Joy, M; Rufino-Moya, P J; Lobón, S; Blanco, M

    2018-08-15

    An accurate, fast, economical and simple method is sought to determine carotenoids, tocopherols, retinol and cholesterol in lyophilised samples of ovine milk, muscle and liver and in raw samples of fat, which are difficult to lyophilise. These analytes have been studied in animal tissues to trace forage feeding and unhealthy contents. The sample treatment consisted of mild overnight saponification, liquid-liquid extraction, evaporation with a vacuum evaporator and redissolution. The quantification of the different analytes was performed by ultra-high performance liquid chromatography with a diode-array detector for carotenoids, retinol and cholesterol and a fluorescence detector for tocopherols. The retention times of the analytes were short and the resolution between analytes was very high. The limits of detection and quantification were very low. This method is suitable for all the matrices and analytes and could be adapted to other animal species with minor changes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. From Dr. Steven Ashby, Director of PNNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashby, Steven

    Powered by the creativity and imagination of more than 4,000 exceptional scientists, engineers and support professionals, at PNNL we advance the frontiers of science and address some of the most challenging problems in energy, the environment and national security. As DOE’s premier chemistry, environmental sciences and data analytics laboratory, we provide national leadership in four areas: deepening our understanding of climate science; inventing the future power grid; preventing nuclear proliferation; and speeding environmental remediation. Other areas where we make important contributions include energy storage, microbial biology and cyber security. PNNL also is home to EMSL (the Environmental Molecular Sciences Laboratory), one of DOE’s scientific user facilities. We apply these science strengths to address both national and international problems in complex adaptive systems that are too difficult for one institution to tackle alone. Take earth systems, for instance. The earth is a complex adaptive system because it involves everything from climate and microbial communities in the soil to emissions from cars and coal-powered industrial plants. All of these factors and others ultimately influence not only our environment and overall quality of life, but cause the earth to adapt in ways that must be further addressed. PNNL researchers are playing a vital role in finding solutions across every area of this complex adaptive system.

  4. Optimizing patient treatment decisions in an era of rapid technological advances: the case of hepatitis C treatment.

    PubMed

    Liu, Shan; Brandeau, Margaret L; Goldhaber-Fiebert, Jeremy D

    2017-03-01

    How long should a patient with a treatable chronic disease wait for more effective treatments before accepting the best available treatment? We develop a framework to guide optimal treatment decisions for a deteriorating chronic disease when treatment technologies are improving over time. We formulate an optimal stopping problem using a discrete-time, finite-horizon Markov decision process. The goal is to maximize a patient's quality-adjusted life expectancy. We derive structural properties of the model and analytically solve a three-period treatment decision problem. We illustrate the model with the example of treatment for chronic hepatitis C virus (HCV). Chronic HCV affects 3-4 million Americans and has been historically difficult to treat, but increasingly effective treatments have been commercialized in the past few years. We show that the optimal treatment decision is more likely to be to accept currently available treatment-despite expectations for future treatment improvement-for patients who have high-risk history, who are older, or who have more comorbidities. Insights from this study can guide HCV treatment decisions for individual patients. More broadly, our model can guide treatment decisions for curable chronic diseases by finding the optimal treatment policy for individual patients in a heterogeneous population.
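The backward-induction structure of such a finite-horizon optimal stopping problem can be sketched with toy numbers. The efficacy and deterioration values below are invented for illustration, not the paper's HCV calibration.

```python
def solve(efficacy, wait_cost):
    """Backward induction for a stylized treat-now-or-wait stopping problem.

    efficacy[t]: QALYs gained if the patient accepts the therapy available in
    period t (improving over time); wait_cost: QALYs lost per period of
    deterioration while waiting.
    """
    T = len(efficacy)
    V = [0.0] * (T + 1)          # value after the horizon is zero
    policy = [None] * T
    for t in reversed(range(T)):
        # Either lock in today's therapy, or deteriorate one period and wait.
        V[t], policy[t] = max((efficacy[t], "treat"),
                              (V[t + 1] - wait_cost, "wait"))
    return V[0], policy

# A slowly deteriorating patient can afford to wait for improving technology,
# while a rapidly deteriorating (high-risk) one should accept treatment now,
# mirroring the paper's qualitative finding.
_, slow = solve([10.0, 11.0, 13.0], wait_cost=1.2)
_, fast = solve([10.0, 11.0, 13.0], wait_cost=2.5)
```

With these numbers the slow progressor waits two periods for the better therapy, while the fast progressor treats immediately in every period.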

  5. Optimizing Patient Treatment Decisions in an Era of Rapid Technological Advances: The Case of Hepatitis C Treatment

    PubMed Central

    Liu, Shan; Goldhaber-Fiebert, Jeremy D.; Brandeau, Margaret L.

    2015-01-01

    How long should a patient with a treatable chronic disease wait for more effective treatments before accepting the best available treatment? We develop a framework to guide optimal treatment decisions for a deteriorating chronic disease when treatment technologies are improving over time. We formulate an optimal stopping problem using a discrete-time, finite-horizon Markov decision process. The goal is to maximize a patient’s quality-adjusted life expectancy. We derive structural properties of the model and analytically solve a three-period treatment decision problem. We illustrate the model with the example of treatment for chronic hepatitis C virus (HCV). Chronic HCV affects 3–4 million Americans and has been historically difficult to treat, but increasingly effective treatments have been commercialized in the past few years. We show that the optimal treatment decision is more likely to be to accept currently available treatment—despite expectations for future treatment improvement—for patients who have high-risk history, who are older, or who have more comorbidities. Insights from this study can guide HCV treatment decisions for individual patients. More broadly, our model can guide treatment decisions for curable chronic diseases by finding the optimal treatment policy for individual patients in a heterogeneous population. PMID:26188961

  6. Intelligent classifier for dynamic fault patterns based on hidden Markov model

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Feng, Yuguang; Yu, Jinsong

    2006-11-01

    It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in the measuring space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault pattern classifier is efficient and convenient for detecting and diagnosing new faults.
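The DTW stage can be illustrated with a minimal 1-D dynamic-programming implementation. The sequences below are invented stand-ins for observation vectors; real aeroengine process data would be multivariate and much longer.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same transient sampled twice as slowly still matches at zero cost,
# while a genuinely different signal does not.
ref = [0, 1, 3, 2, 0]
slow = [0, 0, 1, 1, 3, 3, 2, 2, 0, 0]
other = [0, 2, 0, 2, 0]
d_same = dtw(ref, slow)
d_diff = dtw(ref, other)
```

This time-axis invariance is exactly why the paper uses DTW before feature extraction: dynamic processes that unfold at different rates still produce comparable error vectors.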

  7. [Studies of time-course changes in human body balance after ingestion of long-acting hypnotics].

    PubMed

    Nakamura, Masahiro; Ishii, Masanori; Niwa, Yoji; Yamazaki, Momoko; Ito, Hiroshi

    2004-02-01

    Falling accidents are a serious nosocomial problem, with balance disorders after the ingestion of hypnotics said to be a cause. Based on the results of animal studies, it was postulated that this problem involves the muscle relaxation that is a pharmacological effect of benzodiazepines (BZP). No reports have, to our knowledge, been made of time-course changes in human body balance after ingestion of hypnotics. Accordingly, we used quazepam (Doral), a long-acting hypnotic considered to show comparatively weak muscle relaxation, to study static balance after drug ingestion in human volunteers. Briefly, informed consent was obtained from 8 healthy adults, then a gait analytic system (Gangas) was used to test static balance after drug ingestion (Mann and Romberg tests). We also measured circulating drug concentration over time. Our results showed that balance disorders occurred after quazepam ingestion, with an unstable posture particularly striking. Given the function of quazepam receptors, it is difficult to surmise that the balance disorders after drug ingestion were due to the drug's muscle relaxation. We surmised that central nervous system inhibition related to arousal was involved. We found a strong correlation between the manifestation of balance disorders after drug ingestion and circulating drug concentration.

  8. Putting the Aero Back into Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Bousman, William G.

    2000-01-01

    The lack of progress in understanding the physics of rotorcraft loads and vibration over the last 30 years is addressed in this paper. As befits this extraordinarily difficult problem, the reasons for the lack of progress are complicated and difficult to ascertain. It is proposed here that the difficulty lies within at least three areas: 1) a loss of perspective as to what are the key factors in rotor loads and vibration, 2) the overlooking of serious unsolved problems in the field, and 3) cultural barriers that impede progress. Some criteria are suggested for future research to provide a more concentrated focus on the problem.

  9. Putting the Aero Back Into Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Bousman, William G.; Aiken, Edwin W. (Technical Monitor)

    1999-01-01

    The lack of progress in understanding the physics of rotorcraft loads and vibration over the last 30 years is addressed in this paper. As befits this extraordinarily difficult problem, the reasons for the lack of progress are complicated and difficult to ascertain. It is proposed here that the difficulty lies within at least three areas: 1) a loss of perspective as to what are the key factors in rotor loads and vibration, 2) the overlooking of serious unsolved problems in the field, and 3) cultural barriers that impede progress. Some criteria are suggested for future research to provide a more concentrated focus on the problem.

  10. Analytical Tools in School Finance Reform.

    ERIC Educational Resources Information Center

    Johns, R. L.

    This paper discusses the problem of analyzing variations in the educational opportunities provided by different school districts and describes how to assess the impact of school finance alternatives through use of various analytical tools. The author first examines relatively simple analytical methods, including calculation of per-pupil…

  11. Learning Analytics: Challenges and Limitations

    ERIC Educational Resources Information Center

    Wilson, Anna; Watson, Cate; Thompson, Terrie Lynn; Drew, Valerie; Doyle, Sarah

    2017-01-01

    Learning analytic implementations are increasingly being included in learning management systems in higher education. We lay out some concerns with the way learning analytics--both data and algorithms--are often presented within an unproblematized Big Data discourse. We describe some potential problems with the often implicit assumptions about…

  12. Back analysis of geomechanical parameters in underground engineering using artificial bee colony.

    PubMed

    Zhu, Changxing; Zhao, Hongbo; Zhao, Ming

    2014-01-01

    Accurate geomechanical parameters are critical in tunnel excavation, design, and support. In this paper, a displacement back analysis based on the artificial bee colony (ABC) algorithm is proposed to identify geomechanical parameters from monitored displacements. ABC is used as a global optimization algorithm to search for the unknown geomechanical parameters in problems with an analytical solution. For problems without an analytical solution, optimal back analysis is time-consuming, so a least squares support vector machine (LSSVM) is used to build the relationship between the unknown geomechanical parameters and displacement and to improve the efficiency of the back analysis. The proposed method was applied to a tunnel with an analytical solution and a tunnel without an analytical solution. The results show the proposed method is feasible.
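A minimal sketch of such an ABC-based back analysis, using a hypothetical one-parameter forward model in place of a real tunnel solution. The convergence formula, measurement radii, true parameter value, and search bounds below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model standing in for the tunnel's analytical solution:
# radial convergence u(r) = p0 * a**2 / (2 * G * r) around a circular opening,
# with in-situ stress p0 and radius a known and shear modulus G to identify.
a, p0 = 3.0, 2.0
radii = np.array([3.0, 4.5, 6.0, 9.0])        # monitoring points

def forward(G):
    return p0 * a**2 / (2.0 * G * radii)

observed = forward(800.0)                     # "monitored" data; true G = 800

def misfit(G):
    return np.sum((forward(G) - observed) ** 2)

# Minimal artificial bee colony over G in [100, 2000]: employed/onlooker bees
# perturb food sources toward random neighbours; scouts re-seed stale sources.
lo, hi, ns, limit = 100.0, 2000.0, 10, 20
food = rng.uniform(lo, hi, ns)
fit = np.array([misfit(g) for g in food])
trials = np.zeros(ns, dtype=int)
best_G, best_f = food[fit.argmin()], fit.min()

for _ in range(200):
    for i in range(ns):
        k = (i + 1 + rng.integers(ns - 1)) % ns      # a neighbour, k != i
        cand = np.clip(food[i] + rng.uniform(-1.0, 1.0) * (food[i] - food[k]),
                       lo, hi)
        f = misfit(cand)
        if f < fit[i]:
            food[i], fit[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1
        if fit[i] < best_f:
            best_G, best_f = food[i], fit[i]
        if trials[i] > limit:                        # scout: abandon the source
            food[i] = rng.uniform(lo, hi)
            fit[i], trials[i] = misfit(food[i]), 0
```

The colony drives the displacement misfit toward zero and recovers the assumed shear modulus; the paper's LSSVM surrogate replaces `forward` when no closed-form solution exists.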

  13. Methods for geochemical analysis

    USGS Publications Warehouse

    Baedecker, Philip A.

    1987-01-01

    The laboratories for analytical chemistry within the Geologic Division of the U.S. Geological Survey are administered by the Office of Mineral Resources. The laboratory analysts provide analytical support to those programs of the Geologic Division that require chemical information and conduct basic research in analytical and geochemical areas vital to the furtherance of Division program goals. Laboratories for research and geochemical analysis are maintained at the three major centers in Reston, Virginia, Denver, Colorado, and Menlo Park, California. The Division has an expertise in a broad spectrum of analytical techniques, and the analytical research is designed to advance the state of the art of existing techniques and to develop new methods of analysis in response to special problems in geochemical analysis. The geochemical research and analytical results are applied to the solution of fundamental geochemical problems relating to the origin of mineral deposits and fossil fuels, as well as to studies relating to the distribution of elements in varied geologic systems, the mechanisms by which they are transported, and their impact on the environment.

  14. Analytical and experimental studies on detection of longitudinal, L and inverted T cracks in isotropic and bi-material beams based on changes in natural frequencies

    NASA Astrophysics Data System (ADS)

    Ravi, J. T.; Nidhan, S.; Muthu, N.; Maiti, S. K.

    2018-02-01

    An analytical method for determining the dimensions of longitudinal cracks in monolithic beams, based on frequency measurements, has been extended to model L and inverted T cracks. Such cracks, including longitudinal cracks, arise in beams made of layered isotropic or composite materials. A new formulation for modelling cracks in bi-material beams is presented. Longitudinal crack segment sizes for L and inverted T cracks, varying from 2.7% to 13.6% of the length of Euler-Bernoulli beams, are considered. Both forward and inverse problems have been examined. In the forward problems, the analytical results are compared with finite element (FE) solutions. In the inverse problems, the accuracy of prediction of crack dimensions is verified using FE results as input for virtual testing. The analytical results show good agreement with the actual crack dimensions. Further, experimental studies have been performed to verify the accuracy of the analytical method for predicting the dimensions of the three types of crack in isotropic and bi-material beams. The results show that the proposed formulation is reliable and can be employed for crack detection in slender beam-like structures in practice.

  15. Heat Transfer Analysis of Thermal Protection Structures for Hypersonic Vehicles

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Wang, Zhijin; Hou, Tianjiao

    2017-11-01

    This research aims to develop an analytical approach to the heat transfer problem of thermal protection systems (TPS) for hypersonic vehicles. The Laplace transform and an integral method are used to describe the temperature distribution through the TPS subject to aerodynamic heating during flight. A time-dependent incident heat flux is also taken into account. Two different cases, with heat flux and radiation boundary conditions, are studied and discussed. The results are compared with those obtained by finite element analyses and show good agreement. Although temperature profiles for such problems can be readily accessed via numerical simulations, analytical solutions give greater insight into the physical essence of the heat transfer problem. Furthermore, with the analytical approach, rapid thermal analyses and even thermal optimization can be achieved during preliminary TPS design.
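    The kind of closed-form building block behind such analyses can be illustrated with the classical constant-surface-heat-flux solution for a semi-infinite solid. This is not the paper's actual TPS model; the material properties and flux below are arbitrary assumptions chosen only to exercise the formula.

```python
import math

def temp_rise(x, t, q0, k, alpha):
    """Temperature rise of a semi-infinite solid heated by a constant
    surface heat flux q0 (classical closed-form solution):
    dT = (q0/k) * [2*sqrt(alpha*t/pi)*exp(-x**2/(4*alpha*t))
                   - x*erfc(x/(2*sqrt(alpha*t)))]"""
    s = math.sqrt(alpha * t)
    return (q0 / k) * (2.0 * math.sqrt(alpha * t / math.pi)
                       * math.exp(-x * x / (4.0 * alpha * t))
                       - x * math.erfc(x / (2.0 * s)))

# illustrative values only (roughly insulation-like material)
q0, k, alpha, t = 5.0e4, 0.5, 1.0e-6, 10.0
surface_T = temp_rise(0.0, t, q0, k, alpha)
```

    Two quick sanity checks follow directly from the formula: the surface temperature rise grows like sqrt(t), and the gradient at the wall returns exactly the imposed flux, -k * dT/dx|_{x=0} = q0.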

  16. Efficient Credit Assignment through Evaluation Function Decomposition

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Tumer, Kagan; Miikkulainen, Risto

    2005-01-01

    Evolutionary methods are powerful tools in discovering solutions for difficult continuous tasks. When such a solution is encoded over multiple genes, a genetic algorithm faces the difficult credit assignment problem of evaluating how a single gene in a chromosome contributes to the full solution. Typically a single evaluation function is used for the entire chromosome, implicitly giving each gene in the chromosome the same evaluation. This method is inefficient because a gene will get credit for the contribution of all the other genes as well. Accurately measuring the fitness of individual genes in such a large search space requires many trials. This paper instead proposes turning this single complex search problem into a multi-agent search problem, where each agent has the simpler task of discovering a suitable gene. Gene-specific evaluation functions can then be created that have better theoretical properties than a single evaluation function over all genes. This method is tested on the difficult double-pole balancing problem, showing that agents using gene-specific evaluation functions can create a successful control policy in 20 percent fewer trials than the best existing genetic algorithms. The method is extended to more distributed problems, achieving 95 percent performance gains over traditional methods in the multi-rover domain.
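    A minimal sketch of the gene-specific credit idea, in the spirit of difference evaluations: each gene is scored by how much the global evaluation changes when that gene alone is replaced by a fixed baseline, so no gene gets credit for the others' contributions. The toy objective, baseline value, and hill-climbing loop are illustrative assumptions, not the paper's actual benchmark.

```python
import random

TARGET = [1.0, -2.0, 0.5]   # hidden optimum, used only inside the global evaluation

def global_eval(genes):
    """Toy 'full solution' score: closer to TARGET is better."""
    return -sum((g - t) ** 2 for g, t in zip(genes, TARGET))

def gene_eval(genes, i, baseline=0.0):
    """Gene-specific (difference) evaluation: score of the full chromosome
    minus the score with gene i replaced by a fixed baseline, so gene i
    is credited only for its own contribution."""
    counterfactual = list(genes)
    counterfactual[i] = baseline
    return global_eval(genes) - global_eval(counterfactual)

def evolve(n_steps=2000, sigma=0.3, seed=1):
    """Per-gene stochastic hill climbing driven by the gene-specific score."""
    rng = random.Random(seed)
    genes = [0.0, 0.0, 0.0]
    for _ in range(n_steps):
        i = rng.randrange(len(genes))
        cand = list(genes)
        cand[i] += rng.gauss(0.0, sigma)
        # accept a mutation of gene i only if gene i's own credit improves;
        # because only gene i changed, this is aligned with the global score
        if gene_eval(cand, i) > gene_eval(genes, i):
            genes = cand
    return genes
```

    The key property is alignment: improving a gene's own evaluation here is guaranteed to improve the global evaluation, while requiring far less exploration than crediting every gene with the whole chromosome's score.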

  17. Matched Interface and Boundary Method for Elasticity Interface Problems

    PubMed Central

    Wang, Bao; Xia, Kelin; Wei, Guo-Wei

    2015-01-01

    Elasticity theory is an important component of continuum mechanics and has widespread applications in science and engineering. Material interfaces are ubiquitous in nature and man-made devices, and often give rise to discontinuous coefficients in the governing elasticity equations. In this work, the matched interface and boundary (MIB) method is developed to address elasticity interface problems. Linear elasticity theory for both isotropic homogeneous and inhomogeneous media is employed. In our approach, Lamé’s parameters can have jumps across the interface and are allowed to be position dependent in modeling isotropic inhomogeneous materials. Both strong discontinuity, i.e., a discontinuous solution, and weak discontinuity, namely, discontinuous derivatives of the solution, are considered in the present study. In the proposed method, fictitious values are utilized so that standard central finite difference schemes can be employed regardless of the interface. Interface jump conditions are enforced on the interface, which, in turn, accurately determine the fictitious values. We design new MIB schemes to account for complex interface geometries. In particular, the cross derivatives in the elasticity equations are difficult to handle for complex interface geometries. We propose secondary fictitious values and construct geometry-based interpolation schemes to overcome this difficulty. Numerous analytical examples are used to validate the accuracy, convergence and robustness of the present MIB method for elasticity interface problems with both small and large curvatures, strong and weak discontinuities, and constant and variable coefficients. Numerical tests indicate second order accuracy in both L∞ and L2 norms. PMID:25914439

  18. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  19. Transcript markers of herbicide stress in Arabidopsis and their cross-species extrapolation to Brassica

    EPA Science Inventory

    Low concentrations and short environmental persistence times of some herbicides make it difficult to develop analytical methods to detect herbicide residues in plants or soils. In contrast, genomics may provide tools to identify herbicide exposure to plants in field settings. Usi...

  20. Evaluation of alternative approaches for measuring n-octanol/water partition coefficients for methodologically challenging chemicals (MCCs)

    EPA Science Inventory

    Measurements of n-octanol/water partition coefficients (KOW) for highly hydrophobic chemicals, i.e., greater than 10^8, are extremely difficult and are rarely made, in part because the vanishingly small concentrations in the water phase require extraordinary analytical sensitivity...

  1. Curriculum Innovation for Marketing Analytics

    ERIC Educational Resources Information Center

    Wilson, Elizabeth J.; McCabe, Catherine; Smith, Robert S.

    2018-01-01

    College graduates need better preparation for and experience in data analytics for higher-quality problem solving. Using the curriculum innovation framework of Borin, Metcalf, and Tietje (2007) and case study research methods, we offer rich insights about one higher education institution's work to address the marketing analytics skills gap.…

  2. [Quality assurance in airway management: education and training for difficult airway management].

    PubMed

    Kaminoh, Yoshiroh

    2006-01-01

    A respiratory problem is one of the main causes of death or severe brain damage in the perioperative period. Three major factors in respiratory problems are esophageal intubation, inadequate ventilation, and the difficult airway. The widespread use of pulse oximetry and capnography has reduced the incidences of esophageal intubation and inadequate ventilation, but the difficult airway still accounts for a large portion of the causes of adverse events during anesthesia. The "Practice guideline for management of the difficult airway" was proposed by the American Society of Anesthesiologists (ASA) in 1992 and 2002. Improvements in knowledge, technical skills, and cognitive skills are necessary for education and training in difficult airway management. "The practical seminar of difficult airway management (DAM practical seminar)" has been cosponsored by the Japanese Association of Medical Simulation (JAMS) at the 51st and 52nd annual meetings of the Japanese Society of Anesthesiologists and the 24th annual meeting of the Japanese Society for Clinical Anesthesia. The DAM practical seminar is composed of a lecture session on the ASA difficult airway algorithm, a hands-on training session for technical skills, and a scenario-based training session for cognitive skills. Ninety-six Japanese anesthesiologists have completed the DAM practical seminar in one year. "The DAM instructor course" should be prepared immediately so that the seminar can be organized more frequently.

  3. Adequate mathematical modelling of environmental processes

    NASA Astrophysics Data System (ADS)

    Chashechkin, Yu. D.

    2012-04-01

    In environmental observations and laboratory visualizations, both large-scale flow components, such as currents, jets, vortices, and waves, and a fine structure are registered (various examples are given). Conventional mathematical modeling, both analytical and numerical, is directed mostly at the description of the energetically important flow components; the role of fine structures still remains obscure. The variety of existing models makes it difficult to choose the most adequate one and to assess their degree of correspondence with one another. The goal of the talk is to give a scrutinizing analysis of the kinematics and dynamics of flows. A difference is underlined between the concept of "motion", as a transformation of a vector space into itself with distance conservation, and the concept of "flow", as the displacement and rotation of deformable "fluid particles". The basic physical quantities of the flow, namely density, momentum, energy (entropy), and admixture concentration, are selected as the physical parameters defined by the fundamental set, which includes the differential D'Alembert, Navier-Stokes, Fourier and/or Fick equations and a closing equation of state. All of them are observable and independent. Calculations of continuous Lie groups show that only the fundamental set is characterized by the ten-parameter Galilean group reflecting the basic principles of mechanics. The presented analysis demonstrates that conventionally used approximations dramatically change the symmetries of the governing equation sets, which leads to their incompatibility or even degeneration. The fundamental set is analyzed taking into account the condition of compatibility. The high order of the set indicates the complex structure of the complete solutions, corresponding to the physical structure of real flows.
Analytical solutions of a number of problems, including flows induced by diffusion on topography and the generation of periodic internal waves by compact sources in weakly dissipative media, as well as numerical solutions of the same problems, are constructed. They include a regular perturbed function describing the large-scale component and a rich family of singular perturbed functions corresponding to fine flow components. The solutions are compared with data from laboratory experiments performed on the facilities of USU "HPC IPMec RAS" with the support of the Ministry of Education and Science of the RF (Goscontract No. 16.518.11.7059). Related problems of the completeness and accuracy of laboratory and environmental measurements are discussed.

  4. Recent advances in computational-analytical integral transforms for convection-diffusion problems

    NASA Astrophysics Data System (ADS)

    Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.

    2017-10-01

    A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.

  5. Frechet derivatives for shallow water ocean acoustic inverse problems

    NASA Astrophysics Data System (ADS)

    Odom, Robert I.

    2003-04-01

    For any inverse problem, finding a model fitting the data is only half the problem. Most inverse problems of interest in ocean acoustics yield nonunique model solutions, and involve inevitable trade-offs between model and data resolution and variance. Problems of uniqueness and of resolution and variance trade-offs can be addressed by examining the Frechet derivatives of the model-data functional with respect to the model variables. Tarantola [Inverse Problem Theory (Elsevier, Amsterdam, 1987), p. 613] published analytical formulas for the basic derivatives, e.g., derivatives of pressure with respect to elastic moduli and density. Other derivatives of interest, such as the derivative of transmission loss with respect to attenuation, can be easily constructed using the chain rule. For a range-independent medium the analytical formulas involve only the Green's function and the vertical derivative of the Green's function for the medium. A crucial advantage of the analytical formulas for the Frechet derivatives over numerical differencing is that they can be computed with a single pass of any program which supplies the Green's function. Various derivatives of interest in shallow water ocean acoustics are presented and illustrated by an application to the sensitivity of measured pressure to shallow water sediment properties. [Work supported by ONR.]
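    The chain-rule construction mentioned in the abstract can be illustrated on a toy pressure model: an assumed exponential-attenuation law rather than the actual ocean-acoustic Green's function, with arbitrary numbers. The analytic derivative of transmission loss with respect to attenuation is built from the derivative of pressure, then checked against numerical differencing.

```python
import math

def pressure(alpha, r=1000.0):
    """Toy pressure magnitude at range r with attenuation coefficient alpha."""
    return math.exp(-alpha * r) / r

def transmission_loss(alpha, r=1000.0):
    """Transmission loss in dB relative to unit pressure."""
    return -20.0 * math.log10(pressure(alpha, r))

def d_tl_d_alpha(alpha, r=1000.0):
    """Analytical derivative via the chain rule:
    dTL/dalpha = -(20/ln 10) * (1/p) * dp/dalpha, with dp/dalpha = -r * p,
    so the p-dependence cancels and dTL/dalpha = 20 * r / ln 10."""
    return 20.0 * r / math.log(10.0)

alpha0 = 1.0e-3
analytic = d_tl_d_alpha(alpha0)
h = 1.0e-7
numeric = (transmission_loss(alpha0 + h) - transmission_loss(alpha0 - h)) / (2.0 * h)
```

    The cancellation of the pressure factor mirrors the point of the abstract: once the forward quantity (here, `pressure`) is available, the derivative comes essentially for free, without repeated forward runs.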

  6. Solving Differential Equations Analytically. Elementary Differential Equations. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 335.

    ERIC Educational Resources Information Center

    Goldston, J. W.

    This unit introduces analytic solutions of ordinary differential equations. The objective is to enable the student to decide whether a given function solves a given differential equation. Examples of problems from biology and chemistry are covered. Problem sets, quizzes, and a model exam are included, and answers to all items are provided. The…

  7. Drilling Regolith: Why Is It So Difficult?

    NASA Astrophysics Data System (ADS)

    Schmitt, H. H.

    2017-10-01

    The Apollo rotary percussive drill system penetrated the lunar regolith with reasonable efficiency; however, extraction of the drill core stem proved to be very difficult on all three missions. Retractable drill stem flutes may solve this problem.

  8. Analytic theory of the selection mechanism in the Saffman-Taylor problem. [concerning shape of fingers in Hele-Shaw cell

    NASA Technical Reports Server (NTRS)

    Hong, D. C.; Langer, J. S.

    1986-01-01

    An analytic approach to the problem of predicting the widths of fingers in a Hele-Shaw cell is presented. The analysis is based on the WKB technique developed recently for dealing with the effects of surface tension in the problem of dendritic solidification. It is found that the relation between the dimensionless width lambda and the dimensionless group of parameters containing the surface tension, nu, has the form lambda - 1/2 ~ nu^(2/3) in the limit of small nu.

  9. Everyday beliefs about sources of advice for the parents of difficult children.

    PubMed

    Sonuga-Barke, E J; Thompson, M; Balding, J

    1993-01-01

    Parents were asked which sources of advice families with difficult children should seek. The results suggested a similar hierarchy of agencies for both boys and girls with emotional and behavioural problems. Teachers, doctors, child psychiatrists and health visitors all received strong positive ratings, books about children with problems received moderate positive ratings, religious leaders received the strongest negative ratings and grandparents and friends received neutral ratings. Implications for service provision are discussed.

  10. A child with a difficult airway: what do I do next?

    PubMed

    Engelhardt, Thomas; Weiss, Markus

    2012-06-01

    Difficulties in pediatric airway management are common and continue to result in significant morbidity and mortality. This review reports on current concepts in approaching a child with a difficult airway. Routine airway management in healthy children with normal airways is simple in experienced hands. Mask ventilation (oxygenation) is always possible and tracheal intubation normally simple. However, transient hypoxia is common in these children usually due to unexpected anatomical and functional airway problems or failure to ventilate during rapid sequence induction. Anatomical airway problems (upper airway collapse and adenoid hypertrophy) and functional airway problems (laryngospasm, bronchospasm, insufficient depth of anesthesia and muscle rigidity, gastric hyperinflation, and alveolar collapse) require urgent recognition and treatment algorithms due to insufficient oxygen reserves. Early muscle paralysis and epinephrine administration aids resolution of these functional airway obstructions. Children with an 'impaired' normal (foreign body, allergy, and inflammation) or an expected difficult (scars, tumors, and congenital) airway require careful planning and expertise. Training in the recognition and management of these different situations as well as a suitably equipped anesthesia workstation and trained personnel are essential. The healthy child with an unexpected airway problem requires clear strategies. The 'impaired' normal pediatric airway may be handled by anesthetists experienced with children, whereas the expected difficult pediatric airway requires dedicated pediatric anesthesia specialist care and should only be managed in specialized centers.

  11. Analytic Cognitive Style Predicts Religious and Paranormal Belief

    ERIC Educational Resources Information Center

    Pennycook, Gordon; Cheyne, James Allan; Seli, Paul; Koehler, Derek J.; Fugelsang, Jonathan A.

    2012-01-01

    An analytic cognitive style denotes a propensity to set aside highly salient intuitions when engaging in problem solving. We assess the hypothesis that an analytic cognitive style is associated with a history of questioning, altering, and rejecting (i.e., unbelieving) supernatural claims, both religious and paranormal. In two studies, we examined…

  12. When Your Child Is Difficult: Solve Your Toughest Child-Raising Problems with a Four-Step Plan That Works.

    ERIC Educational Resources Information Center

    Silberman, Mel

    Written for parents, this book discusses four steps for dealing with children's difficult behavior. The book is divided into two parts. Part 1, "The Building Blocks," discusses baseline perspectives parents need to establish in order to effectively deal with difficult behavior. Topics covered include: (1) parents' dual roles as caregivers and…

  13. An analytically solvable three-body break-up model problem in hyperspherical coordinates

    NASA Astrophysics Data System (ADS)

    Ancarani, L. U.; Gasaneo, G.; Mitnik, D. M.

    2012-10-01

    An analytically solvable S-wave model for three-particle break-up processes is presented. The scattering process is represented by a non-homogeneous Coulombic Schrödinger equation where the driven term is given by a Coulomb-like interaction multiplied by the product of a continuum wave function and a bound state in the particles' coordinates. The closed-form solution is derived in hyperspherical coordinates, leading to an analytic expression for the associated scattering transition amplitude. The proposed scattering model contains most of the difficulties encountered in real three-body scattering problems, e.g., non-separability in the electrons' spherical coordinates and Coulombic asymptotic behavior. Since the coordinates' coupling is completely different, the model provides an alternative test to that given by the Temkin-Poet model. The knowledge of the analytic solution provides an interesting benchmark to test numerical methods dealing with the double continuum, in particular in the asymptotic regions. A hyperspherical Sturmian approach recently developed for three-body collisional problems is used to reproduce the analytical results to high accuracy. In addition, we generalize the model, generating an approximate wave function possessing the correct radial asymptotic behavior corresponding to an S-wave three-body Coulomb problem. The model allows us to explore the typical structure of the solution of a three-body driven equation, to identify three regions (the driven, the Coulombic and the asymptotic), and to analyze how far one has to go to extract the transition amplitude.

  14. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  15. Analysis of composite ablators using massively parallel computation

    NASA Technical Reports Server (NTRS)

    Shia, David

    1995-01-01

    In this work, the feasibility of using massively parallel computation to study the response of ablative materials is investigated. Explicit and implicit finite difference methods are used on a massively parallel computer, the Thinking Machines CM-5. The governing equations are a set of nonlinear partial differential equations. The governing equations are developed for three sample problems: (1) transpiration cooling, (2) ablative composite plate, and (3) restrained thermal growth testing. The transpiration cooling problem is solved using a solution scheme based solely on the explicit finite difference method. The results are compared with available analytical steady-state through-thickness temperature and pressure distributions and good agreement between the numerical and analytical solutions is found. It is also found that a solution scheme based on the explicit finite difference method has the following advantages: it incorporates complex physics easily, results in a simple algorithm, and is easily parallelizable. However, a solution scheme of this kind needs very small time steps to maintain stability. A solution scheme based on the implicit finite difference method has the advantage that it does not require very small time steps to maintain stability. However, this kind of solution scheme has the disadvantages that complex physics cannot be easily incorporated into the algorithm and that the solution scheme is difficult to parallelize. A hybrid solution scheme is then developed to combine the strengths of the explicit and implicit finite difference methods and minimize their weaknesses. This is achieved by identifying the critical time scale associated with the governing equations and applying the appropriate finite difference method according to this critical time scale. The hybrid solution scheme is then applied to the ablative composite plate and restrained thermal growth problems. 
The gas storage term is included in the explicit pressure calculation of both problems. Results from ablative composite plate problems are compared with previous numerical results which did not include the gas storage term. It is found that the through-thickness temperature distribution is not affected much by the gas storage term. However, the through-thickness pressure and stress distributions, and the extent of chemical reactions are different from the previous numerical results. Two types of chemical reaction models are used in the restrained thermal growth testing problem: (1) pressure-independent Arrhenius type rate equations and (2) pressure-dependent Arrhenius type rate equations. The numerical results are compared to experimental results and the pressure-dependent model is able to capture the trend better than the pressure-independent one. Finally, a performance study is done on the hybrid algorithm using the ablative composite plate problem. It is found that there is a good speedup of performance on the CM-5. For 32 CPU's, the speedup of performance is 20. The efficiency of the algorithm is found to be a function of the size and execution time of a given problem and the effective parallelization of the algorithm. It also seems that there is an optimum number of CPU's to use for a given problem.
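    The explicit-scheme stability constraint discussed in this abstract can be seen in a minimal 1-D forward-time/centered-space (FTCS) sketch of the generic heat equation (not the ablator model; the grid and coefficient values are arbitrary): the scheme is stable only when dt <= dx^2 / (2*alpha), which is why explicit methods force very small time steps.

```python
import math

def explicit_heat_step(u, alpha, dt, dx):
    """One FTCS update of u_t = alpha * u_xx with fixed (zero) end values;
    stable only when dt <= dx**2 / (2 * alpha)."""
    r = alpha * dt / dx ** 2
    un = u[:]
    for j in range(1, len(u) - 1):
        un[j] = u[j] + r * (u[j + 1] - 2.0 * u[j] + u[j - 1])
    return un

n = 51
dx = 1.0 / (n - 1)
alpha = 1.0
dt = 0.4 * dx * dx / alpha            # inside the stability limit
u = [math.sin(math.pi * j * dx) for j in range(n)]   # single Fourier mode
t = 0.0
for _ in range(200):
    u = explicit_heat_step(u, alpha, dt, dx)
    t += dt
# this mode decays exactly as exp(-pi**2 * alpha * t)
exact = [math.sin(math.pi * j * dx) * math.exp(-math.pi ** 2 * alpha * t)
         for j in range(n)]
```

    Setting `dt` above the limit (e.g., `0.6 * dx * dx / alpha`) makes the solution blow up, which is the weakness the paper's hybrid scheme is designed to avoid while keeping the explicit method's simplicity and parallelism.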

  16. Strategies in Forecasting Outcomes in Ethical Decision-making: Identifying and Analyzing the Causes of the Problem

    PubMed Central

    Beeler, Cheryl K.; Antes, Alison L.; Wang, Xiaoqian; Caughron, Jared J.; Thiel, Chase E.; Mumford, Michael D.

    2010-01-01

    This study examined the role of key causal analysis strategies in forecasting and ethical decision-making. Undergraduate participants took on the role of the key actor in several ethical problems and were asked to identify and analyze the causes, forecast potential outcomes, and make a decision about each problem. Time pressure and analytic mindset were manipulated while participants worked through these problems. The results indicated that forecast quality was associated with decision ethicality, and the identification of the critical causes of the problem was associated with both higher quality forecasts and higher ethicality of decisions. Neither time pressure nor analytic mindset impacted forecasts or ethicality of decisions. Theoretical and practical implications of these findings are discussed. PMID:20352056

  17. Analytical solutions for sequentially coupled one-dimensional reactive transport problems Part I: Mathematical derivations

    NASA Astrophysics Data System (ADS)

    Srinivasan, V.; Clement, T. P.

    2008-02-01

    Multi-species reactive transport equations coupled through sorption and sequential first-order reactions are commonly used to model sites contaminated with radioactive wastes, chlorinated solvents and nitrogenous species. Although researchers have been attempting to solve various forms of these reactive transport equations for over 50 years, a general closed-form analytical solution to this problem is not available in the published literature. In Part I of this two-part article, we derive a closed-form analytical solution to this problem for spatially-varying initial conditions. The proposed solution procedure employs a combination of Laplace and linear transform methods to uncouple and solve the system of partial differential equations. Two distinct solutions are derived for Dirichlet and Cauchy boundary conditions each with Bateman-type source terms. We organize and present the final solutions in a common format that represents the solutions to both boundary conditions. In addition, we provide the mathematical concepts for deriving the solution within a generic framework that can be used for solving similar transport problems.
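    In the zero-dimensional limit (no advection or dispersion), the sequential first-order network described above reduces to the classical Bateman equations, which make a compact check case. The sketch below compares the two-species closed form against a direct numerical integration; the rate constants are arbitrary and this is not the paper's full transport solution.

```python
import math

def bateman_two_species(c0, k1, k2, t):
    """Closed-form concentrations for A --k1--> B --k2--> (sequential
    first-order decay, k1 != k2), starting from pure A at c0."""
    c1 = c0 * math.exp(-k1 * t)
    c2 = c0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return c1, c2

def integrate(c0, k1, k2, t, n=100000):
    """Forward-Euler reference integration of the same rate equations."""
    dt = t / n
    c1, c2 = c0, 0.0
    for _ in range(n):
        c1, c2 = c1 + dt * (-k1 * c1), c2 + dt * (k1 * c1 - k2 * c2)
    return c1, c2
```

    The Laplace-transform route used in the paper uncouples the chained species in exactly the same way this closed form does, but with transport operators included.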

  18. Why does the sign problem occur in evaluating the overlap of HFB wave functions?

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Oi, Makito; Shimizu, Noritaka

    2018-04-01

    For the overlap matrix element between Hartree-Fock-Bogoliubov states, there are two analytically different formulae: one with the square root of the determinant (the Onishi formula) and the other with the Pfaffian (Robledo's Pfaffian formula). The former is two-valued as a complex function and hence leaves the sign of the norm overlap undetermined (the so-called sign problem of the Onishi formula). The latter does not suffer from the sign problem. The derivations of these two formulae are so different that it is unclear why the resultant formulae possess different analytical properties. In this paper, we discuss the reason for the difference within a consistent framework based on the linked cluster theorem and the product-sum identity for the Pfaffian. Through this discussion, we elucidate the source of the sign problem in the Onishi formula. We also point out that different summation methods of series expansions may result in analytically different formulae.
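    The sign ambiguity can already be seen for a 4x4 skew-symmetric matrix, where the Pfaffian has a simple closed form and satisfies pf(A)^2 = det(A): the square root of the determinant alone cannot distinguish pf = +8 from -8. The matrix below is an arbitrary numerical example, not an HFB overlap matrix.

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for
    small matrices; used here only as an independent check)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def pfaffian4(a):
    """Closed-form Pfaffian of a 4x4 skew-symmetric matrix:
    pf(A) = a01*a23 - a02*a13 + a03*a12."""
    return a[0][1] * a[2][3] - a[0][2] * a[1][3] + a[0][3] * a[1][2]

A = [[ 0,  1,  2, 3],
     [-1,  0,  4, 5],
     [-2, -4,  0, 6],
     [-3, -5, -6, 0]]
# pf(A) = 1*6 - 2*5 + 3*4 = 8, while sqrt(det(A)) = sqrt(64) = +-8
```

    A determinant-based formula must pick one branch of the square root and so carries an undetermined sign; the Pfaffian is a single-valued polynomial in the matrix entries and fixes it, which is the essential point of Robledo's formula.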

  19. Topics in Chemical Instrumentation: An Introduction to Supercritical Fluid Chromatography--Part 2. Applications and Future Trends.

    ERIC Educational Resources Information Center

    Palmieri, Margo D.

    1989-01-01

    Discussed are selected application and future trends in supercritical fluid chromatography (SFC). The greatest application for SFC involves those analytes that are difficult to separate using GC or LC methods. Optimum conditions for SFC are examined. Provided are several example chromatograms. (MVL)

  20. A Balanced Accuracy Fitness Function Leads to Robust Analysis Using Grammatical Evolution Neural Networks in the Case of Class Imbalance

    EPA Science Inventory

    The identification and characterization of genetic and environmental factors that predict common, complex disease is a major goal of human genetics. The ubiquitous nature of epistatic interaction in the underlying genetic etiology of such disease presents a difficult analytical ...

  1. Learning to See: Enhancing Student Learning through Videotaped Feedback

    ERIC Educational Resources Information Center

    Yakura, Elaine K.

    2009-01-01

    Feedback is crucial to developing skills, but meaningful feedback is difficult to provide. Classroom videotaping can provide effective feedback on student performance, but for video feedback to be most helpful, students must develop a type of "visual intelligence"--analytical skills that increase critical thinking and self-awareness. The author…

  2. Dynamic Soaring: Aerodynamics for Albatrosses

    ERIC Educational Resources Information Center

    Denny, Mark

    2009-01-01

    Albatrosses have evolved to soar and glide efficiently. By maximizing their lift-to-drag ratio "L/D", albatrosses can gain energy from the wind and can travel long distances with little effort. We simplify the difficult aerodynamic equations of motion by assuming that albatrosses maintain a constant "L/D". Analytic solutions to the simplified…

  3. Keystroke Logging in Writing Research: Using Inputlog to Analyze and Visualize Writing Processes

    ERIC Educational Resources Information Center

    Leijten, Marielle; Van Waes, Luuk

    2013-01-01

    Keystroke logging has become instrumental in identifying writing strategies and understanding cognitive processes. Recent technological advances have refined logging efficiency and analytical outputs. While keystroke logging allows for ecological data collection, it is often difficult to connect the fine grain of logging data to the underlying…

  4. Colour Mathematics: With Graphs and Numbers

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2009-01-01

    The different combinations involved in additive and subtractive colour mixing can often be difficult for students to remember. Using transmission graphs for filters of the primary colours and a numerical scheme to write out the relationships are good exercises in analytical thinking that can help students recall the combinations rather than just…

  5. Laboratory Investigations Of Mechanisms For 1,4-Dioxane Destruction By Ozone In Water (Presentation)

    EPA Science Inventory

    Advances in analytical detection methods have made it possible to quantify 1,4-dioxane contamination in groundwater, even at well-characterized sites where it had not been previously detected. Although 1,4-dioxane is difficult to treat because of its chemical and physical propert...

  6. Surprises and insights from long-term aquatic datasets and experiments

    Treesearch

    Walter K. Dodds; Christopher T. Robinson; Evelyn E. Gaiser; Gretchen J.A. Hansen; Heather Powell; Joseph M. Smith; Nathaniel B. Morse; Sherri L. Johnson; Stanley V. Gregory; Tisza Bell; Timothy K. Kratz; William H. McDowell

    2012-01-01

    Long-term research on freshwater ecosystems provides insights that can be difficult to obtain from other approaches. Widespread monitoring of ecologically relevant water-quality parameters spanning decades can facilitate important tests of ecological principles. Unique long-term data sets and analytical tools are increasingly available, allowing for powerful and...

  7. A conceptual framework to support exposure science research and complete the source-to-outcome continuum for risk assessment

    EPA Science Inventory

    While knowledge of exposure is fundamental to assessing and mitigating risks, exposure information has been costly and difficult to generate. Driven by major scientific advances in analytical methods, biomonitoring, computational tools, and a newly articulated vision for a great...

  8. Laboratory Investigation Of Mechanisms For 1,4-Dioxane Destruction By Ozone In Water

    EPA Science Inventory

    Advances in analytical detection methods have made it possible to quantify 1,4-dioxane contamination in groundwater, even at well-characterized sites where it had not been previously detected. Although 1,4-dioxane is difficult to treat because of its chemical and physical proper...

  9. Applying the Bootstrap to Taxometric Analysis: Generating Empirical Sampling Distributions to Help Interpret Results

    ERIC Educational Resources Information Center

    Ruscio, John; Ruscio, Ayelet Meron; Meron, Mati

    2007-01-01

    Meehl's taxometric method was developed to distinguish categorical and continuous constructs. However, taxometric output can be difficult to interpret because expected results for realistic data conditions and differing procedural implementations have not been derived analytically or studied through rigorous simulations. By applying bootstrap…

  10. Solution of magnetic field and eddy current problem induced by rotating magnetic poles (abstract)

    NASA Astrophysics Data System (ADS)

    Liu, Z. J.; Low, T. S.

    1996-04-01

    The magnetic field and eddy current problems induced by rotating permanent magnet poles occur in electromagnetic dampers, magnetic couplings, and many other devices. Although numerical techniques such as finite element methods can be exploited to study various features of these problems, including heat generation and drag torque development, the analytical solution is always of interest to designers, since it helps them gain insight into the interdependence of the parameters involved and provides an efficient design tool. Previous work showed that the solution of the eddy current problem due to linearly moving magnet poles can give a satisfactory approximation for the eddy current problem due to rotating fields. In many practical cases, however, especially when the number of magnet poles is small, the geometry produces a significant flux-focusing effect, and the above approximation can lead to marked errors in the theoretical predictions of device performance. Bernot et al. recently described an analytical solution in a polar coordinate system where the radial field is excited by a time-varying source. This article discusses an analytical solution of the magnetic field and eddy current problems induced by moving magnet poles in radial field machines. The theoretical predictions obtained from this method are compared with results from finite element calculations, and the validity of the method is also checked by comparing the theoretical predictions with measurements from a test machine. It is shown that the introduced solution leads to a significant improvement in the air gap field prediction compared with the analytical solution that models eddy current problems induced by linearly moving magnet poles.

  11. Problem of glider models

    NASA Technical Reports Server (NTRS)

    Lippisch, Espenlaub

    1922-01-01

    Any one endeavoring to solve the problem of soaring flight is confronted not only by structural difficulties, but also by the often far more difficult aerodynamic problem of flight properties and efficiency, which can only be determined by experimenting with the finished glider.

  12. Comparison of Rating Scales in the Development of Patient-Reported Outcome Measures for Children with Eye Disorders.

    PubMed

    Hatt, Sarah R; Leske, David A; Wernimont, Suzanne M; Birch, Eileen E; Holmes, Jonathan M

    2017-03-01

    A rating scale is a critical component of patient-reported outcome instrument design, but the optimal rating scale format for pediatric use has not been investigated. We compared rating scale performance when administering potential questionnaire items to children with eye disorders and their parents. Three commonly used rating scales were evaluated: frequency (never, sometimes, often, always), severity (not at all, a little, some, a lot), and difficulty (not difficult, a little difficult, difficult, very difficult). Ten patient-derived items were formatted for each rating scale, and rating scale testing order was randomized. Both child and parent were asked to comment on any problems with, or a preference for, a particular scale. Any confusion about options or inability to answer was recorded. Twenty-one children, aged 5-17 years, with strabismus, amblyopia, or refractive error were recruited, each with one of their parents. Of the first 10 children, 4 (40%) had problems using the difficulty scale, compared with 1 (10%) using frequency, and none using severity. The difficulty scale was modified, replacing the word "difficult" with "hard." Eleven additional children (plus parents) then completed all 3 questionnaires. No children had problems using any scale. Four (36%) parents had problems using the difficulty ("hard") scale and 1 (9%) with frequency. Regarding preference, 6 (55%) of 11 children and 5 (50%) of 10 parents preferred using the frequency scale. Children and parents found the frequency scale and question format to be the most easily understood. Children and parents also expressed preference for the frequency scale, compared with the difficulty and severity scales. We recommend frequency rating scales for patient-reported outcome measures in pediatric populations.

  13. A multigroup radiation diffusion test problem: Comparison of code results with analytic solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shestakov, A I; Harte, J A; Bolstad, J H

    2006-12-21

    We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.

  14. HIPPI: highly accurate protein family classification with ensembles of HMMs.

    PubMed

    Nguyen, Nam-Phuong; Nute, Michael; Mirarab, Siavash; Warnow, Tandy

    2016-11-11

    Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile Hidden Markov models can better represent multiple sequence alignments than a single profile Hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile Hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp .
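The ensemble idea can be caricatured in a few lines. Real HIPPI builds ensembles of HMMER profile HMMs from subsets of a family's alignment; in this sketch, toy position-specific log-odds tables stand in for the HMMs, and all family names and scores below are invented for illustration:

```python
def profile_score(profile, seq):
    """Sum of per-position log-odds scores for an aligned query;
    residues absent from a column get a flat mismatch penalty."""
    return sum(col.get(ch, -2.0) for col, ch in zip(profile, seq))

def family_score(ensemble, seq):
    """Ensemble score: the best score over all member profiles,
    so divergent subfamilies each get a chance to match the query."""
    return max(profile_score(p, seq) for p in ensemble)

def classify(families, seq):
    """Assign the query to the family whose ensemble scores highest."""
    return max(families, key=lambda name: family_score(families[name], seq))
```

Taking the maximum over an ensemble, rather than scoring against a single merged profile, is what lets a heterogeneous family keep good recall for distant or fragmentary queries.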

  15. Structured decision making as a framework for large-scale wildlife harvest management decisions

    USGS Publications Warehouse

    Robinson, Kelly F.; Fuller, Angela K.; Hurst, Jeremy E.; Swift, Bryan L.; Kirsch, Arthur; Farquhar, James F.; Decker, Daniel J.; Siemer, William F.

    2016-01-01

    Fish and wildlife harvest management at large spatial scales often involves making complex decisions with multiple objectives and difficult tradeoffs, population demographics that vary spatially, competing stakeholder values, and uncertainties that might affect management decisions. Structured decision making (SDM) provides a formal decision analytic framework for evaluating difficult decisions by breaking decisions into component parts and separating the values of stakeholders from the scientific evaluation of management actions and uncertainty. The result is a rigorous, transparent, and values-driven process. This decision-aiding process provides the decision maker with a more complete understanding of the problem and the effects of potential management actions on stakeholder values, as well as how key uncertainties can affect the decision. We use a case study to illustrate how SDM can be used as a decision-aiding tool for management decision making at large scales. We evaluated alternative white-tailed deer (Odocoileus virginianus) buck-harvest regulations in New York designed to reduce harvest of yearling bucks, taking into consideration the values of the state wildlife agency responsible for managing deer, as well as deer hunters. We incorporated tradeoffs about social, ecological, and economic management concerns throughout the state. Based on the outcomes of predictive models, expert elicitation, and hunter surveys, the SDM process identified management alternatives that optimized competing objectives. The SDM process provided biologists and managers insight about aspects of the buck-harvest decision that helped them adopt a management strategy most compatible with diverse hunter values and management concerns.

  16. Using Technology to Meet the Developmental Needs of Deaf Students To Improve Their Mathematical Word Problem Solving Skills.

    ERIC Educational Resources Information Center

    Kelly, Ronald R.

    2003-01-01

    Presents "Project Solve," a web-based problem-solving instruction and guided practice for mathematical word problems. Discusses implications for college students for whom reading and comprehension of mathematical word problem solving are difficult, especially learning disabled students. (Author/KHR)

  17. Must "Hard Problems" Be Hard?

    ERIC Educational Resources Information Center

    Kolata, Gina

    1985-01-01

    To determine how hard it is for computers to solve problems, researchers have classified groups of problems (polynomial hierarchy) according to how much time they seem to require for their solutions. A difficult and complex proof is offered which shows that a combinatorial approach (using Boolean circuits) may resolve the problem. (JN)

  18. The Interference of Stereotype Threat with Women's Generation of Mathematical Problem-Solving Strategies.

    ERIC Educational Resources Information Center

    Quinn, Diane M.; Spencer, Steven J.

    2001-01-01

    Investigated whether stereotype threat would depress college women's math performance. In one test, men outperformed women when solving word problems, though women performed equally when problems were converted into numerical equivalents. In another test, participants solved difficult problems in high or reduced stereotype threat conditions. Women…

  19. The Role of Expository Writing in Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Craig, Tracy S.

    2016-01-01

    Mathematical problem-solving is notoriously difficult to teach in a standard university mathematics classroom. The project on which this article reports aimed to investigate the effect of the writing of explanatory strategies in the context of mathematical problem solving on problem-solving behaviour. This article serves to describe the…

  20. Using KIE To Help Students Develop Shared Criteria for House Designs.

    ERIC Educational Resources Information Center

    Cuthbert, Alex; Hoadley, Christopher M.

    How can students develop shared criteria for problems that have no "right" answer? Ill-structured problems of this sort are called design problems. Like portfolio projects, these problems are difficult to evaluate for both teachers and students. This investigation contrasts two methods for developing shared criteria for project…

  1. Derivation of phase functions from multiply scattered sunlight transmitted through a hazy atmosphere

    NASA Technical Reports Server (NTRS)

    Weinman, J. A.; Twitty, J. T.; Browning, S. R.; Herman, B. M.

    1975-01-01

    The intensity of sunlight multiply scattered in model atmospheres is derived from the equation of radiative transfer by an analytical small-angle approximation. The approximate analytical solutions are compared to rigorous numerical solutions of the same problem. Results obtained from an aerosol-laden model atmosphere are presented. Agreement between the rigorous and the approximate solutions is found to be within a few per cent. The analytical solution to the problem which considers an aerosol-laden atmosphere is then inverted to yield a phase function which describes a single scattering event at small angles. The effect of noisy data on the derived phase function is discussed.

  2. Swarm intelligence metaheuristics for enhanced data analysis and optimization.

    PubMed

    Hanrahan, Grady

    2011-09-21

    The swarm intelligence (SI) computing paradigm has proven itself as a comprehensive means of solving complicated analytical chemistry problems by emulating biologically-inspired processes. As global optimum search metaheuristics, associated algorithms have been widely used in training neural networks, function optimization, prediction and classification, and in a variety of process-based analytical applications. The goal of this review is to provide readers with critical insight into the utility of swarm intelligence tools as methods for solving complex chemical problems. Consideration will be given to algorithm development, ease of implementation and model performance, detailing subsequent influences on a number of application areas in the analytical, bioanalytical and detection sciences.
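A minimal particle swarm optimizer illustrates the SI metaheuristic pattern the review surveys. This is a generic textbook PSO with fixed inertia and acceleration weights, not any specific algorithm from the review:

```python
import random

def pso(f, dim, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer for an unconstrained minimization.
    Illustrative only; production SI toolkits add inertia schedules,
    constriction factors, neighborhood topologies, etc."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The same loop, with the objective swapped for a network's training loss or a calibration-model error, is how such metaheuristics are typically attached to chemometric problems.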

  3. Preformulation considerations for controlled release dosage forms. Part II. Selected candidate support.

    PubMed

    Chrzanowski, Frank

    2008-01-01

    Practical examples of preformulation support of the form selected for formulation development are provided using several drug substances (DSs). The examples include determination of solubility vs. pH, particularly over the range pH 1 to 8, because of its relationship to gastrointestinal (GI) conditions and dissolution method development. The advantages of equilibrium solubility and trial solubility methods are described. The equilibrium method is related to detecting polymorphism and the trial solubility method to simplifying difficult solubility problems. An example of two polymorphs existing in mixtures of DS is presented in which one of the forms is very unstable. Accelerated stability studies are used in conjunction with HPLC and quantitative X-ray powder diffraction (QXRD) to demonstrate the differences in chemical and polymorphic stabilities. The results from two model excipient compatibility methods are compared to determine which has better predictive accuracy for room temperature stability. A DSC (calorimetric) method and an isothermal stress with quantitative analysis (ISQA) method that simulates wet granulation conditions were compared using a 2 year room temperature sample set as reference. An example of a pH stability profile for understanding stability and extrapolating stability to other environments is provided. The pH stability of omeprazole and lansoprazole, which are extremely unstable in acidic and even mildly acidic conditions, is related to the formulation of delayed release dosage forms and the resolution of the problem of free carboxyl groups from the enteric coating polymers reacting with the DSs. Dissolution method requirements for CR dosage forms are discussed. The applicability of a modified disintegration time (DT) apparatus for supporting CR dosage form development of a pH-sensitive DS at a specific pH, such as duodenal pH 5.6, is related. This method is applicable for DSs such as peptides, proteins, enzymes and natural products where physical observation can be used in place of a difficult-to-perform analytical method, saving resources and providing rapid preformulation support.

  4. Optimum design of structures subject to general periodic loads

    NASA Technical Reports Server (NTRS)

    Reiss, Robert; Qian, B.

    1989-01-01

    A simplified version of Icerman's problem regarding the design of structures subject to a single harmonic load is discussed. The nature of the restrictive conditions that must be placed on the design space in order to ensure an analytic optimum is discussed in detail. Icerman's problem is then extended to include multiple forcing functions with different driving frequencies, and the conditions that must now be placed upon the design space to ensure an analytic optimum are again discussed. An important finding is that all solutions to the optimality condition (analytic stationary designs) are local optima, but the global optimum may well be non-analytic. The more general problem of distributing the fixed mass of a linear elastic structure subject to general periodic loads in order to minimize some measure of the steady-state deflection is also considered. This response is explicitly expressed in terms of Green's functional and the abstract operators defining the structure. The optimality criterion is derived by differentiating the response with respect to the design parameters. The theory is applicable to finite element as well as distributed parameter models.

  5. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.

  6. A Simple Analytical Model for Magnetization and Coercivity of Hard/Soft Nanocomposite Magnets

    DOE PAGES

    Park, Jihoon; Hong, Yang-Ki; Lee, Woncheol; ...

    2017-07-10

    Here, we present a simple analytical model to estimate the magnetization (σs) and intrinsic coercivity (Hci) of a hard/soft nanocomposite magnet using the mass fraction. Previously proposed models are based on the volume fraction of the hard phase of the composite, but it is difficult to measure the volume of the hard or soft phase material of a composite. We synthesized Sm2Co7/Fe-Co, MnAl/Fe-Co, MnBi/Fe-Co, and BaFe12O19/Fe-Co composites for characterization of their σs and Hci. The experimental results are in good agreement with the present model. Therefore, this analytical model can be extended to predict the maximum energy product (BH)max of hard/soft composites.
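The appeal of mass fractions is that they are directly weighable. The simplest version of such a model is a mass-weighted mixture rule, sketched below as an assumption for illustration; the published model's exact expressions may differ:

```python
def mixture_magnetization(w_hard, sigma_hard, sigma_soft):
    """Mass-weighted saturation magnetization (e.g. emu/g) of a
    hard/soft composite, where w_hard is the hard-phase mass fraction.
    A linear mixture rule -- a simplifying assumption for illustration,
    not necessarily the exact model of the cited paper."""
    if not 0.0 <= w_hard <= 1.0:
        raise ValueError("mass fraction must lie in [0, 1]")
    return w_hard * sigma_hard + (1.0 - w_hard) * sigma_soft
```

Because specific magnetization is a per-mass quantity, the rule needs no phase-volume measurement, which is exactly the practical difficulty the abstract raises for volume-fraction models.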

  7. L-hop percolation on networks with arbitrary degree distributions and its applications

    NASA Astrophysics Data System (ADS)

    Shang, Yilun; Luo, Weiliang; Xu, Shouhuai

    2011-09-01

    Site percolation has been used to help understand analytically the robustness of complex networks in the presence of random node deletion (or failure). In this paper we move a further step beyond random node deletion by considering that a node can be deleted because it is chosen or because it is within some L-hop distance of a chosen node. Using the generating functions approach, we present analytic results on the percolation threshold as well as the mean size, and size distribution, of nongiant components of complex networks under such operations. The introduction of parameter L is both conceptually interesting because it accommodates a sort of nonindependent node deletion, which is often difficult to tackle analytically, and practically interesting because it offers useful insights for cybersecurity (such as botnet defense).
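The L-hop deletion operation itself is straightforward to simulate, which gives a useful numerical check on generating-function predictions. A sketch using plain BFS on an adjacency-list graph:

```python
from collections import deque

def l_hop_delete(adj, chosen, L):
    """Remove every chosen node together with all nodes within L hops
    of it (the BFS ball of radius L); returns the surviving subgraph.
    L = 0 reduces to ordinary site percolation on the chosen set."""
    doomed = set()
    for s in chosen:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == L:
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        doomed |= dist.keys()
    return {u: [v for v in nb if v not in doomed]
            for u, nb in adj.items() if u not in doomed}

def component_sizes(adj):
    """Connected-component sizes of the surviving graph."""
    seen, sizes = set(), []
    for s in adj:
        if s in seen:
            continue
        q, count = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft()
            count += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        sizes.append(count)
    return sizes
```

Running this over random graphs with a given degree distribution and averaging the component-size statistics is the Monte Carlo counterpart of the paper's analytic results.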

  8. The Purpose of Analytical Models from the Perspective of a Data Provider.

    ERIC Educational Resources Information Center

    Sheehan, Bernard S.

    The purpose of analytical models is to reduce complex institutional management problems and situations to simpler proportions and compressed time frames so that human skills of decision makers can be brought to bear most effectively. Also, modeling cultivates the art of management by forcing explicit and analytical consideration of important…

  9. Odor Recognition vs. Classification in Artificial Olfaction

    NASA Astrophysics Data System (ADS)

    Raman, Baranidharan; Hertz, Joshua; Benkstein, Kurt; Semancik, Steve

    2011-09-01

    Most studies in chemical sensing have focused on the problem of precise identification of chemical species to which the system was exposed during the training phase (the recognition problem). However, generalization of training to predict the chemical composition of untrained gases based on their similarity with analytes in the training set (the classification problem) has received very limited attention. These two analytical tasks pose conflicting constraints on the system. While correct recognition requires detection of molecular features that are unique to an analyte, generalization to untrained chemicals requires detection of features that are common across a desired class of analytes. A simple solution that addresses both issues simultaneously can be obtained from biological olfaction, where the odor class and identity information are decoupled and extracted individually over time. Mimicking this approach, we proposed a hierarchical scheme that allowed initial discrimination between broad chemical classes (e.g. contains oxygen) followed by finer refinements using additional data into sub-classes (e.g. ketones vs. alcohols) and, eventually, specific compositions (e.g. ethanol vs. methanol) [1]. We validated this approach using an array of temperature-controlled chemiresistors. We demonstrated that a small set of training analytes is sufficient to allow generalization to novel chemicals and that the scheme provides robust categorization despite aging. Here, we provide further characterization of this approach.

  10. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience with the difference between performing an external standardization and a standard addition.
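The two standardization methods differ in where the calibration line is anchored: external standards calibrate in a clean matrix, while standard addition spikes the sample itself and reads the analyte concentration off the x-intercept. A sketch of the standard-addition calculation, assuming a linear response and negligible dilution:

```python
def standard_addition(added, signal):
    """Ordinary least-squares fit of signal vs. added standard
    concentration; the analyte concentration in the sample is the
    magnitude of the x-intercept, b/m (linear response assumed)."""
    n = len(added)
    mx = sum(added) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in added)
    sxy = sum((x - mx) * (y - my) for x, y in zip(added, signal))
    m = sxy / sxx          # slope = method sensitivity in the sample matrix
    b = my - m * mx        # intercept = signal from the analyte alone
    return b / m           # estimated sample concentration
```

Because the slope is measured in the sample's own matrix, matrix effects cancel out of the ratio b/m, which is the pedagogical point the experiment is built around.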

  11. Landing A Man Downtown

    ERIC Educational Resources Information Center

    Waters, W. G., II

    1973-01-01

    Analyzes the urban transport problems in comparison with those involved in a journey to the Moon. Indicates that the problem of enabling man to travel through the inner space of conurbations may prove to be more difficult than the transport problem of space travel. (CC)

  12. "But You Look So Good!": Managing Specific Issues

    MedlinePlus

    ... and resources about handling bladder or bowel problems. Self-esteem “I think the most difficult thing to cope ... feel. It’s understandable then that MS can impact self-esteem and confidence. “It’s difficult to feel powerful, competent, ...

  13. Selecting materialized views using random algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi

    2007-04-01

    The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. In this paper, we therefore develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time. We call this the query_cost view selection problem. First, the cost graph and cost model of the query_cost view selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic search progresses, legal solutions become increasingly difficult to produce, so many candidate solutions are eliminated and the time needed to generate valid solutions grows. An improved algorithm is therefore presented in this paper, combining simulated annealing with the genetic algorithm to solve the query_cost view selection problem. Finally, simulation experiments test the effectiveness and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work well in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
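The simulated-annealing half of such a hybrid can be sketched on a toy version of the constrained selection: minimize total maintenance cost of the materialized set while total query response time stays within a limit. The cost model below is invented for illustration; real systems derive these numbers from a cost graph:

```python
import math
import random

def sa_select_views(maint, qtime_with, qtime_without, limit,
                    iters=5000, seed=0):
    """Simulated-annealing sketch of the query_cost view selection
    problem. maint[i] is view i's maintenance cost; qtime_with[i] /
    qtime_without[i] are query times when view i is / is not
    materialized. Infeasible states pay a large penalty."""
    rng = random.Random(seed)
    n = len(maint)

    def qcost(sel):
        return sum(qtime_with[i] if sel[i] else qtime_without[i]
                   for i in range(n))

    def energy(sel):
        mcost = sum(m for i, m in enumerate(maint) if sel[i])
        return mcost + max(0.0, qcost(sel) - limit) * 1e6

    sel = [True] * n                      # start by materializing everything
    best, best_e = sel[:], energy(sel)
    temp = 1.0
    for _ in range(iters):
        cand = sel[:]
        cand[rng.randrange(n)] ^= True    # flip one view in or out
        d = energy(cand) - energy(sel)
        if d <= 0 or rng.random() < math.exp(-d / temp):
            sel = cand
            if energy(sel) < best_e:
                best, best_e = sel[:], energy(sel)
        temp = max(1e-3, temp * 0.999)    # geometric cooling schedule
    return best, best_e
```

In the paper's hybrid, moves like this flip are interleaved with genetic crossover and mutation; annealing's occasional uphill acceptance is what keeps the search from stalling when legal offspring become rare.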

  14. How to conduct External Quality Assessment Schemes for the pre-analytical phase?

    PubMed

    Kristensen, Gunn B B; Aakre, Kristin Moberg; Kristoffersen, Ann Helen; Sandberg, Sverre

    2014-01-01

    In laboratory medicine, several studies have described the most frequent errors in the different phases of the total testing process, and a large proportion of these errors occur in the pre-analytical phase. For the analytical phase, External Quality Assessment (EQA) organizations operating in most countries have for decades run schemes that register errors and give subsequent feedback to the participants. The aim of this paper is to present an overview of different types of EQA schemes for the pre-analytical phase and to give examples of some existing schemes. So far, very few EQA organizations have focused on the pre-analytical phase, and most EQA organizations do not offer pre-analytical EQA schemes (EQAS). Pre-analytical EQAS are more difficult to perform and standardize, and accreditation bodies do not ask laboratories for results from such schemes. However, some ongoing EQA programs for the pre-analytical phase do exist, and some examples are given in this paper. The methods used can be divided into three types: collecting information about pre-analytical laboratory procedures, circulating real samples to collect information about interferences that might affect the measurement procedure, or registering actual laboratory errors and relating these to quality indicators. These three types have different focuses and different implementation challenges, and a combination of the three is probably necessary to detect and monitor the wide range of errors occurring in the pre-analytical phase.

  15. On the Outer Edges of Protoplanetary Dust Disks

    NASA Astrophysics Data System (ADS)

    Birnstiel, Tilman; Andrews, Sean M.

    2014-01-01

    The expectation that aerodynamic drag will force the solids in a gas-rich protoplanetary disk to spiral in toward the host star on short timescales is one of the fundamental problems in planet formation theory. The nominal efficiency of this radial drift process is in conflict with observations, suggesting that an empirical calibration of solid transport mechanisms in a disk is highly desirable. However, the fact that both radial drift and grain growth produce a similar particle size segregation in a disk (such that larger particles are preferentially concentrated closer to the star) makes it difficult to disentangle a clear signature of drift alone. We highlight a new approach, by showing that radial drift leaves a distinctive "fingerprint" in the dust surface density profile that is directly accessible to current observational facilities. Using an analytical framework for dust evolution, we demonstrate that the combined effects of drift and (viscous) gas drag naturally produce a sharp outer edge in the dust distribution (or, equivalently, a sharp decrease in the dust-to-gas mass ratio). This edge feature forms during the earliest phase in the evolution of disk solids, before grain growth in the outer disk has made much progress, and is preserved over longer timescales when both growth and transport effects are more substantial. The key features of these analytical models are reproduced in detailed numerical simulations, and are qualitatively consistent with recent millimeter-wave observations that find gas/dust size discrepancies and steep declines in dust continuum emission in the outer regions of protoplanetary disks.

  16. Changes in gene expression with sleep.

    PubMed

    Thimgan, Matthew S; Duntley, Stephen P; Shaw, Paul J

    2011-10-15

    There is general agreement within the sleep community and among public health officials of the need for an accessible biomarker of sleepiness. As the foregoing discussions emphasize, however, it may be more difficult to reach consensus on how to define such a biomarker than to identify candidate molecules that can then be evaluated to determine if they might be useful to solve a variety of real-world problems related to insufficient sleep. With that in mind, a goal of our laboratories has been to develop a rational strategy to expedite the identification of candidate biomarkers. We began with the assumption that since both the genetic and environmental context of a gene can influence its behavior, an effective test of sleep loss will likely be composed of a panel of multiple biomarkers. That is, we believe that it is premature to exclude a candidate analyte simply because it might also be modulated in response to other conditions (e.g., illness, metabolism, sympathetic tone, etc.). Our next assumption was that an easily accessible biomarker would be more useful in real-world settings. Thus, we have focused on saliva, as opposed to urine or blood, as a rich source of biological analytes that can be mined to optimize the chances of bringing a biomarker out into the field. Finally, we recognize that conducting validation studies in humans can be expensive and time consuming. Thus, we have exploited genetic and pharmacological tools in the model organism Drosophila melanogaster to more fully characterize the behavior of the most exciting candidate biomarkers.

  17. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.
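
    IBBA's internals are not described in the abstract; as a hedged illustration of the deterministic interval branch-and-bound idea that this class of exact global optimization software is built on, here is a minimal one-dimensional sketch. The objective function and tolerances are invented for the example.

```python
import heapq

def f(x):
    """Toy objective with two global minima (at x = +/-1, value -0.5)."""
    return x ** 4 - 2.0 * x ** 2 + 0.5

def isq(lo, hi):
    """Interval enclosure of x^2 over [lo, hi]."""
    a, b = lo * lo, hi * hi
    if lo <= 0.0 <= hi:
        return 0.0, max(a, b)
    return min(a, b), max(a, b)

def f_lower(lo, hi):
    """Guaranteed lower bound of f over [lo, hi] by interval arithmetic."""
    s_lo, s_hi = isq(lo, hi)                # enclosure of x^2
    return s_lo * s_lo - 2.0 * s_hi + 0.5   # x^4 = (x^2)^2, with s_lo >= 0

def global_min(lo, hi, tol=1e-6):
    """Deterministic branch and bound: split boxes, prune any box whose
    interval lower bound exceeds the best value found so far."""
    best = min(f(lo), f(hi))
    heap = [(f_lower(lo, hi), lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best:                       # box cannot beat the incumbent
            continue
        mid = 0.5 * (a + b)
        best = min(best, f(mid))            # sample midpoint as upper bound
        if b - a >= tol:
            for c, d in ((a, mid), (mid, b)):
                clb = f_lower(c, d)
                if clb <= best:
                    heapq.heappush(heap, (clb, c, d))
    return best
```

    Unlike stochastic methods, the pruning test uses rigorous interval bounds, so (up to rounding) the returned value is a certified enclosure of the global minimum rather than a best-effort estimate.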

  18. Singular value decomposition for the truncated Hilbert transform

    NASA Astrophysics Data System (ADS)

    Katsevich, A.

    2010-11-01

    Starting from a breakthrough result by Gelfand and Graev, inversion of the Hilbert transform became a very important tool for image reconstruction in tomography. In particular, their result is useful when the tomographic data are truncated and one deals with an interior problem. As was established recently, the interior problem admits a stable and unique solution when some a priori information about the object being scanned is available. The most common approach to solving the interior problem is based on converting it to the Hilbert transform and performing analytic continuation. Depending on what type of tomographic data are available, one gets different Hilbert inversion problems. In this paper, we consider two such problems and establish singular value decomposition for the operators involved. We also propose algorithms for performing analytic continuation.
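
    The paper's operators and their singular value decompositions are not reproduced here; as a hedged numerical illustration of inspecting the spectrum of such operators, the sketch below discretizes the finite Hilbert transform very crudely (zeroing the singular diagonal as a rough principal-value treatment, a choice made only for this example) and estimates its largest singular value by power iteration.

```python
import math

def hilbert_matrix(n, a=-1.0, b=1.0):
    """Crude midpoint collocation of (Hf)(x) = (1/pi) p.v. int f(y)/(y-x) dy
    on [a, b]; the singular diagonal is simply zeroed."""
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    return [[0.0 if i == j else h / (math.pi * (xs[j] - xs[i]))
             for j in range(n)] for i in range(n)]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def top_singular_value(m, iters=300):
    """Largest singular value via power iteration on m^T m."""
    mt = [list(col) for col in zip(*m)]     # transpose
    v = [1.0] * len(m[0])
    for _ in range(iters):
        w = matvec(mt, matvec(m, v))
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return math.sqrt(sum(x * x for x in matvec(m, v)))
```

    The finite Hilbert transform has operator norm at most 1 on L^2, so the leading singular value of this discretization stays near 1; the rapidly decaying tail of the singular values is what makes the truncated-data inversion problems discussed above severely ill-posed.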

  19. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    PubMed

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  20. Investigation of learning environment for arithmetic word problems by problem posing as sentence integration in Indonesian language

    NASA Astrophysics Data System (ADS)

    Hasanah, N.; Hayashi, Y.; Hirashima, T.

    2017-02-01

    Arithmetic word problems remain one of the most difficult areas in the teaching of mathematics. Learning by problem posing has been suggested as an effective way to improve students' understanding. However, the practice is difficult in the usual classroom because of the extra time needed to assess and give feedback on students' posed problems. To address this issue, we have developed tablet PC software named Monsakun for learning by posing arithmetic word problems based on the Triplet Structure Model. It uses the mechanism of sentence integration, an efficient implementation of problem posing that enables agent assessment of posed problems. The learning environment has been used in actual Japanese elementary school classrooms, and its effectiveness has been confirmed in previous research. In this study, ten Indonesian elementary school students living in Japan participated in a session of learning by problem posing using Monsakun in the Indonesian language. We analyzed their learning activities and show that students were able to interact with the structure of simple word problems in this learning environment. The results of the data analysis and a questionnaire suggest that Monsakun provides an interactive and fun environment for learning by problem posing for Indonesian elementary school students.

  1. Symbolic computation of the Birkhoff normal form in the problem of stability of the triangular libration points

    NASA Astrophysics Data System (ADS)

    Shevchenko, I. I.

    2008-05-01

    The problem of stability of the triangular libration points in the planar circular restricted three-body problem is considered. A software package, intended for normalization of autonomous Hamiltonian systems by means of computer algebra, is designed so that normalization problems of high analytical complexity could be solved. It is used to obtain the Birkhoff normal form of the Hamiltonian in the given problem. The normalization is carried out up to the 6th order of expansion of the Hamiltonian in the coordinates and momenta. Analytical expressions for the coefficients of the normal form of the 6th order are derived. Though intermediary expressions occupy gigabytes of the computer memory, the obtained coefficients of the normal form are compact enough for presentation in typographic format. The analogue of the Deprit formula for the stability criterion is derived in the 6th order of normalization. The obtained floating-point numerical values for the normal form coefficients and the stability criterion confirm the results by Markeev (1969) and Coppola and Rand (1989), while the obtained analytical and exact numeric expressions confirm the results by Meyer and Schmidt (1986) and Schmidt (1989). The given computational problem is solved without constructing a specialized algebraic processor, i.e., the designed computer algebra package has a broad field of applicability.

  2. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose

    2018-03-01

    Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.

  3. Sierra/SolidMechanics 4.48 Verification Tests Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plews, Julia A.; Crane, Nathan K.; de Frias, Gabriel Jose

    Presented in this document is a small portion of the tests that exist in the Sierra/SolidMechanics (Sierra/SM) verification test suite. Most of these tests are run nightly with the Sierra/SM code suite, and the results are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra/SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems. Additional example problems are provided in the Sierra/SM Example Problems Manual. Note that many other verification tests exist in the Sierra/SM test suite but have not yet been included in this manual.

  4. A methodology to enhance electromagnetic compatibility in joint military operations

    NASA Astrophysics Data System (ADS)

    Buckellew, William R.

    The development and validation of an improved methodology to identify, characterize, and prioritize potential joint EMI (electromagnetic interference) interactions and identify and develop solutions to reduce the effects of the interference are discussed. The methodology identifies potential EMI problems using results from field operations, historical data bases, and analytical modeling. Operational expertise, engineering analysis, and testing are used to characterize and prioritize the potential EMI problems. Results can be used to resolve potential EMI during the development and acquisition of new systems and to develop engineering fixes and operational workarounds for systems already employed. The analytic modeling portion of the methodology is a predictive process that uses progressive refinement of the analysis and the operational electronic environment to eliminate noninterfering equipment pairs, defer further analysis on pairs lacking operational significance, and resolve the remaining EMI problems. Tests are conducted on equipment pairs to ensure that the analytical models provide a realistic description of the predicted interference.

  5. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than the bi-level approaches, but without their computational difficulties. Key words: autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis.

  6. "This Group of Difficult Kids": The Discourse Preservice English Teachers Use to Label Students

    ERIC Educational Resources Information Center

    Salerno, April S.; Kibler, Amanda K.

    2016-01-01

    This study attempts to understand how "achievement gap Discourse" might be present in preservice teachers' (PSTs) Discourse about students they found challenging to teach. Using a Discourse analytic approach, the project considers: How do PSTs describe challenging students in their written reflections? Do PSTs draw on students' multiple…

  7. Stability Criteria for Differential Equations with Variable Time Delays

    ERIC Educational Resources Information Center

    Schley, D.; Shail, R.; Gourley, S. A.

    2002-01-01

    Time delays are an important aspect of mathematical modelling, but often result in highly complicated equations which are difficult to treat analytically. In this paper it is shown how careful application of certain undergraduate tools such as the Method of Steps and the Principle of the Argument can yield significant results. Certain delay…

  8. A View of Rural Schooling through the Eyes of Former Students

    ERIC Educational Resources Information Center

    Pazos, Mercedes Suarez; DePalma, Renee; Membiela, Pedro

    2012-01-01

    Teachers who attended unitary rural schools in northwestern Spain were asked to relate their early school experiences in the form of a personal reflective and analytical narrative. Our analysis of these narratives revealed some strikingly difficult conditions; nevertheless, students tended to relate these hardships with a strong sense of…

  9. Colour mathematics: with graphs and numbers

    NASA Astrophysics Data System (ADS)

    Lo Presto, Michael C.

    2009-07-01

    The different combinations involved in additive and subtractive colour mixing can often be difficult for students to remember. Using transmission graphs for filters of the primary colours and a numerical scheme to write out the relationships are good exercises in analytical thinking that can help students recall the combinations rather than just attempting to memorize them.
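
    The article's own numerical scheme is not reproduced in the snippet above, but the relationships it asks students to write out can be encoded in a few lines. The sketch below is a hedged stand-in that models additive mixing as the union of emitted primaries and subtractive mixing as the intersection of the primaries each filter transmits; the names are chosen for the example.

```python
# Additive mixing combines emitted primaries (union); subtractive mixing is
# what survives a stack of filters (intersection of transmitted primaries).
TRANSMITS = {  # which additive primaries each subtractive primary passes
    "cyan": {"green", "blue"},
    "magenta": {"red", "blue"},
    "yellow": {"red", "green"},
}
NAMES = {
    frozenset({"red", "green", "blue"}): "white",
    frozenset({"red", "green"}): "yellow",
    frozenset({"red", "blue"}): "magenta",
    frozenset({"green", "blue"}): "cyan",
    frozenset({"red"}): "red",
    frozenset({"green"}): "green",
    frozenset({"blue"}): "blue",
    frozenset(): "black",
}

def add_mix(*lights):
    """Additive: union of the primaries present in each light source."""
    out = set()
    for light in lights:
        out |= set(light)
    return NAMES[frozenset(out)]

def filter_mix(*filters):
    """Subtractive: white light through filters; intersect transmissions."""
    out = {"red", "green", "blue"}
    for f in filters:
        out &= TRANSMITS[f]
    return NAMES[frozenset(out)]
```

    For instance, red and green lights mix additively to yellow, while cyan and magenta filters in series transmit only blue, which mirrors the transmission-graph reasoning the article recommends over rote memorization.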

  10. Screening Complex Effluents for Estrogenic Activity with the T47D-Kbluc Cell Bioassay: Assay Optimization and Comparison to In Vivo Responses in Fish

    EPA Science Inventory

    The endocrine activity of complex mixtures of chemicals associated with wastewater treatment plant effluents, runoff from concentrated animal feeding operations (CAFOs), and/or other environmental samples can be difficult to characterize based on analytical chemistry. In vitro bi...

  11. SIMPLE ANALYTICAL MODEL FOR HEAT FLOW IN FRACTURES-APPLICATION TO STEAM ENHANCED REMEDIATION CONDUCTED IN FRACTURED ROCK

    EPA Science Inventory

    Remediation of fractured rock sites contaminated by non-aqueous phase liquids has long been recognized as the most difficult undertaking of any site clean-up. Recent pilot studies conducted at the Edwards Air Force Base in California and the former Loring Air Force Base in Maine ...

  12. SIMPLE ANALYTICAL MODEL FOR HEAT FLOW IN FRACTURES - APPLICATION TO STEAM ENHANCED REMEDIATION CONDUCTED IN FRACTURED ROCK

    EPA Science Inventory

    Remediation of fractured rock sites contaminated by non-aqueous phase liquids has long been recognized as the most difficult undertaking of any site clean-up. Recent pilot studies conducted at the Edwards Air Force Base in California and the former Loring Air Force Base in Maine ...

  13. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    PubMed

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies of the distribution of metallic elements in biological samples are among the most important issues in the field. Many articles are dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging metallic elements in various kinds of biological samples. However, this literature lacks articles reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article is to characterize analytical calibration in the (bio)imaging/mapping of metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis is placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. The Efficacy of Using Diagrams When Solving Probability Word Problems in College

    ERIC Educational Resources Information Center

    Beitzel, Brian D.; Staley, Richard K.

    2015-01-01

    Previous experiments have shown a deleterious effect of visual representations on college students' ability to solve total- and joint-probability word problems. The present experiments used conditional-probability problems, known to be more difficult than total- and joint-probability problems. The diagram group was instructed in how to use tree…

  15. The Development, Implementation, and Evaluation of a Problem Solving Heuristic

    ERIC Educational Resources Information Center

    Lorenzo, Mercedes

    2005-01-01

    Problem-solving is one of the main goals in science teaching and is something many students find difficult. This research reports on the development, implementation and evaluation of a problem-solving heuristic. This heuristic intends to help students to understand the steps involved in problem solving (metacognitive tool), and to provide them…

  16. Procedural versus Content-Related Hints for Word Problem Solving: An Exploratory Study

    ERIC Educational Resources Information Center

    Kock, W. D.; Harskamp, E. G.

    2016-01-01

    For primary school students, mathematical word problems are often more difficult to solve than straightforward number problems. Word problems require reading and analysis skills, and in order to explain their situational contexts, the proper mathematical knowledge and number operations have to be selected. To improve students' ability in solving…

  17. The Difficult Patron Situation: A Window of Opportunity To Improve Library Service.

    ERIC Educational Resources Information Center

    Sarkodie-Mensah, Kwasi

    2000-01-01

    Discusses the problem library patron from various fronts: historical, personality traits, importance of complaints, nature and types of problem patrons and their behavior, technology and the newly-bred problem patron, strategies for dealing with problem patrons, and ensuring that library administrators and other supervisors understand the need to…

  18. Multidimensional Functional Behaviour Assessment within a Problem Analysis Framework.

    ERIC Educational Resources Information Center

    Ryba, Ken; Annan, Jean

    This paper presents a new approach to contextualized problem analysis developed for use with multimodal Functional Behaviour Assessment (FBA) at Massey University in Auckland, New Zealand. The aim of problem analysis is to simplify complex problems that are difficult to understand. It accomplishes this by providing a high order framework that can…

  19. Crisis management during anaesthesia: difficult intubation.

    PubMed

    Paix, A D; Williamson, J A; Runciman, W B

    2005-06-01

    Anaesthetists may unexpectedly experience difficulty with intubation, which may be associated with difficulty in ventilating the patient. If not well managed, there may be serious consequences for the patient. A simple structured approach to this problem was developed to assist the anaesthetist in this difficult situation. The aim was to examine the role of a specific sub-algorithm for the management of difficult intubation. The potential performance of a structured approach, developed by review of the literature and analysis of each of the relevant incidents among the first 4000 reported to the Australian Incident Monitoring Study (AIMS), was compared with the actual management as reported by the anaesthetists involved. There were 147 reports of difficult intubation capable of analysis among the first 4000 incidents reported to AIMS. The difficulty was unexpected in 52% of cases; major physiological changes occurred in 37% of these cases. Saturation fell below 90% in 22% of cases, oesophageal intubation was reported in 19%, and an emergency transtracheal airway was required in 4% of cases. Obesity and limited neck mobility and mouth opening were the most common anatomical contributing factors. The data confirm previously reported failures to predict difficult intubation with existing preoperative clinical tests and suggest an ongoing need to teach a pre-learned strategy to deal with difficult intubation and any associated problem with ventilation. An easy-to-follow structured approach to these problems is outlined. It is recommended that skilled assistance be obtained (preferably another anaesthetist) when difficulty is expected or the patient's cardiorespiratory reserve is low. Patients should be assessed postoperatively to exclude any sequelae and to inform them of the difficulties encountered. These should be clearly documented and appropriate steps taken to warn future anaesthetists.

  20. Transactional relations between caregiving stress, executive functioning, and problem behavior from early childhood to early adolescence

    PubMed Central

    LaGasse, Linda L.; Conradt, Elisabeth; Karalunas, Sarah L.; Dansereau, Lynne M.; Butner, Jonathan E.; Shankaran, Seetha; Bada, Henrietta; Bauer, Charles R.; Whitaker, Toni M.; Lester, Barry M.

    2016-01-01

    Developmental psychopathologists face the difficult task of identifying the environmental conditions that may contribute to early childhood behavior problems. Highly stressed caregivers can exacerbate behavior problems, while children with behavior problems may make parenting more difficult and increase caregiver stress. It remains unknown (1) how these transactions originate, (2) whether they persist over time to contribute to the development of problem behavior, and (3) what role resilience factors, such as child executive functioning, may play in mitigating the development of problem behavior. In the present study, transactional relations between caregiving stress, executive functioning, and behavior problems were examined in a sample of 1,388 children with prenatal drug exposure at three developmental time points: early childhood (birth to age 5), middle childhood (ages 6 to 9), and early adolescence (ages 10 to 13). Transactional relations differed between caregiving stress and internalizing versus externalizing behavior. Targeting executive functioning in evidence-based interventions for children with prenatal substance exposure who present with internalizing problems, and treating caregiver psychopathology, depression, and parenting stress in early childhood, may be particularly important for children presenting with internalizing behavior. PMID:27427803

  1. A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
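
    The specific optimizers in the cascade are not named in the abstract; the following is a hedged toy sketch of the strategy itself: one optimizer runs, the design variables are pseudorandomly perturbed, and a second optimizer takes over. The test function and both optimizers are invented for illustration.

```python
import math
import random

def rastrigin(p):
    """Multimodal test function; global minimum 0 at the origin."""
    return sum(c * c - 10.0 * math.cos(2.0 * math.pi * c) + 10.0 for c in p)

def random_search(f, x, rng, radius=1.0, iters=300):
    """Global-ish first stage: accept any improving random neighbor."""
    best = list(x)
    for _ in range(iters):
        trial = [c + rng.uniform(-radius, radius) for c in best]
        if f(trial) < f(best):
            best = trial
    return best

def coord_descent(f, x, step=0.25, iters=200):
    """Local second stage: compass search with step halving."""
    x = list(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5
            if step < 1e-9:
                break
    return x

def cascade(f, x0, seed=0):
    """Cascade strategy: optimizer A, a pseudorandom perturbation of the
    design variables, then optimizer B."""
    rng = random.Random(seed)
    x = random_search(f, x0, rng)
    x = [c + 0.01 * rng.uniform(-1.0, 1.0) for c in x]  # inter-stage jitter
    return coord_descent(f, x)
```

    The division of labor mirrors the paper's rationale: the first stage escapes basins the second would stall in, the jitter keeps the hand-off from inheriting a degenerate point, and the second stage polishes the solution, so the sequence can succeed where either optimizer alone struggles.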

  2. The stories they tell: How third year medical students portray patients, family members, physicians, and themselves in difficult encounters.

    PubMed

    Shapiro, Johanna; Rakhra, Pavandeep; Wong, Adrianne

    2016-10-01

    Physicians have long had patients whom they have labeled "difficult", but little is known about how medical students perceive difficult encounters with patients. In this study, we analyzed 134 third year medical students' reflective essays written over an 18-month period about difficult student-patient encounters. We used a qualitative computerized software program, Atlas.ti to analyze students' observations and reflections. Main findings include that students described patients who were angry and upset; noncompliant with treatment plans; discussed "nonmedical" problems; fearful, worried, withdrawn, or "disinterested" in their health. Students often described themselves as anxious, uncertain, confused, and frustrated. Nevertheless, they saw themselves behaving in empathic and patient-centered ways while also taking refuge in "standard" behaviors not necessarily appropriate to the circumstances. Students rarely mentioned receiving guidance from attendings regarding how to manage these challenging interactions. These third-year medical students recognized the importance of behaving empathically in difficult situations and often did so. However, they often felt overwhelmed and frustrated, resorting to more reductive behaviors that did not match the needs of the patient. Students need more guidance from attending physicians in order to approach difficult interactions with specific problem-solving skills while maintaining an empathic, patient-centered context.

  3. Using functional neuroimaging combined with a think-aloud protocol to explore clinical reasoning expertise in internal medicine.

    PubMed

    Durning, Steven J; Graner, John; Artino, Anthony R; Pangaro, Louis N; Beckman, Thomas; Holmboe, Eric; Oakes, Terrance; Roy, Michael; Riedy, Gerard; Capaldi, Vincent; Walter, Robert; van der Vleuten, Cees; Schuwirth, Lambert

    2012-09-01

    Clinical reasoning is essential to medical practice, but because it entails internal mental processes, it is difficult to assess. Functional magnetic resonance imaging (fMRI) and think-aloud protocols may improve understanding of clinical reasoning because these methods can assess those processes more directly. The objective of our study was to use a combination of fMRI and think-aloud procedures to examine fMRI correlates of a leading theoretical model of clinical reasoning based on experimental findings to date: analytic (i.e., actively comparing and contrasting diagnostic entities) and nonanalytic (i.e., pattern recognition) reasoning. We hypothesized that there would be functional neuroimaging differences between analytic and nonanalytic reasoning. Seventeen board-certified experts in internal medicine answered and reflected on validated U.S. Medical Licensing Exam and American Board of Internal Medicine multiple-choice questions (easy and difficult) during an fMRI scan. This procedure was followed by a formal think-aloud procedure. fMRI findings provide some support for the presence of analytic and nonanalytic reasoning systems. Statistically significant activation of the prefrontal cortex distinguished answering incorrectly versus correctly (p < 0.01), whereas activation of the precuneus and midtemporal gyrus distinguished not guessing from guessing (p < 0.01). We found limited fMRI evidence to support analytic and nonanalytic reasoning theory, as our results indicate functional differences for correct vs. incorrect answers and for guessing vs. not guessing. However, our findings did not suggest one consistent fMRI activation pattern of internal medicine expertise. This approach of examining fMRI correlates offers opportunities to enhance our understanding of theory, as well as to improve our teaching and assessment of clinical reasoning, a key outcome of medical education.

  4. Multimedia Analysis plus Visual Analytics = Multimedia Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinchor, Nancy; Thomas, James J.; Wong, Pak C.

    2010-10-01

    Multimedia analysis has focused on images, video, and to some extent audio, and has made progress within single channels, but largely excluding text. Visual analytics has focused on user interaction with data during the analytic process, plus the fundamental mathematics, and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is combining multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.

  5. Transboundary environmental assessment: lessons from OTAG. The Ozone Transport Assessment Group.

    PubMed

    Farrell, Alexander E; Keating, Terry J

    2002-06-15

    The nature and role of assessments in creating policy for transboundary environmental problems is discussed. Transboundary environmental problems are particularly difficult to deal with because they typically require cooperation among independent political jurisdictions (e.g., states or nations) which face differing costs and benefits and which often have different technical capabilities and different interests. In particular, transboundary pollution issues generally involve the problem of an upstream source and a downstream receptor on opposite sides of a relevant political boundary, making it difficult for the jurisdiction containing the receptor to obtain relief from the pollution problem. The Ozone Transport Assessment Group (OTAG) addressed such a transboundary problem: the long-range transport of tropospheric ozone (i.e., photochemical smog) across the eastern United States. The evolution of the science and policy that led to OTAG, the OTAG process, and its outcomes are presented. Lessons that are available to be learned from the OTAG experience, particularly for addressing similar transboundary problems such as regional haze, are discussed.

  6. SYNTHESIS REPORT ON FIVE DENSE, NONAQUEOUS-PHASE LIQUID (DNAPL) REMEDIATION PROJECTS

    EPA Science Inventory

    Dense non-aqueous phase liquid (DNAPL) poses a difficult problem for subsurface remediation because it serves as a continuing source to dissolved phase ground water contamination and is difficult to remove from interstitial pore space or bedrock fractures in the subsurface. Numer...

  7. REVIEWS OF TOPICAL PROBLEMS: Analytic calculations on digital computers for applications in physics and mathematics

    NASA Astrophysics Data System (ADS)

    Gerdt, V. P.; Tarasov, O. V.; Shirkov, Dmitrii V.

    1980-01-01

    The present state of analytic calculations on computers is reviewed. Several programming systems which are used for analytic calculations are discussed: SCHOONSCHIP, CLAM, REDUCE-2, SYMBAL, CAMAL, AVTO-ANALITIK, MACSYMA, etc. It is shown that these systems can be used to solve a wide range of problems in physics and mathematics. Some physical applications are discussed in celestial mechanics, the general theory of relativity, quantum field theory, plasma physics, hydrodynamics, atomic and molecular physics, and quantum chemistry. Some mathematical applications which are discussed are evaluating indefinite integrals, solving differential equations, and analyzing mathematical expressions. This review is addressed to physicists and mathematicians working in a wide range of fields.

  8. Hypergeometric Series Solution to a Class of Second-Order Boundary Value Problems via Laplace Transform with Applications to Nanofluids

    NASA Astrophysics Data System (ADS)

    Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.

    2017-03-01

    Very recently, it was observed that the temperature of nanofluids is ultimately governed by second-order ordinary differential equations with variable coefficients of exponential order. Such coefficients can be transformed to polynomial type by introducing new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomial type is solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, the present results are applied to selected nanofluid problems from the literature, whose exact solutions are recovered as special cases of our generalized analytical solution.
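    An illustrative equation of this type (not the specific nanofluid model treated in the paper) shows how such a change of independent variable works: with t = e^{-x}, exponential coefficients become polynomial.

```latex
\begin{aligned}
&u''(x) + \alpha\, e^{-x} u'(x) + \beta\, e^{-2x} u(x) = 0,
  \qquad t = e^{-x},\\
&u'(x) = -t\, u_t, \qquad u''(x) = t^2 u_{tt} + t\, u_t,\\
&\Rightarrow\quad t\, u_{tt} + (1 - \alpha t)\, u_t + \beta t\, u = 0,
\end{aligned}
```

    which is a second-order equation with polynomial coefficients of confluent-hypergeometric type, the class solved in the paper.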

  9. Andrei Andreevich Bolibrukh's works on the analytic theory of differential equations

    NASA Astrophysics Data System (ADS)

    Anosov, Dmitry V.; Leksin, Vladimir P.

    2011-02-01

    This paper contains an account of A.A. Bolibrukh's results obtained in the new directions of research that arose in the analytic theory of differential equations as a consequence of his sensational counterexample to the Riemann-Hilbert problem. A survey of results of his students in developing topics first considered by Bolibrukh is also presented. The main focus is on the role of the reducibility/irreducibility of systems of linear differential equations and their monodromy representations. A brief synopsis of results on the multidimensional Riemann-Hilbert problem and on isomonodromic deformations of Fuchsian systems is presented, and the main methods in the modern analytic theory of differential equations are sketched. Bibliography: 69 titles.

  10. Analysis and gyrokinetic simulation of MHD Alfven wave interactions

    NASA Astrophysics Data System (ADS)

    Nielson, Kevin Derek

    The study of low-frequency turbulence in magnetized plasmas is a difficult problem due to both the enormous range of scales involved and the variety of physics encompassed over this range. Much of the progress that has been made in turbulence theory is based upon a result from incompressible magnetohydrodynamics (MHD), in which energy is transferred from large scales to small only via the collision of Alfven waves propagating oppositely along the mean magnetic field. Improvements in laboratory devices and satellite measurements have demonstrated that, while theories based on this premise are useful over inertial ranges, describing turbulence at scales that approach particle gyroscales requires new theory. In this thesis, we examine the limits of incompressible MHD theory in describing collisions between pairs of Alfven waves. This interaction represents the fundamental unit of plasma turbulence. To study this interaction, we develop an analytic theory describing the nonlinear evolution of interacting Alfven waves and compare this theory to simulations performed using the gyrokinetic code AstroGK. Gyrokinetics captures a much richer set of physics than incompressible MHD and is well suited to describing Alfvenic turbulence around the ion gyroscale. We demonstrate that AstroGK is well suited to the study of physical Alfven waves by reproducing laboratory Alfven dispersion data collected using the LAPD. Additionally, we have developed an initialization algorithm for AstroGK that allows exact Alfven eigenmodes to be initialized with user-specified amplitudes and phases. We demonstrate that our analytic theory based upon incompressible MHD gives excellent agreement with gyrokinetic simulations for weakly turbulent collisions in the limit k⊥ρi ≪ 1. In this limit, agreement is observed in the time evolution of nonlinear products and in the strength of nonlinear interaction with respect to polarization and scale. We also examine the effect of wave amplitude upon the validity of our analytic solution, exploring the nature of strong turbulence. In the kinetic limit k⊥ρi ≳ 1, where incompressible MHD is no longer a valid description, we illustrate how the nonlinear evolution departs from our analytic expression. The analytic theory we develop provides a framework from which more sophisticated theories of weak and strong inertial-range turbulence may be developed. Characterization of the limits of this theory may provide guidance in the development of kinetic Alfven wave turbulence theory.

  11. Analytical Cost Metrics : Days of Future Past

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, Nirmal; Rajopadhye, Sanjay; Djidjev, Hristo Nikolov

    As we move towards the exascale era, new architectures must be capable of running massive computational problems efficiently. Scientists and researchers are continuously investing in tuning the performance of extreme-scale computational problems. These problems arise in almost all areas of computing, ranging from big data analytics, artificial intelligence, search, machine learning, virtual/augmented reality, computer vision, and image/signal processing to computational science and bioinformatics. With Moore’s law driving the evolution of hardware platforms towards exascale, the dominant performance metric (time efficiency) has now expanded to also incorporate power/energy efficiency. Therefore, the major challenge that we face in computing systems research is: “how to solve massive-scale computational problems in the most time/power/energy efficient manner?”

  12. Application of the boundary integral method to immiscible displacement problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masukawa, J.; Horne, R.N.

    1988-08-01

    This paper presents an application of the boundary integral method (BIM) to fluid displacement problems to demonstrate its usefulness in reservoir simulation. A method for solving two-dimensional (2D), piston-like displacement for incompressible fluids with good accuracy has been developed. Several typical example problems with repeated five-spot patterns were solved for various mobility ratios. The solutions were compared with the analytical solutions to demonstrate accuracy. Singularity programming was found to be a major advantage in handling flow in the vicinity of wells. The BIM was found to be an excellent way to solve immiscible displacement problems. Unlike analytic methods, it can accommodate complex boundary shapes and does not suffer from numerical dispersion at the front.

  13. Applications of the Analytical Electron Microscope to Materials Science

    NASA Technical Reports Server (NTRS)

    Goldstein, J. I.

    1992-01-01

    In the last 20 years, the analytical electron microscope (AEM) has allowed investigators to obtain chemical and structural information from regions less than 50 nanometers in diameter in thin samples of materials, and to explore problems where reactions occur at boundaries and interfaces or within small particles or phases in bulk samples. Examples of the application of the AEM to materials science problems are presented in this paper and demonstrate the usefulness and future potential of this instrument.

  14. An Extension of Holographic Moiré to Micromechanics

    NASA Astrophysics Data System (ADS)

    Sciammarella, C. A.; Sciammarella, F. M.

    The electronic Holographic Moiré is an ideal tool for micromechanics studies. It does not require modification of the surface by the introduction of a reference grating. This is of particular advantage when dealing with materials such as solid propellant grains, whose chemical nature and surface finish make the application of a reference grating very difficult. Traditional electronic Holographic Moiré presents some difficult problems when large magnifications are needed and large rigid-body motion takes place. This paper presents developments that solve these problems and extend the application of the technique to micromechanics.

  15. Expedited Selection of NMR Chiral Solvating Agents for Determination of Enantiopurity

    PubMed Central

    2016-01-01

    The use of NMR chiral solvating agents (CSAs) for the analysis of enantiopurity has been known for decades, but has been supplanted in recent years by chromatographic enantioseparation technology. While chromatographic methods for the analysis of enantiopurity are now commonplace and easy to implement, there are still individual compounds and entire classes of analytes where enantioseparation can prove extremely difficult, notably, compounds that are chiral by virtue of very subtle differences such as isotopic substitution or small differences in alkyl chain length. NMR analysis using CSAs can often be useful for such problems, but the traditional approach to selection of an appropriate CSA and the development of an NMR-based analysis method often involves a trial-and-error approach that can be relatively slow and tedious. In this study, we describe a high-throughput experimentation approach to the selection of NMR CSAs that employs automation-enabled screening of prepared libraries of CSAs in a systematic fashion. This approach affords excellent results for a standard set of enantioenriched compounds, providing a valuable comparative data set for the effectiveness of CSAs for different classes of compounds. In addition, the technique has been successfully applied to challenging pharmaceutical development problems that are not amenable to chromatographic solutions. Overall, this methodology provides a rapid and powerful approach for investigating enantiopurity that complements and augments conventional chromatographic approaches. PMID:27280168

  16. Measuring nanoparticles size distribution in food and consumer products: a review.

    PubMed

    Calzolai, L; Gilliland, D; Rossi, F

    2012-08-01

    Nanoparticles are already used in several consumer products including food, food packaging and cosmetics, and their detection and measurement in food represent a particularly difficult challenge. In order to fill the void in the official definition of what constitutes a nanomaterial, the European Commission published in October 2011 its recommendation on the definition of 'nanomaterial'. This will have an impact in many different areas of legislation, such as the European Cosmetic Products Regulation, where the current definitions of nanomaterial will come under discussion regarding how they should be adapted in light of this new definition. This new definition calls for the measurement of the number-based particle size distribution in the 1-100 nm size range of all the primary particles present in the sample independently of whether they are in a free, unbound state or as part of an aggregate/agglomerate. This definition does present great technical challenges for those who must develop valid and compatible measuring methods. This review will give an overview of the current state of the art, focusing particularly on the suitability of the most used techniques for the size measurement of nanoparticles when addressing this new definition of nanomaterials. The problems to be overcome in measuring nanoparticles in food and consumer products will be illustrated with some practical examples. Finally, a possible way forward (based on the combination of different measuring techniques) for solving this challenging analytical problem is illustrated.
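    The number-based criterion at the heart of the 2011 EC recommendation, that 50% or more of the particles in the number-based size distribution fall in the 1-100 nm range, can be stated as a short sketch. The function name and list-based input are illustrative; real measurements would come from the techniques the review compares.

```python
def is_nanomaterial(diameters_nm, threshold=0.5):
    """Simplified sketch of the EC 2011 number-based criterion: the material
    counts as a nanomaterial if the fraction of primary particles with size
    in the 1-100 nm range meets the threshold (default 50%)."""
    in_range = sum(1 for d in diameters_nm if 1.0 <= d <= 100.0)
    return in_range / len(diameters_nm) >= threshold
```

    Note that this counts particles, not mass: a few large particles dominating the mass distribution do not prevent classification if most particles by number are below 100 nm, which is exactly why the definition is technically demanding for measurement methods.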

  17. More Analytical Tools for Fluids Management in Space

    NASA Astrophysics Data System (ADS)

    Weislogel, Mark

    Continued advances during the 2000-2010 decade in the analysis of a class of capillary-driven flows relevant to materials processing and fluids management aboard spacecraft have been made. The class of flows addressed concern combined forced and spontaneous capillary flows in complex containers with interior edges. Such flows are commonplace in space-based fluid systems and arise from the particular container geometry and wetting properties of the system. Important applications for this work include low-g liquid fill and/or purge operations and passive fluid phase separation operations, where the container (i.e. fuel tank, water processer, etc.) geometry possesses interior edges, and where quantitative information of fluid location, transients, flow rates, and stability is critical. Examples include the storage and handling of liquid propellants and cryogens, water conditioning for life support, fluid phase-change thermal systems, materials processing in the liquid state, on-orbit biofluids processing, among others. For a growing number of important problems, closed-form expressions to transient three-dimensional flows are possible that, as design tools, replace difficult, time-consuming, and rarely performed numerical calculations. An overview of a selection of solutions in-hand is presented with example problems solved. NASA drop tower, low-g aircraft, and ISS flight ex-periment results are employed where practical to buttress the theoretical findings. The current review builds on a similar review presented at COSPAR, 2002, for the approximate decade 1990-2000.

  18. Effects of intrinsic stochasticity on delayed reaction-diffusion patterning systems.

    PubMed

    Woolley, Thomas E; Baker, Ruth E; Gaffney, Eamonn A; Maini, Philip K; Seirin-Lee, Sungrim

    2012-05-01

    Cellular gene expression is a complex process involving many steps, including the transcription of DNA and translation of mRNA; hence the synthesis of proteins requires a considerable amount of time, from ten minutes to several hours. Since diffusion-driven instability has been observed to be sensitive to perturbations in kinetic delays, the application of Turing patterning mechanisms to the problem of producing spatially heterogeneous differential gene expression has been questioned. In deterministic systems a small delay in the reactions can cause a large increase in the time it takes a system to pattern. Recently, it has been observed that in undelayed systems intrinsic stochasticity can cause pattern initiation to occur earlier than in the analogous deterministic simulations. Here we are interested in adding both stochasticity and delays to Turing systems in order to assess whether stochasticity can reduce the patterning time scale in delayed Turing systems. As analytical insights to this problem are difficult to attain and often limited in their use, we focus on stochastically simulating delayed systems. We consider four different Turing systems and two different forms of delay. Our results are mixed and lead to the conclusion that, although the sensitivity to delays in the Turing mechanism is not completely removed by the addition of intrinsic noise, the effects of the delays are clearly ameliorated in certain specific cases.
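    The idea of stochastically simulating a delayed system can be sketched for a far simpler model than the Turing systems in the paper: a single birth-death species in which production initiated at time t completes only at t + tau, while degradation acts immediately. The reaction scheme, parameters, and event-queue bookkeeping here are illustrative assumptions.

```python
import heapq
import random

def delayed_ssa(t_end, k_prod, k_deg, tau, seed=0):
    """Minimal delayed stochastic simulation of a birth-death process:
    production events add a molecule only after a fixed delay tau;
    degradation removes a molecule immediately. Returns the copy number
    at time t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    pending = []  # completion times of delayed production events
    while t < t_end:
        a_tot = k_prod + k_deg * n
        dt = rng.expovariate(a_tot) if a_tot > 0 else float("inf")
        t_next = t + dt
        # if a scheduled delayed completion occurs first, apply it instead
        if pending and pending[0] <= t_next:
            t = heapq.heappop(pending)
            if t >= t_end:
                break
            n += 1
            continue
        if t_next >= t_end:
            break
        t = t_next
        if rng.random() * a_tot < k_prod:
            heapq.heappush(pending, t + tau)  # initiate delayed production
        else:
            n -= 1  # immediate degradation
    return n
```

    The priority queue of pending completions is the standard device for adding delays to an otherwise memoryless simulation; the exponential waiting time can be redrawn after each completion because of the memoryless property.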

  19. Well-posedness, linear perturbations, and mass conservation for the axisymmetric Einstein equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dain, Sergio; Ortiz, Omar E.; Facultad de Matematica, Astronomia y Fisica, FaMAF, Universidad Nacional de Cordoba, Instituto de Fisica Enrique Gaviola, IFEG, CONICET, Ciudad Universitaria

    2010-02-15

    For axially symmetric solutions of Einstein equations there exists a gauge which has the remarkable property that the total mass can be written as a conserved, positive definite integral on the spacelike slices. The mass integral provides a nonlinear control of the variables along the whole evolution. In this gauge, Einstein equations reduce to a coupled hyperbolic-elliptic system which is formally singular at the axis. As a first step in analyzing this system of equations we study linear perturbations on a flat background. We prove that the linear equations reduce to a very simple system of equations which provide, through the mass formula, useful insight into the structure of the full system. However, the singular behavior of the coefficients at the axis makes the study of this linear system difficult from the analytical point of view. In order to understand the behavior of the solutions, we study their numerical evolution. We provide strong numerical evidence that the system is well-posed and that its solutions have the expected behavior. Finally, this linear system allows us to formulate a model problem which is physically interesting in itself, since it is connected with the linear stability of black hole solutions in axial symmetry. This model can contribute significantly to solving the nonlinear problem and at the same time it appears to be tractable.

  20. Metal-Organic Framework Modified Glass Substrate for Analysis of Highly Volatile Chemical Warfare Agents by Paper Spray Mass Spectrometry.

    PubMed

    Dhummakupt, Elizabeth S; Carmany, Daniel O; Mach, Phillip M; Tovar, Trenton M; Ploskonka, Ann M; Demond, Paul S; DeCoste, Jared B; Glaros, Trevor

    2018-03-07

    Paper spray mass spectrometry has been shown to successfully analyze chemical warfare agent (CWA) simulants. However, due to the volatility differences between the simulants and real G-series (i.e., sarin, soman) CWAs, analysis from an untreated paper substrate proved difficult. To extend the analytical lifetime of these G-agents, metal-organic frameworks (MOFs) were successfully integrated onto the paper spray substrates to increase adsorption and desorption. In this study, several MOFs and nanoparticles were tested to extend the analytical lifetimes of sarin, soman, and cyclosarin on paper spray substrates. It was found that the addition of either UiO-66 or HKUST-1 to the paper substrate increased the analytical lifetime of the G-agents from less than 5 min detectability to at least 50 min.

  1. Validation of an Active Gear, Flexible Aircraft Take-off and Landing analysis (AGFATL)

    NASA Technical Reports Server (NTRS)

    Mcgehee, J. R.

    1984-01-01

    The results of an analytical investigation using a computer program for active gear, flexible aircraft take off and landing analysis (AGFATL) are compared with experimental data from shaker tests, drop tests, and simulated landing tests to validate the AGFATL computer program. Comparison of experimental and analytical responses for both passive and active gears indicates good agreement for shaker tests and drop tests. For the simulated landing tests, the passive and active gears were influenced by large strut binding friction forces. The inclusion of these undefined forces in the analytical simulations was difficult, and consequently only fair to good agreement was obtained. An assessment of the results from the investigation indicates that the AGFATL computer program is a valid tool for the study and initial design of series hydraulic active control landing gear systems.

  2. Current Status of Mycotoxin Analysis: A Critical Review.

    PubMed

    Shephard, Gordon S

    2016-07-01

    It is over 50 years since the discovery of aflatoxins focused the attention of food safety specialists on fungal toxins in the feed and food supply. Since then, analysis of this important group of natural contaminants has advanced in parallel with general developments in analytical science, and current MS methods are capable of simultaneously analyzing hundreds of compounds, including mycotoxins, pesticides, and drugs. This profusion of data may advance our understanding of human exposure, yet constitutes an interpretive challenge to toxicologists and food safety regulators. Despite these advances in analytical science, the basic problem of the extreme heterogeneity of mycotoxin contamination, although now well understood, cannot be circumvented. The real health challenges posed by mycotoxin exposure occur in the developing world, especially among small-scale and subsistence farmers. Addressing these problems requires innovative approaches in which analytical science must also play a role in providing suitable out-of-laboratory analytical techniques.

  3. Auditory Processing Disorder (For Parents)

    MedlinePlus

    ... or other speech-language difficulties? Are verbal (word) math problems difficult for your child? Is your child ... inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  4. Metacognition: Student Reflections on Problem Solving

    ERIC Educational Resources Information Center

    Wismath, Shelly; Orr, Doug; Good, Brandon

    2014-01-01

    Twenty-first century teaching and learning focus on the fundamental skills of critical thinking and problem solving, creativity and innovation, and collaboration and communication. Metacognition is a crucial aspect of both problem solving and critical thinking, but it is often difficult to get students to engage in authentic metacognitive…

  5. Beyond Utility Targeting: Toward Axiological Air Operations

    DTIC Science & Technology

    2000-01-01

    encounter the leader-sociopath, bereft of values, quite willing to live underground in hiding and insensitive to the absence of human comforts... that is a mere one thousand value-analysis problems to begin solving. A more difficult problem to solve is the problem of the leader-sociopath

  6. Modeling of heat flow and effective thermal conductivity of fractured media: Analytical and numerical methods

    NASA Astrophysics Data System (ADS)

    Nguyen, S. T.; Vu, M.-H.; Vu, M. N.; Tang, A. M.

    2017-05-01

    The present work aims at modeling the thermal conductivity of fractured materials using homogenization-based analytical methods and pattern-based numerical methods. These materials are considered as a network of cracks distributed inside a solid matrix. Heat flow through such media is perturbed by the crack system. The problem of heat flow across a single crack is first investigated. The classical Eshelby solution, extended to the thermal conduction problem of an ellipsoidal inclusion embedded in an infinite homogeneous matrix, gives an analytical solution for the temperature discontinuity across a non-conducting penny-shaped crack. This solution is then validated by numerical simulation based on the finite element method. The numerical simulation allows analyzing the effect of crack conductivity. The problem of a single crack is then extended to a medium containing multiple cracks. Analytical estimates of the effective thermal conductivity, which take into account the interaction between cracks and their spatial distribution, are developed for the case of non-conducting cracks. A pattern-based numerical method is then employed for both non-conducting and conducting cracks. In the case of non-conducting cracks, the numerical and analytical methods, both of which account for the spatial distribution of the cracks, agree perfectly. In the case of conducting cracks, numerical analysis of the crack conductivity effect shows that even highly conducting cracks weakly affect heat flow and the effective thermal conductivity of fractured media.
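    As a hedged illustration of such analytical estimates, the classical dilute and Mori-Tanaka-type formulas for randomly oriented, insulating penny-shaped cracks are short enough to code directly. These are textbook non-interaction and mean-field results, assumed here for illustration, not the interacting scheme developed in the paper; the 8/9 coefficient is the standard dilute-limit value for insulating circular cracks.

```python
def k_eff_dilute(k_matrix, eps):
    """Dilute (non-interaction) estimate for randomly oriented insulating
    penny-shaped cracks; eps = N a^3 / V is the crack density parameter."""
    return k_matrix * (1.0 - 8.0 * eps / 9.0)

def k_eff_mori_tanaka(k_matrix, eps):
    """Mori-Tanaka-type estimate for the same microstructure; unlike the
    dilute formula it stays positive at higher crack densities."""
    return k_matrix / (1.0 + 8.0 * eps / 9.0)
```

    Both estimates agree to first order in the crack density and diverge as cracks begin to interact, which is the regime where the paper's pattern-based numerical method is needed.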

  7. Early developmental characteristics and features of major depressive disorder among child psychiatric patients in Hungary.

    PubMed

    Kapornai, Krisztina; Gentzler, Amy L; Tepper, Ping; Kiss, Eniko; Mayer, László; Tamás, Zsuzsanna; Kovacs, Maria; Vetró, Agnes

    2007-06-01

    We investigate the relations of early atypical characteristics (perinatal problems, developmental delay, and difficult temperament) to the onset age (as well as severity) of first major depressive disorder (MDD) and first internalizing disorder in a clinical sample of depressed children in Hungary. Participants were 371 children (ages 7-14) with MDD, and their biological mothers, recruited through multiple clinical sites. Diagnoses (via DSM-IV criteria) and onset dates of disorders were finalized by "best estimate" psychiatrists and based on multiple information sources. Mothers provided developmental data in a structured interview. Difficult temperament predicted earlier onset of MDD and first internalizing disorder, but its effect was ameliorated if the family was intact during early childhood. Further, the importance of difficult temperament decreased as a function of time. Perinatal problems and developmental delay did not impact onset ages of disorders, and none of the early childhood characteristics was associated with MDD episode severity. Children with MDD may have the added disadvantage of earlier onset if they had a difficult temperament in infancy. Because early temperament mirrors physiological reactivity and regulatory capacity, it can affect various areas of functioning related to psychopathology. Early caregiver stability may attenuate some adverse effects of difficult infant temperament.

  8. Using Predictor-Corrector Methods in Numerical Solutions to Mathematical Problems of Motion

    ERIC Educational Resources Information Center

    Lewis, Jerome

    2005-01-01

    In this paper, the author looks at some classic problems in mathematics that involve motion in the plane. Many case problems like these are difficult and beyond the mathematical skills of most undergraduates, but computational approaches often require less insight into the subtleties of the problems and can be used to obtain reliable solutions.…
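    A minimal example of the predictor-corrector idea the paper discusses is Heun's method applied to drag-free projectile motion: an explicit Euler step predicts the next state, and a trapezoidal average of slopes corrects it. The problem and parameter values are illustrative, not the paper's specific case problems.

```python
def heun_projectile(v0x, v0y, g=9.81, h=0.01, steps=200):
    """Heun predictor-corrector for projectile motion without drag,
    starting from the origin. Returns position after steps*h seconds."""
    x, y, vx, vy = 0.0, 0.0, v0x, v0y
    for _ in range(steps):
        # predictor: plain Euler estimate of the end-of-step velocity
        vxp, vyp = vx, vy - h * g
        # corrector: advance position with the trapezoidal slope average
        x += h * 0.5 * (vx + vxp)
        y += h * 0.5 * (vy + vyp)
        vx, vy = vxp, vyp
    return x, y

# for constant gravity the corrected scheme reproduces the closed-form
# trajectory x = v0x*t, y = v0y*t - g*t**2/2 to rounding error
```

    For velocity-dependent forces such as air drag, where closed-form solutions are the "difficult" cases the paper refers to, the same two-step structure applies with the drag term added to both the predictor and corrector slopes.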

  9. The Identification and Significance of Intuitive and Analytic Problem Solving Approaches Among College Physics Students

    ERIC Educational Resources Information Center

    Thorsland, Martin N.; Novak, Joseph D.

    1974-01-01

    Described is an approach to assessment of intuitive and analytic modes of thinking in physics. These modes of thinking are associated with Ausubel's theory of learning. High ability in either intuitive or analytic thinking was associated with success in college physics, with high learning efficiency following a pattern expected on the basis of…

  10. The problem of self-disclosure in psychoanalysis.

    PubMed

    Meissner, W W

    2002-01-01

    The problem of self-disclosure is explored in relation to currently shifting paradigms of the nature of the analytic relation and analytic interaction. Relational and intersubjective perspectives emphasize the role of self-disclosure as not merely allowable, but as an essential facilitating aspect of the analytic dialogue, in keeping with the role of the analyst as a contributing partner in the process. At the opposite extreme, advocates of classical anonymity stress the importance of neutrality and abstinence. The paper seeks to chart a course between unconstrained self-disclosure and absolute anonymity, both of which foster misalliances. Self-disclosure is seen as at times contributory to the analytic process, and at times deleterious. The decision whether to self-disclose, what to disclose, and when and how, should be guided by the analyst's perspective on neutrality, conceived as a mental stance in which the analyst assesses and decides what, at any given point, seems to contribute to the analytic process and the patient's therapeutic benefit. The major risk in self-disclosure is the tendency to draw the analytic interaction into the real relation between analyst and patient, thus diminishing or distorting the therapeutic alliance, mitigating transference expression, and compromising therapeutic effectiveness.

  11. An Open-source Community Web Site To Support Ground-Water Model Testing

    NASA Astrophysics Data System (ADS)

    Kraemer, S. R.; Bakker, M.; Craig, J. R.

    2007-12-01

    A community wiki web site has been created as a resource to support ground-water model development and testing. The Groundwater Gourmet wiki is a repository for user-supplied analytical and numerical recipes, how-tos, and examples. Members are encouraged to submit analytical solutions, including source code and documentation. A diversity of code snippets is sought in a variety of languages, including Fortran, C, C++, Matlab, and Python. In the spirit of a wiki, all contributions may be edited and altered by other users, and open-source licensing is promoted. Community-accepted contributions are graduated into the library of analytic solutions and organized into either a Strack (Groundwater Mechanics, 1989) or Bruggeman (Analytical Solutions of Geohydrological Problems, 1999) classification. The examples section of the wiki is meant to include laboratory experiments (e.g., Hele-Shaw), classical benchmark problems (e.g., the Henry problem), and controlled field experiments (e.g., the Borden landfill and Cape Cod tracer tests). Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.

  12. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    NASA Astrophysics Data System (ADS)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.

  13. The identification of van Hiele level students on the topic of space analytic geometry

    NASA Astrophysics Data System (ADS)

    Yudianto, E.; Sunardi; Sugiarti, T.; Susanto; Suharto; Trapsilasiwi, D.

    2018-03-01

    Geometry topics are still considered difficult by most students. Therefore, this study focused on identifying students' van Hiele levels. The tasks used were developed from questions related to the analytic geometry of space. The results, involving 78 students who worked on these questions, were as follows: 11.54% (nine students) were classified at the visual level; 5.13% (four students) at the analysis level; 1.28% (one student) at the informal deduction level; 2.56% (two students) at the deduction level and 2.56% (two students) at the rigor level, while 76.93% (sixty students) were classified at the pre-visualization level.

  14. Transport of photons produced by lightning in clouds

    NASA Technical Reports Server (NTRS)

    Solakiewicz, Richard

    1991-01-01

    The optical effects of the light produced by lightning are of interest to atmospheric scientists for a number of reasons. Two techniques are mentioned which are used to explain the nature of these effects: Monte Carlo simulation; and an equivalent medium approach. In the Monte Carlo approach, paths of individual photons are simulated; a photon is said to be scattered if it escapes the cloud, otherwise it is absorbed. In the equivalent medium approach, the cloud is replaced by a single obstacle whose properties are specified by bulk parameters obtained by methods due to Twersky. Herein, Boltzmann transport theory is used to obtain photon intensities. The photons are treated like a Lorentz gas. Only elastic scattering is considered and gravitational effects are neglected. Water droplets comprising a cuboidal cloud are assumed to be spherical and homogeneous. Furthermore, it is assumed that the distribution of droplets in the cloud is uniform and that scattering by air molecules is negligible. The time dependence and five-dimensional nature of this problem make it particularly difficult; neither analytic nor numerical solutions are known.

  15. Use of the AHP methodology in system dynamics: Modelling and simulation for health technology assessments to determine the correct prosthesis choice for hernia diseases.

    PubMed

    Improta, Giovanni; Russo, Mario Alessandro; Triassi, Maria; Converso, Giuseppe; Murino, Teresa; Santillo, Liberatina Carmela

    2018-05-01

    Health technology assessments (HTAs) are often difficult to conduct because of the decisive procedures of the HTA algorithm, which are often complex and not easy to apply. Thus, their use is not always convenient or possible for the assessment of technical requests requiring a multidisciplinary approach. This paper aims to address this issue through a multi-criteria analysis focusing on the analytic hierarchy process (AHP). This methodology allows the decision maker to analyse and evaluate different alternatives and monitor their impact on different actors during the decision-making process. However, the multi-criteria analysis is implemented through a simulation model to overcome the limitations of the AHP methodology. Simulations help decision-makers to make an appropriate decision and avoid unnecessary and costly attempts. Finally, a decision problem regarding the evaluation of two health technologies, namely, the evaluation of two biological prostheses for incisional infected hernias, will be analysed to assess the effectiveness of the model. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
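
    The AHP step at the core of the methodology described above can be sketched as follows. The pairwise comparison values are invented for illustration, and the row geometric mean is used as the common approximation to Saaty's principal-eigenvector priorities; this is a generic AHP sketch, not the paper's model.

```python
# Sketch of the AHP priority computation: given a reciprocal pairwise
# comparison matrix (Saaty's 1-9 scale), approximate the principal
# eigenvector by the geometric mean of each row, then normalize.
# The comparison values below are illustrative, not from the study.
import math

def ahp_priorities(matrix):
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Three hypothetical criteria compared pairwise: a[i][j] = importance of i over j.
comparisons = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
weights = ahp_priorities(comparisons)
print([round(w, 3) for w in weights])  # normalized priority weights, sum to 1
```

    The resulting weight vector is what the decision maker would then feed into the simulation model to compare the alternative prostheses.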

  16. Plane Smoothers for Multiblock Grids: Computational Aspects

    NASA Technical Reports Server (NTRS)

    Llorente, Ignacio M.; Diskin, Boris; Melson, N. Duane

    1999-01-01

    Standard multigrid methods are not well suited for problems with anisotropic discrete operators, which can occur, for example, on grids that are stretched in order to resolve a boundary layer. One of the most efficient approaches to yield robust methods is the combination of standard coarsening with alternating-direction plane relaxation in all three dimensions. However, this approach may be difficult to implement in codes with multiblock structured grids because there may be no natural definition of global lines or planes. This inherent obstacle limits the range of an implicit smoother to only the portion of the computational domain in the current block. This report studies in detail, both numerically and analytically, the behavior of blockwise plane smoothers in order to provide guidance to engineers who use block-structured grids. The results obtained so far show alternating-direction plane smoothers to be very robust, even on multiblock grids. In common computational fluid dynamics multiblock simulations, where the number of subdomains crossed by the line of a strong anisotropy is low (up to four), textbook multigrid convergence rates can be obtained with a small overlap of cells between neighboring blocks.

  17. Developing Formal Correctness Properties from Natural Language Requirements

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation, specifically, to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, and the high learning curve for specification languages and associated tools, combined with schedule and budget pressure on projects, reduces training opportunities for engineers; and (4) the formulation of correctness properties for system models can be a difficult problem. This has relevance to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technological transfer potential, and the next steps.

  18. Identification of carbohydrates by matrix-free material-enhanced laser desorption/ionisation mass spectrometry.

    PubMed

    Hashir, Muhammad Ahsan; Stecher, Guenther; Bakry, Rania; Kasemsook, Saowapak; Blassnig, Bernhard; Feuerstein, Isabel; Abel, Gudrun; Popp, Michael; Bobleter, Ortwin; Bonn, Guenther K

    2007-01-01

    Matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF-MS) is a sensitive mass spectrometric technique which utilises acidic materials as matrices for laser energy absorption, desorption and ionisation of analytes. These matrix materials produce background signals, particularly in the low-mass range, and make the detection and identification of small molecules difficult, if not impossible. To overcome this problem this paper introduces matrix-free material-enhanced laser desorption/ionisation mass spectrometry (mf-MELDI-MS) for the screening and analysis of small molecules such as carbohydrates. For this purpose, 4,4'-azo-dianiline was immobilised on silica gel enabling the absorption of laser energy sufficient for successful desorption and ionisation of low molecular weight compounds. The particle and pore sizes, the solvent system for suspension and the sample preparation procedures have been optimised. The newly synthesised MELDI material delivered excellent spectra with regard to signal-to-noise ratio and detection sensitivity. Finally, wheat straw degradation products and Salix alba L. plant extracts were analysed proving the high performance and excellent behaviour of the introduced material. Copyright (c) 2007 John Wiley & Sons, Ltd.

  19. Design and test of a prototype thermal bus evaporator reservoir aboard the KC-135 0-g aircraft

    NASA Technical Reports Server (NTRS)

    Brown, Richard F.; Gustafson, Eric; Long, W. Russ

    1987-01-01

    The Thermal Bus Zero-G Reservoir Demonstration Experiment (RDE) has currently undergone two flights on the NASA-JSC KC-135 Reduced Gravity Aircraft. The objective of the experiment, which uses a smaller version of the evaporator reservoirs being designed for the Prototype Thermal Bus for Space Station, is to demonstrate proper 0-g operation of the reservoir in terms of fluid positioning, draining, and filling. The KC-135 was chosen to provide a cost-effective and timely evaluation of 0-g design issues that would be difficult to predict analytically. A total of fifty 0-g parabolas have been flown, each providing approximately 25-30 seconds of 0-g time. While problems have been encountered, the experiment has provided valuable design data on the 0-g operation of the reservoir. This paper documents the design of the experiment; the results of both flights, based on the high-speed movies taken during the flight and the visual observations of the experimenters; and the design modifications made as a result of the first flight and planned as a result of the second flight.

  20. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.

  1. Generalized activity equations for spiking neural network dynamics.

    PubMed

    Buice, Michael A; Chow, Carson C

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales: the spike duration is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.
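
    The "classic stiff problem" remark can be made concrete with a toy sketch: when a fast time scale (spike decay) coexists with slow dynamics, an explicit method must take steps sized to the fast scale or it becomes unstable. The rate constants below are invented for illustration.

```python
# Toy stiffness demo: explicit Euler on y' = -FAST*y (exact solution decays
# to zero). The step size must satisfy h*FAST < 2 for stability, even though
# the interesting (slow) dynamics would allow much larger steps.
FAST = 1000.0                     # fast "spike" rate; slow scales would be ~1

def euler_final(h, t_end=0.05):
    y = 1.0
    for _ in range(int(t_end / h)):
        y += h * (-FAST * y)      # explicit Euler update: y -> (1 - h*FAST)*y
    return y

stable = euler_final(h=0.0001)    # h*FAST = 0.1 < 2: decays toward zero
unstable = euler_final(h=0.01)    # h*FAST = 10 > 2: amplifies every step
print(stable, unstable)
```

    This is why rate-based (mean field) descriptions, which remove the fast spike time scale, are so much cheaper to simulate.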

  2. Is electrospray emission really due to columbic forces?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aliotta, Francesco, E-mail: aliotta@ipcf.cnr.it; Ponterio, Rosina C.; Salvato, Gabriele

    2014-09-15

    Electrospray ionization (ESI) is a widely adopted soft ionization method for mass spectroscopy (MS). In spite of the undeniable success of the technique, its mechanisms are difficult to model analytically because the process is characterized by non-equilibrium conditions. The common belief is that the formation of gas-phase ions takes place at the apex of the Taylor cone via electrophoretic charging. The charge balance implies that a conversion of electrons to ions should occur at the metal-liquid interface of the injector needle. We have found that the above description is based on unproved assumptions which are not consistent with the correct evaluation of the problem. The comparison between experiments performed under the usual geometry and observations obtained under symmetric field configurations suggests that the emitted droplets cannot be significantly charged or, at least, that any possible ionization mechanism is so poorly efficient that coulombic forces cannot play a major role in jet formation, even in cases where the liquid consists of a solution of ionic species. Further work is required to clearly understand how ionization occurs in ESI-MS.

  3. Beam position monitor engineering

    NASA Astrophysics Data System (ADS)

    Smith, Stephen R.

    1997-01-01

    The design of beam position monitors often involves challenging system design choices. Position transducers must be robust, accurate, and generate adequate position signal without unduly disturbing the beam. Electronics must be reliable and affordable, usually while meeting tough requirements on precision, accuracy, and dynamic range. These requirements may be difficult to achieve simultaneously, leading the designer into interesting opportunities for optimization or compromise. Some useful techniques and tools are shown. Both finite element analysis and analytic techniques will be used to investigate quasi-static aspects of electromagnetic fields such as the impedance of and the coupling of beam to striplines or buttons. Finite-element tools will be used to understand dynamic aspects of the electromagnetic fields of beams, such as wake fields and transmission-line and cavity effects in vacuum-to-air feedthroughs. Mathematical modeling of electrical signals through a processing chain will be demonstrated, in particular to illuminate areas where neither a pure time-domain nor a pure frequency-domain analysis is obviously advantageous. Emphasis will be on calculational techniques, in particular on using both time domain and frequency domain approaches to the applicable parts of interesting problems.

  4. Modeling silica aerogel optical performance by determining its radiative properties

    NASA Astrophysics Data System (ADS)

    Zhao, Lin; Yang, Sungwoo; Bhatia, Bikram; Strobach, Elise; Wang, Evelyn N.

    2016-02-01

    Silica aerogel has been known as a promising candidate for high performance transparent insulation material (TIM). Optical transparency is a crucial metric for silica aerogels in many solar related applications. Both scattering and absorption can reduce the amount of light transmitted through an aerogel slab. Due to multiple scattering, the transmittance deviates from the Beer-Lambert law (exponential attenuation). To better understand its optical performance, we decoupled and quantified the extinction contributions of absorption and scattering separately by identifying two sets of radiative properties. The radiative properties are deduced from the measured total transmittance and reflectance spectra (from 250 nm to 2500 nm) of synthesized aerogel samples by solving the inverse problem of the 1-D Radiative Transfer Equation (RTE). The obtained radiative properties are found to be independent of the sample geometry and can be considered intrinsic material properties, which originate from the aerogel's microstructure. This finding allows for these properties to be directly compared between different samples. We also demonstrate that by using the obtained radiative properties, we can model the photon transport in aerogels of arbitrary shapes, where an analytical solution is difficult to obtain.
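
    The deviation from the Beer-Lambert law mentioned above can be illustrated with a toy 1-D Monte Carlo sketch (this is not the paper's RTE inversion): scattered photons are redirected rather than lost, so the transmittance of a scattering slab exceeds the exponential prediction exp(-(mu_a + mu_s) d). All coefficients are invented for illustration.

```python
# Toy 1-D photon random walk through a slab with absorption and isotropic
# scattering, compared against the Beer-Lambert (pure extinction) prediction.
# Coefficients and thickness are illustrative only.
import math
import random

random.seed(0)
mu_a, mu_s, d = 0.5, 2.0, 1.0          # absorption, scattering (1/cm), thickness (cm)
mu_t = mu_a + mu_s                     # extinction coefficient

def transmitted(n=20000):
    hits = 0
    for _ in range(n):
        x, direction = 0.0, 1          # photon enters moving forward
        while True:
            # sample an exponential free path, then move along `direction`
            x += direction * -math.log(1.0 - random.random()) / mu_t
            if x >= d:
                hits += 1              # exits the far side: transmitted
                break
            if x <= 0.0:
                break                  # exits backwards: reflected, not lost inside
            if random.random() < mu_a / mu_t:
                break                  # absorbed at this collision
            direction = random.choice((1, -1))   # isotropic 1-D scattering
    return hits / n

mc = transmitted()
beer_lambert = math.exp(-mu_t * d)
print(mc, beer_lambert)                # Monte Carlo transmittance exceeds Beer-Lambert
```

    Separating mu_a and mu_s, as the paper does via the inverse RTE problem, is what makes the resulting radiative properties intrinsic to the material rather than to a particular sample geometry.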

  5. Thermal Analysis of ISS Service Module Active TCS

    NASA Technical Reports Server (NTRS)

    Altov, Vladimir V.; Zaletaev, Sergey V.; Belyavskiy, Evgeniy P.

    2000-01-01

    The ISS Service Module mission is to begin in July 2000. The verification of design thermal requirements relies mostly on thermal analysis. The thermal analysis is a rather difficult problem because of the large number of ISS configurations that have to be investigated and the various orbital environments. Besides, the ISS structure has articulating parts such as solar arrays and radiators. The presence of articulating parts greatly increases computation times and requires an accurate approach to the organization of calculations. The varying geometry requires us to calculate the view factors several times during the orbit, whereas in the static-geometry case we need do it only once. In this paper we consider the thermal mathematical model of the SM, which includes the TCS and structure thermal models, and discuss the results of calculations for ISS configurations 1R and 9A.1. The analysis is based on solving the nodal heat balance equations for the ISS structure by the Kutta-Merson method and on analytical solutions of the heat transfer equations for TCS units. The computations were performed using the thermal software TERM [1,2], which is briefly described.

  6. The Inverse Contagion Problem (ICP) vs. Predicting site contagion in real time, when network links are not observable

    NASA Astrophysics Data System (ADS)

    Mushkin, I.; Solomon, S.

    2017-10-01

    We study the inverse contagion problem (ICP). As opposed to the direct contagion problem, in which the network structure is known and the question is when each node will be contaminated, in the inverse problem the links of the network are unknown but a sequence of contagion histories (the times when each node was contaminated) is observed. We consider two versions of the ICP: The strong problem (SICP), which is the reconstruction of the network and has been studied before, and the weak problem (WICP), which requires "only" the prediction (at each time step) of the nodes that will be contaminated at the next time step (this is often the real life situation in which a contagion is observed and predictions are made in real time). Moreover, our focus is on analyzing the increasing accuracy of the solution, as a function of the number of contagion histories already observed. For simplicity, we discuss the simplest (deterministic and synchronous) contagion dynamics and the simplest solution algorithm, which we have applied to different network types. The main result of this paper is that the complex problem of the convergence of the ICP for a network can be reduced to an individual property of pairs of nodes: the "false link difficulty". By definition, given a pair of unlinked nodes i and j, the difficulty of the false link (i,j) is the probability that in a random contagion history, the nodes i and j are not contaminated at the same time step (or at consecutive time steps). In other words, the "false link difficulty" of a non-existing network link is the probability that the observations during a random contagion history would not rule out that link. This probability is relatively straightforward to calculate, and in most instances relies only on the relative positions of the two nodes (i,j) and not on the entire network structure. 
We have observed the distribution of false link difficulty for various network types, estimated it theoretically and confronted it (successfully) with the numerical simulations. Based on it, we estimated analytically the convergence of the ICP solution (as a function of the number of contagion histories observed), and found it to be in perfect agreement with simulation results. Finally, the most important insight we obtained is that the SICP and WICP have quite different properties: if one is interested only in the operational aspect of predicting how contagion will spread, the links which are most difficult to decide about are the least influential on the contagion dynamics. In other words, the parts of the network which are harder to reconstruct are also the least important for predicting the contagion dynamics, up to the point where a (large) constant number of false links in the network (i.e. non-convergence of the network reconstruction procedure) implies a zero rate of node contagion prediction errors (perfect convergence of the WICP). Thus, the contagion prediction problem (WICP) is very different in difficulty from the network reconstruction problem (SICP), insofar as links which are difficult to reconstruct are quite harmless in terms of contagion prediction capability (WICP).
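
    The "false link difficulty" can be estimated empirically along these lines; the small graph, the synchronous spread rule, and the seed sampling below are all invented for illustration, and we use the abstract's operational reading that a history does not rule out a candidate link when the two nodes' contagion times differ by at most one step.

```python
# Sketch: estimate the "false link difficulty" of unlinked node pairs from
# simulated contagion histories on a small, known graph (illustrative only).
import random
from itertools import combinations

random.seed(1)
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)}
nodes = range(6)
neigh = {n: {b for a, b in edges if a == n} | {a for a, b in edges if b == n}
         for n in nodes}

def contagion_times(seed):
    """Deterministic synchronous spread: a node is contaminated one step
    after any of its neighbours."""
    t = {seed: 0}
    step = 0
    while len(t) < len(nodes):
        step += 1
        for n in nodes:
            if n not in t and any(m in t and t[m] < step for m in neigh[n]):
                t[n] = step
    return t

histories = [contagion_times(random.choice(list(nodes))) for _ in range(200)]

def difficulty(i, j):
    """Fraction of histories that do NOT rule out the candidate link (i, j):
    i and j contaminated at the same or consecutive steps."""
    compatible = sum(1 for t in histories if abs(t[i] - t[j]) <= 1)
    return compatible / len(histories)

for i, j in combinations(nodes, 2):
    if (i, j) not in edges and (j, i) not in edges:
        print((i, j), round(difficulty(i, j), 2))
```

    A pair with difficulty near 1 is a false link that almost no observed history can reject, which is precisely the regime where, per the abstract, the link also barely matters for contagion prediction.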

  7. Problem-Solving during Shared Reading at Kindergarten

    ERIC Educational Resources Information Center

    Gosen, Myrte N.; Berenst, Jan; de Glopper, Kees

    2015-01-01

    This paper reports on a conversation analytic study of problem-solving interactions during shared reading at three kindergartens in the Netherlands. It illustrates how teachers and pupils discuss book characters' problems that arise in the events in the picture books. A close analysis of the data demonstrates that problem-solving interactions do…

  8. The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.

    NASA Astrophysics Data System (ADS)

    Loridan, V.; Ripoll, J. F.; De Vuyst, F.

    2017-12-01

    Much work has been done during the past 40 years to derive the analytical solution of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show an excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL proportional to L^6. We also give two expressions for the required number of eigenmodes N needed to get an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/sqrt(t0), where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also to Jupiter and Saturn are discussed.
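
    The governing equation can be sketched in the standard radiation-belt notation (a hedged reconstruction from the abstract, not copied from the paper):

```latex
% Radial diffusion with an L-dependent loss term: f is the phase-space
% density, D_{LL} \propto L^6 the diffusion coefficient, and \tau(L) the
% electron lifetime, approximated as piecewise constant on M subintervals.
\frac{\partial f}{\partial t}
  = L^2 \frac{\partial}{\partial L}\!\left(\frac{D_{LL}}{L^2}\,
    \frac{\partial f}{\partial L}\right) - \frac{f}{\tau(L)}
```

    With \tau(L) piecewise constant, separation of variables on each subinterval plus continuity of f and \partial f/\partial L at the M interfaces yields the Sturm-Liouville problem described in the abstract.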

  9. Dumbing down and the Politics of Neoliberalism in Film and/as Media Studies

    ERIC Educational Resources Information Center

    Ginsberg, Terri

    2003-01-01

    Film scholars are facing widespread pressures to desist from teaching modes of analytic and theoretical discourse that were once considered important to fostering critical understandings of moving-image culture, but which have since been denigrated as either too difficult (i.e., "elitist") or too controlling (i.e., "totalizing")--or both--in the…

  10. Creating a standardized and simplified cutting bill using group technology

    Treesearch

    Urs Buehlmann; Janice K. Wiedenbeck; R., Jr. Noble; D. Earl Kline

    2008-01-01

    From an analytical viewpoint, the relationship between rough mill cutting bill part requirements and lumber yield is highly complex. Part requirements can have almost any length, width, and quantity distribution within the boundaries set by physical limitations, such as maximum length and width of parts. This complexity makes it difficult to understand the specific...

  11. Counter-Colonial and Philosophical Claims: An Indigenous Observation of Western Philosophy

    ERIC Educational Resources Information Center

    Mika, Carl

    2015-01-01

    Providing an indigenous opinion on anything is a difficult task. To be sure, there is a multitude of possible indigenous responses to dominant Western philosophy. My aim in this paper is to assess dominant analytic Western philosophy in light of the general insistence of most indigenous authors that indigenous metaphysics is holistic, and to make…

  12. Using Multimodal Learning Analytics to Model Student Behaviour: A Systematic Analysis of Behavioural Framing

    ERIC Educational Resources Information Center

    Andrade, Alejandro; Delandshere, Ginette; Danish, Joshua A.

    2016-01-01

    One of the challenges many learning scientists face is the laborious task of coding large amounts of video data and consistently identifying social actions, which is time consuming and difficult to accomplish in a systematic and consistent manner. It is easier to catalog observable behaviours (e.g., body motions or gaze) without explicitly…

  13. Home Economics Reading Skills: Problems and Selected References.

    ERIC Educational Resources Information Center

    Cranney, A. Garr; And Others

    Home economics presents at least eight problems to secondary school reading teachers. These problems include poor readers, difficult reading material, lack of reading materials, teachers' lack of training in reading instruction, scarce information about home economics for reading teachers, diversity of the home economics field (requiring a wide…

  14. A Fiducial Approach to Extremes and Multiple Comparisons

    ERIC Educational Resources Information Center

    Wandler, Damian V.

    2010-01-01

    Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…

  15. Students' and Teachers' Conceptual Metaphors for Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Yee, Sean P.

    2017-01-01

    Metaphors are regularly used by mathematics teachers to relate difficult or complex concepts in classrooms. A complex topic of concern in mathematics education, and most STEM-based education classes, is problem solving. This study identified how students and teachers contextualize mathematical problem solving through their choice of metaphors.…

  16. Cognitive Development, Genetics Problem Solving, and Genetics Instruction: A Critical Review.

    ERIC Educational Resources Information Center

    Smith, Mike U.; Sims, O. Suthern, Jr.

    1992-01-01

    Review of literature concerning problem solving in genetics and Piagetian stage theory. Authors conclude the research suggests that formal-operational thought is not strictly required for the solution of the majority of classical genetics problems; however, some genetic concepts are difficult for concrete operational students to understand.…

  17. Analytical approximations to seawater optical phase functions of scattering

    NASA Astrophysics Data System (ADS)

    Haltrin, Vladimir I.

    2004-11-01

    This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. The three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to Petzold phase functions; and analytical representations that take into account dependencies between inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility and image propagation in natural waters of various turbidity.

  18. Labour Market Policies.

    ERIC Educational Resources Information Center

    Danielsen, Reidar

    Skilled labor has always been difficult to recruit, and in a tight labor market unskilled, low-paying jobs with low status are also difficult to fill. Recruitment from outside seems necessary to satisfy demands, but migration creates at least as many problems as it solves. The consumption of theoretical training through the university level (a…

  19. Professional Support for Families in Difficult Life Situations

    ERIC Educational Resources Information Center

    Zakirova, Venera G.; Gaysina, Guzel I.; Raykova, Elena

    2016-01-01

    The relevance of the problem stated in the article is determined by the presence of a significant number of families in difficult life situations who need professional support and socio-psychological assistance. The article aims to substantiate the effectiveness of the structural-functional model of professional support for families in difficult…

  20. End of the Day.

    ERIC Educational Resources Information Center

    Worth-Baker, Marcia

    2000-01-01

    Describes one seventh grade teacher's experiences with a student from a problem home who was known for his difficult behavior, noting the student's deep interest in the Trojan War and its circumstances, describing his death at age 16, and concluding that it is important to notice and cherish all students, even the difficult ones. (SM)

  1. Numerical Problems and Agent-Based Models for a Mass Transfer Course

    ERIC Educational Resources Information Center

    Murthi, Manohar; Shea, Lonnie D.; Snurr, Randall Q.

    2009-01-01

    Problems requiring numerical solutions of differential equations or the use of agent-based modeling are presented for use in a course on mass transfer. These problems were solved using the popular technical computing language MATLAB. Students were introduced to MATLAB via a problem with an analytical solution. A more complex problem to which no…

  2. Siewert solutions of transcendental equations, generalized Lambert functions and physical applications

    NASA Astrophysics Data System (ADS)

    Barsan, Victor

    2018-05-01

    Several classes of transcendental equations, mainly eigenvalue equations associated with non-relativistic quantum mechanical problems, are analyzed. Siewert's systematic approach to such equations is discussed from the perspective of new results recently obtained in the theory of generalized Lambert functions and of algebraic approximations of various special or elementary functions. Combining exact and approximate analytical methods, quite precise analytical results are obtained for apparently intractable problems. The results can be applied in quantum and classical mechanics, magnetism, elasticity, solar energy conversion, etc.
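
When a closed-form solution via generalized Lambert functions is unavailable, eigenvalue equations of this kind can still be solved numerically. As a hedged illustration (this is plain bisection, not Siewert's or Barsan's method, and the well-strength value is hypothetical), the sketch below brackets and bisects the lowest even-parity bound-state condition of a finite square well, z tan z = sqrt(z0^2 - z^2).

```python
import math

def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection root-finding; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0.0:
            b = m                    # root lies in the left half
        else:
            a, fa = m, fm            # root lies in the right half
    return 0.5 * (a + b)

z0 = 8.0  # dimensionless well-strength parameter (hypothetical value)

def f(z):
    # Even-parity bound-state condition for a finite square well
    return z * math.tan(z) - math.sqrt(z0 * z0 - z * z)

z1 = bisect(f, 1e-9, math.pi / 2 - 1e-9)  # lowest root lies in (0, pi/2)
```

Each higher root sits in its own branch of the tangent, so the same bracketing strategy extends to the full spectrum.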

  3. NMESys: An expert system for network fault detection

    NASA Technical Reports Server (NTRS)

    Nelson, Peter C.; Warpinski, Janet

    1991-01-01

    The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. Implementing a network with this many components is difficult enough; maintaining such a network is an even larger problem. A prototype network management expert system, NMESys, was implemented in the C Language Integrated Production System (CLIPS). NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard and potential failures and to minimize or eliminate user downtime in a large network.

  4. Formal methods technology transfer: Some lessons learned

    NASA Technical Reports Server (NTRS)

    Hamilton, David

    1992-01-01

    IBM has a long history in the application of formal methods to software development and verification. There have been many successes in the development of methods, tools, and training to support formal methods, and formal methods have been very successful on several projects. However, the use of formal methods has not been as widespread as hoped. This presentation summarizes several approaches that have been taken to encourage more widespread use of formal methods and discusses the results so far. The basic problem is one of technology transfer, which is a very difficult problem in general and even more difficult for formal methods. General problems of technology transfer, especially the transfer of formal methods technology, are also discussed. Finally, some prospects for the future are mentioned.

  5. The analysis method of the DRAM cell pattern hotspot

    NASA Astrophysics Data System (ADS)

    Lee, Kyusun; Lee, Kweonjae; Chang, Jinman; Kim, Taeheon; Han, Daehan; Hong, Aeran; Kim, Yonghyeon; Kang, Jinyoung; Choi, Bumjin; Lee, Joosung; Lee, Jooyoung; Hong, Hyeongsun; Lee, Kyupil; Jin, Gyoyoung

    2015-03-01

    It is increasingly difficult to determine the degree of completion of the patterning and its distribution in DRAM cell patterns. Research on DRAM device cell patterns currently faces three major problems. First, due to etch loading, it is difficult to predict potential defects. Second, due to under-layer topology, it is impossible to demonstrate the influence of a hotspot. Finally, it is extremely difficult to predict the final ACI pattern by photo simulation, because the current patterning process is double patterning technology, meaning the photo pattern is completely different from the final etch pattern. Therefore, if a hotspot occurs on a wafer, it is very difficult to find. CD-SEM is the most common pattern-measurement tool at semiconductor fabrication sites, and it is used primarily to measure small regions of a wafer pattern accurately; consequently, it offers no possibility of finding places where unpredictable defects occur. Although current defect detectors can measure a wide area, when every chip has the same pattern issue the detector cannot detect critical hotspots: the defect-detection algorithm of a bright-field machine is based on image processing, so if the same problem occurs on both the reference chip and the inspected chip, the machine cannot identify it. Moreover, such instruments cannot distinguish distribution differences of about 1 nm to 3 nm, so a defect detector has difficulty handling data for potential weak points far below the target CD. To solve these problems, another method is needed. In this paper, we introduce an analysis method for DRAM cell pattern hotspots.

  6. Observability during planetary approach navigation

    NASA Technical Reports Server (NTRS)

    Bishop, Robert H.; Burkhart, P. Daniel; Thurman, Sam W.

    1993-01-01

    The objective of the research is to develop an analytic technique to predict the relative navigation capability of different Earth-based radio navigation measurements. In particular, the problem is to determine the relative ability of geocentric range and Doppler measurements to detect the effects of the target planet gravitational attraction on the spacecraft during the planetary approach and near-encounter mission phases. A complete solution to the two-dimensional problem has been developed. Relatively simple analytic formulas are obtained for range and Doppler measurements which describe the observability content of the measurement data along the approach trajectories. An observability measure is defined which is based on the observability matrix for nonlinear systems. The results show good agreement between the analytic observability analysis and the computational batch processing method.

  7. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of the rotor bridges, it is difficult to build a fast and accurate analytical field-calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's laws, while the field in the stator slot, slot opening, and air gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening, and air gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF), and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it offers faster modeling, lower computational resource usage, and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  8. Matrix vapor deposition/recrystallization and dedicated spray preparation for high-resolution scanning microprobe matrix-assisted laser desorption/ionization imaging mass spectrometry (SMALDI-MS) of tissue and single cells.

    PubMed

    Bouschen, Werner; Schulz, Oliver; Eikel, Daniel; Spengler, Bernhard

    2010-02-01

    Matrix preparation techniques such as air spraying or vapor deposition were investigated with respect to lateral migration, integration of analyte into matrix crystals and achievable lateral resolution for the purpose of high-resolution biological imaging. The accessible mass range was found to be beyond 5000 u with sufficient analytical sensitivity. Gas-assisted spraying methods (using oxygen-free gases) provide a good compromise between crystal integration of analyte and analyte migration within the sample. Controlling preparational parameters with this method, however, is difficult. Separation of the preparation procedure into two steps, instead, leads to an improved control of migration and incorporation. The first step is a dry vapor deposition of matrix onto the investigated sample. In a second step, incorporation of analyte into the matrix crystal is enhanced by a controlled recrystallization of matrix in a saturated water atmosphere. With this latter method an effective analytical resolution of 2 microm in the x and y direction was achieved for scanning microprobe matrix-assisted laser desorption/ionization imaging mass spectrometry (SMALDI-MS). Cultured A-498 cells of human renal carcinoma were successfully investigated by high-resolution MALDI imaging using the new preparation techniques. Copyright 2010 John Wiley & Sons, Ltd.

  9. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    PubMed

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
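
Generative topographic mapping is a nonlinear latent-variable model; as a simplified stand-in, the sketch below uses a linear PCA projection (via SVD) to map synthetic 14-analyte records to two dimensions and flag records that fall far from the bulk of the map. The data, the injected errors, and the distance threshold are all hypothetical illustrations of the idea, not the study's method or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 500 patient records x 14 analytes (z-scored), with five
# records given one grossly aberrant analyte to mimic laboratory errors.
X = rng.normal(0.0, 1.0, size=(500, 14))
X[:5, 3] += 8.0

# Linear 2-D projection via PCA (the paper uses nonlinear GTM instead).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # 2-D coordinates, one point per record

# Records far from the bulk of the 2-D map are candidates for review.
dist = np.linalg.norm(Z - np.median(Z, axis=0), axis=1)
flagged = set(np.where(dist > np.percentile(dist, 99))[0])
```

Plotting `Z` as a scatter plot gives exactly the kind of two-dimensional overview the paper describes: anomalous multi-analyte records stand out visually even when no single analyte flag fires.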

  10. Understanding disordered systems through numerical simulation and algorithm development

    NASA Astrophysics Data System (ADS)

    Sweeney, Sean Michael

    Disordered systems arise in many physical contexts. Not all matter is uniform, and impurities or heterogeneities can be modeled by fixed random disorder. Numerous complex networks also possess fixed disorder, leading to applications in transportation systems, telecommunications, social networks, and epidemic modeling, to name a few. Due to their random nature and power-law critical behavior, disordered systems are difficult to study analytically. Numerical simulation can help overcome this hurdle by allowing for the rapid computation of system states. In order to get precise statistics and extrapolate to the thermodynamic limit, large systems must be studied over many realizations. Thus, innovative algorithm development is essential in order to reduce the memory or running-time requirements of simulations. This thesis presents a review of disordered systems, as well as a thorough study of two particular systems through numerical simulation, algorithm development and optimization, and careful statistical analysis of scaling properties. Chapter 1 provides a thorough overview of disordered systems, the history of their study in the physics community, and the development of techniques used to study them. Topics of quenched disorder, phase transitions, the renormalization group, criticality, and scale invariance are discussed. Several prominent models of disordered systems are also explained. Lastly, analysis techniques used in studying disordered systems are covered. In Chapter 2, minimal spanning trees on critical percolation clusters are studied, motivated in part by an analytic perturbation expansion by Jackson and Read that I check against numerical calculations. This system has a direct mapping to the ground state of the strongly disordered spin glass. We compute the path-length fractal dimension of these trees in dimensions d = {2, 3, 4, 5} and find our results to be compatible with the analytic results suggested by Jackson and Read.
    In Chapter 3, the random-bond Ising ferromagnet is studied, which is especially useful since it serves as a prototype for more complicated disordered systems such as the random-field Ising model and spin glasses. We investigate the effect that changing boundary spins has on the locations of domain walls in the interior of the random ferromagnet system. We provide an analytic proof that ground-state domain walls in the two-dimensional system are decomposable, and we map these domain walls to a shortest-paths problem. By implementing a multiple-source shortest-paths algorithm developed by Philip Klein, we are able to efficiently probe domain-wall locations for all possible configurations of boundary spins. We consider lattices with uncorrelated disorder, as well as disorder that is spatially correlated according to a power law. We present numerical results for the scaling exponent governing the probability that a domain wall can be induced that passes through a particular location in the system's interior, and we compare these results to previous results on the directed polymer problem.
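
The mapping of ground-state domain walls to a shortest-paths problem can be illustrated with an ordinary single-pair Dijkstra search on a grid whose random positive edge weights stand in for bond strengths (Klein's multiple-source algorithm, which answers all boundary configurations at once, is considerably more involved). The grid size and weight distribution here are hypothetical.

```python
import heapq
import random

def dijkstra(n, weights, source, target):
    """Shortest-path cost on an n x n grid graph with positive edge weights.
    weights[frozenset((u, v))] gives the cost of crossing between cells."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nd = d + weights[frozenset((u, v))]
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
    return float("inf")

random.seed(1)
n = 20
weights = {}
for x in range(n):
    for y in range(n):
        for v in ((x + 1, y), (x, y + 1)):
            if v[0] < n and v[1] < n:
                # random positive bond strengths play the role of |J_ij|
                weights[frozenset(((x, y), v))] = random.uniform(0.1, 1.0)

cost = dijkstra(n, weights, (0, 0), (n - 1, n - 1))
```

In the domain-wall picture, the minimum-cost path is the interface whose total broken-bond energy is smallest.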

  11. The program FANS-3D (finite analytic numerical simulation 3-dimensional) and its applications

    NASA Technical Reports Server (NTRS)

    Bravo, Ramiro H.; Chen, Ching-Jen

    1992-01-01

    In this study, the program named FANS-3D (Finite Analytic Numerical Simulation-3 Dimensional) is presented. FANS-3D was designed to solve problems of incompressible fluid flow and combined modes of heat transfer. It solves problems with conduction and convection modes of heat transfer in laminar flow, with provisions for radiation and turbulent flows. It can solve singular or conjugate modes of heat transfer. It also solves problems in natural convection, using the Boussinesq approximation. FANS-3D was designed to solve heat transfer problems inside one, two and three dimensional geometries that can be represented by orthogonal planes in a Cartesian coordinate system. It can solve internal and external flows using appropriate boundary conditions such as symmetric, periodic and user specified.

  12. Analytical Chemistry of Surfaces: Part II. Electron Spectroscopy.

    ERIC Educational Resources Information Center

    Hercules, David M.; Hercules, Shirley H.

    1984-01-01

    Discusses two surface techniques: X-ray photoelectron spectroscopy (ESCA) and Auger electron spectroscopy (AES). Focuses on fundamental aspects of each technique, important features of instrumentation, and some examples of how ESCA and AES have been applied to analytical surface problems. (JN)

  13. An Assessment of the Effect of Collaborative Groups on Students' Problem-Solving Strategies and Abilities

    ERIC Educational Resources Information Center

    Cooper, Melanie M.; Cox, Charles T., Jr.; Nammouz, Minory; Case, Edward; Stevens, Ronald

    2008-01-01

    Improving students' problem-solving skills is a major goal for most science educators. While a large body of research on problem solving exists, assessment of meaningful problem solving is very difficult, particularly for courses with large numbers of students in which one-on-one interactions are not feasible. We have used a suite of software…

  14. Solving Classical Insight Problems without Aha! Experience: 9 Dot, 8 Coin, and Matchstick Arithmetic Problems

    ERIC Educational Resources Information Center

    Danek, Amory H.; Wiley, Jennifer; Öllinger, Michael

    2016-01-01

    Insightful problem solving is a vital part of human thinking, yet very difficult to grasp. Traditionally, insight has been investigated by using a set of established "insight tasks," assuming that insight has taken place if these problems are solved. Instead of assuming that insight takes place during every solution of the 9 Dot, 8 Coin,…

  15. Practice-based evidence: profiling the safety of cilostazol by text-mining of clinical notes.

    PubMed

    Leeper, Nicholas J; Bauer-Mehren, Anna; Iyer, Srinivasan V; Lependu, Paea; Olson, Cliff; Shah, Nigam H

    2013-01-01

    Peripheral arterial disease (PAD) is a growing problem with few available therapies. Cilostazol is the only FDA-approved medication with a class I indication for intermittent claudication, but carries a black box warning due to concerns for increased cardiovascular mortality. To assess the validity of this black box warning, we employed a novel text-analytics pipeline to quantify the adverse events associated with Cilostazol use in a clinical setting, including patients with congestive heart failure (CHF). We analyzed the electronic medical records of 1.8 million subjects from the Stanford clinical data warehouse spanning 18 years using a novel text-mining/statistical analytics pipeline. We identified 232 PAD patients taking Cilostazol and created a control group of 1,160 PAD patients not taking this drug using 1:5 propensity-score matching. Over a mean follow-up of 4.2 years, we observed no association between Cilostazol use and any major adverse cardiovascular event including stroke (OR = 1.13, CI [0.82, 1.55]), myocardial infarction (OR = 1.00, CI [0.71, 1.39]), or death (OR = 0.86, CI [0.63, 1.18]). Cilostazol was not associated with an increase in any arrhythmic complication. We also identified a subset of CHF patients who were prescribed Cilostazol despite its black box warning, and found that it did not increase mortality in this high-risk group of patients. This proof-of-principle study shows the potential of text-analytics to mine clinical data warehouses to uncover 'natural experiments' such as the use of Cilostazol in CHF patients. We envision this method will have broad applications for examining difficult-to-test clinical hypotheses and to aid in post-marketing drug safety surveillance. Moreover, our observations argue for a prospective study to examine the validity of a drug safety warning that may be unnecessarily limiting the use of an efficacious therapy.
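
Odds ratios with confidence intervals of the form reported above can be computed from a 2x2 table with the standard Woolf log-normal approximation. The counts below are hypothetical and chosen only to illustrate a CI that straddles 1, as in the study's null findings; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log-normal) 95% CI for a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40/232 events among drug users vs 190/1160 controls
or_, lo, hi = odds_ratio_ci(40, 192, 190, 970)
```

A confidence interval that contains 1 is exactly the "no association" pattern reported for stroke, myocardial infarction, and death.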

  16. Practice-Based Evidence: Profiling the Safety of Cilostazol by Text-Mining of Clinical Notes

    PubMed Central

    Iyer, Srinivasan V.; LePendu, Paea; Olson, Cliff; Shah, Nigam H.

    2013-01-01

    Background Peripheral arterial disease (PAD) is a growing problem with few available therapies. Cilostazol is the only FDA-approved medication with a class I indication for intermittent claudication, but carries a black box warning due to concerns for increased cardiovascular mortality. To assess the validity of this black box warning, we employed a novel text-analytics pipeline to quantify the adverse events associated with Cilostazol use in a clinical setting, including patients with congestive heart failure (CHF). Methods and Results We analyzed the electronic medical records of 1.8 million subjects from the Stanford clinical data warehouse spanning 18 years using a novel text-mining/statistical analytics pipeline. We identified 232 PAD patients taking Cilostazol and created a control group of 1,160 PAD patients not taking this drug using 1∶5 propensity-score matching. Over a mean follow up of 4.2 years, we observed no association between Cilostazol use and any major adverse cardiovascular event including stroke (OR = 1.13, CI [0.82, 1.55]), myocardial infarction (OR = 1.00, CI [0.71, 1.39]), or death (OR = 0.86, CI [0.63, 1.18]). Cilostazol was not associated with an increase in any arrhythmic complication. We also identified a subset of CHF patients who were prescribed Cilostazol despite its black box warning, and found that it did not increase mortality in this high-risk group of patients. Conclusions This proof of principle study shows the potential of text-analytics to mine clinical data warehouses to uncover ‘natural experiments’ such as the use of Cilostazol in CHF patients. We envision this method will have broad applications for examining difficult to test clinical hypotheses and to aid in post-marketing drug safety surveillance. Moreover, our observations argue for a prospective study to examine the validity of a drug safety warning that may be unnecessarily limiting the use of an efficacious therapy. PMID:23717437

  17. Experimental issues related to frequency response function measurements for frequency-based substructuring

    NASA Astrophysics Data System (ADS)

    Nicgorski, Dana; Avitabile, Peter

    2010-07-01

    Frequency-based substructuring is a very popular approach for the generation of system models from component measured data. Analytically, the approach has been shown to produce accurate results. However, implementation with actual test data can cause difficulties and lead to problems with the system response prediction. In order to produce good results, extreme care is needed in the measurement of the drive-point and transfer impedances of the structure, as well as in observing all the conditions for a linear time-invariant system. Several studies have been conducted to show the sensitivity of the technique to small variations that often occur during typical testing of structures. These variations have been observed in actual tested configurations and have been substantiated with analytical models to replicate the problems typically encountered. The use of analytically simulated issues helps to clearly see the effects of typical measurement difficulties often observed in test data. This paper presents some of these common problems and provides guidance and recommendations for data to be used with this modeling approach.

  18. An analytic-geometric model of the effect of spherically distributed injection errors for Galileo and Ulysses spacecraft - The multi-stage problem

    NASA Technical Reports Server (NTRS)

    Longuski, James M.; Mcronald, Angus D.

    1988-01-01

    In previous work, the problem of injecting the Galileo and Ulysses spacecraft from low Earth orbit into their respective interplanetary trajectories was discussed for the single-stage (Centaur) vehicle. The central issue, in the event of spherically distributed injection errors, is what happens to the vehicle. The difficulties addressed in this paper involve the multi-stage problem, since both Galileo and Ulysses will be utilizing the two-stage IUS system; Ulysses will also include a third stage, the PAM-S. The solution is expressed in terms of probabilities for the total percentages of escape, orbit-decay, and reentry trajectories. Analytic solutions are found for Hill's equations of relative motion (more recently called the Clohessy-Wiltshire equations) for multi-stage injections. These solutions are interpreted geometrically on the injection sphere. The analytic-geometric models compare well with numerical solutions, provide insight into the behavior of trajectories mapped on the injection sphere, and simplify the numerical two-dimensional search for trajectory families.

  19. An analytical method for the inverse Cauchy problem of Lame equation in a rectangle

    NASA Astrophysics Data System (ADS)

    Grigor’ev, Yu

    2018-04-01

    In this paper, we present an analytical computational method for the inverse Cauchy problem of the Lame equation in elasticity theory. A rectangular domain is frequently used in engineering structures, and we only consider the analytical solution in a two-dimensional rectangle, wherein a missing boundary condition is recovered from the full measurement of stresses and displacements on an accessible boundary. The essence of the method consists in solving three independent Cauchy problems for the Laplace and Poisson equations. For each of them, a Fourier series is used to formulate a first-kind Fredholm integral equation for the unknown data function. Then we use a Lavrentiev regularization method, and the termwise separable property of the kernel function allows us to obtain a closed-form regularized solution. As a result, for the displacement components, we obtain solutions in the form of a sum of series with three regularization parameters. The uniform convergence and error estimation of the regularized solutions are proved.
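
A minimal numerical sketch of Lavrentiev regularization for a first-kind Fredholm equation, assuming a symmetric positive kernel K(x, s) = min(x, s) and exact data generated from a known solution (this toy kernel and test function are illustrative, not the paper's Lame-equation kernels): the regularized system (alpha*I + K) f = g is solved after trapezoid discretization.

```python
import numpy as np

# Toy first-kind problem: integral_0^1 K(x, s) f(s) ds = g(x), with
# K(x, s) = min(x, s) (symmetric, positive semidefinite). Choosing
# f_true(s) = s gives g(x) = x/2 - x**3/6 in closed form.
n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5                                   # trapezoid weights
K = np.minimum.outer(x, x) * w                 # discretized integral operator
g = x / 2.0 - x**3 / 6.0

alpha = 1e-4                                   # Lavrentiev regularization parameter
f = np.linalg.solve(alpha * np.eye(n) + K, g)  # (alpha*I + K) f = g
```

Shrinking `alpha` trades stability for accuracy: too large oversmooths the recovered `f`, while too small lets noise in `g` (here absent) be amplified by the small eigenvalues of `K`.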

  20. Axisymmetric capillary-gravity waves at the interface of two viscous, immiscible fluids - Initial value problem

    NASA Astrophysics Data System (ADS)

    Farsoiya, Palas Kumar; Dasgupta, Ratul

    2017-11-01

    When the interface between two radially unbounded viscous fluids, lying vertically at rest in a stable configuration (denser fluid below), is perturbed, radially propagating capillary-gravity waves are formed which damp out with time. We study this process analytically using a recently developed linearised theory. For small-amplitude initial perturbations, the analytical solution to the initial value problem, represented as a linear superposition of Bessel modes at time t = 0, is found to agree very well with results obtained from direct numerical simulations of the Navier-Stokes equations, for a range of initial conditions. Our study extends the earlier work of John W. Miles, who studied this initial value problem analytically taking into account a single viscous fluid only. Implications of this study for the mechanistic understanding of droplet impact into a deep pool will be discussed. Some preliminary, qualitative comparison with experiments will also be presented. We thank SERB, Dept. of Science & Technology, Govt. of India, Grant No. EMR/2016/000830 for financial support.

  1. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
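
The noise-driven energy minimization these networks perform is conceptually close to simulated annealing. The sketch below (a conventional Metropolis sampler on CPU, not a spiking network) anneals a small graph 3-coloring constraint satisfaction problem, an odd cycle, toward zero constraint violations; the graph, schedule, and step count are illustrative choices.

```python
import math
import random

def color_energy(colors, edges):
    """Energy = number of violated constraints (monochromatic edges)."""
    return sum(1 for u, v in edges if colors[u] == colors[v])

def anneal_coloring(n_nodes, edges, n_colors, steps=20000, seed=0):
    """Metropolis-style stochastic search with a linear cooling schedule."""
    rng = random.Random(seed)
    colors = [rng.randrange(n_colors) for _ in range(n_nodes)]
    e = color_energy(colors, edges)
    for step in range(steps):
        t = max(0.01, 2.0 * (1.0 - step / steps))  # temperature: 2.0 -> 0.01
        node = rng.randrange(n_nodes)
        old = colors[node]
        colors[node] = rng.randrange(n_colors)
        new_e = color_energy(colors, edges)
        if new_e > e and rng.random() >= math.exp((e - new_e) / t):
            colors[node] = old   # reject uphill move
        else:
            e = new_e            # accept (downhill or lucky uphill)
    return colors, e

# 3-color a 9-node ring (an odd cycle, so 2 colors cannot suffice)
edges = [(i, (i + 1) % 9) for i in range(9)]
colors, e = anneal_coloring(9, edges, 3)
```

As in the paper, the noise is what lets the search escape local minima; the spiking implementation replaces the explicit temperature with stochastic neural firing.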

  2. MICROBIAL SOURCE TRACKING

    EPA Science Inventory

    Fecal contamination of waters used for recreation, drinking water, and aquaculture is an environmental problem and poses significant human health risks. The problem is often difficult to correct because the source of the contamination cannot be determined with certainty. Run-of...

  3. Evolving neural networks for strategic decision-making problems.

    PubMed

    Kohl, Nate; Miikkulainen, Risto

    2009-04-01

    Evolution of neural networks, or neuroevolution, has been a successful approach to many low-level control problems such as pole balancing, vehicle control, and collision warning. However, certain types of problems, such as those involving strategic decision-making, have remained difficult for neuroevolution to solve. This paper evaluates the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. A method for measuring fracture using the concept of function variation is proposed and, based on this concept, two methods for dealing with fracture are examined: neurons with local receptive fields, and refinement based on a cascaded network architecture. Experiments in several benchmark domains are performed to evaluate how different levels of fracture affect the performance of neuroevolution methods, demonstrating that these two modifications improve performance significantly. These results form a promising starting point for expanding neuroevolution to strategic tasks.

  4. Municipalities' Priority Problems and Prospect of Establishing Ordinance to Measures for Marginal Hamlets

    NASA Astrophysics Data System (ADS)

    Nakanishi, Mayumi; Hoshino, Satoshi; Hashimoto, Shizuka; Kuki, Yasuaki

    The problems of marginal hamlets, in which more than half of the population is over 65 and community-based life is difficult, are getting worse. To contribute to effective policy making, we conducted a questionnaire survey of members of the National Liaison Council of ‘Suigen no Sato’, constituted to share information about problems and effective countermeasures for marginal hamlets. Our study clarified, first, that most respondents had common problems such as a lack of job opportunities and animal damage to farms, and second, that though most respondents recognized the effectiveness of selecting target communities in policy implementation, it is difficult for municipal governments to establish such an ordinance when councilors and residents outside the targeted areas would not agree with it. Finally, we pointed out the roles of national and prefectural governments in helping municipal governments effectively cope with such entangled situations.

  5. Analytic and heuristic processes in the detection and resolution of conflict.

    PubMed

    Ferreira, Mário B; Mata, André; Donkin, Christopher; Sherman, Steven J; Ihmels, Max

    2016-10-01

    Previous research with the ratio-bias task found larger response latencies for conflict trials, where the heuristic- and analytic-based responses are assumed to be in opposition (e.g., choosing between 1/10 and 9/100 ratios of success), than for no-conflict trials, where both processes converge on the same response (e.g., choosing between 1/10 and 11/100). This pattern is consistent with parallel dual-process models, which assume that there is effective, rather than lax, monitoring of the output of heuristic processing. It is, however, unclear why conflict resolution sometimes fails. Ratio-biased choices may increase because of a decline in analytical reasoning (leaving heuristic-based responses unopposed) or because of a rise in heuristic processing (making it more difficult for analytic processes to override the heuristic preferences). Using the process-dissociation procedure, we found that instructions to respond logically and response speed affected analytic (controlled) processing (C), leaving heuristic processing (H) unchanged, whereas the intuitive preference for large numerators (as assessed by responses to equal ratio trials) affected H but not C. These findings create new challenges to the debate between dual-process and single-process accounts, which are discussed.
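
The process-dissociation estimates of C and H can be sketched with the standard Jacoby-style algebra as it is commonly adapted to conflict/no-conflict designs; the two equations below are that general scheme, and the probabilities plugged in are hypothetical, not the study's data.

```python
def process_dissociation(p_correct_noconflict, p_heuristic_conflict):
    """Estimate analytic (C) and heuristic (H) contributions from the
    standard process-dissociation equations (illustrative values only):

        no-conflict trials:  P(correct)   = C + (1 - C) * H
        conflict trials:     P(heuristic) = (1 - C) * H
    """
    C = p_correct_noconflict - p_heuristic_conflict
    H = p_heuristic_conflict / (1 - C)
    return C, H

# Hypothetical accuracy on no-conflict trials and heuristic-choice
# rate on conflict trials:
C, H = process_dissociation(0.95, 0.30)
print(round(C, 3), round(H, 3))  # 0.65 0.857
```

With these numbers, a manipulation that lowers no-conflict accuracy while leaving the conflict heuristic rate fixed moves C but not H, which is the dissociation pattern the abstract describes.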

  6. A Comparison of Geometry Problems in Middle-Grade Mathematics Textbooks from Taiwan, Singapore, Finland, and the United States

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Tseng, Yi-Kuan; Wang, Tzu-Ling

    2017-01-01

    This study analyzed geometry problems in four middle-grade mathematics textbook series from Taiwan, Singapore, Finland, and the United States, while exploring the expectations for students' learning experiences with these problems. An analytical framework developed for mathematics textbook problem analysis had three dimensions: representation…

  7. The effect of question format and task difficulty on reasoning strategies and diagnostic performance in Internal Medicine residents.

    PubMed

    Heemskerk, Laura; Norman, Geoff; Chou, Sophia; Mintz, Marcy; Mandin, Henry; McLaughlin, Kevin

    2008-11-01

    Previous studies have suggested an association between reasoning strategies and diagnostic success, but the influence on this relationship of variables such as question format and task difficulty has not been studied. Our objective was to study the association between question format, task difficulty, reasoning strategies and diagnostic success. Study participants were 13 Internal Medicine residents at the University of Calgary. Each was given eight problem-solving questions in four clinical presentations and was randomized to groups that differed only in the question format, such that a question presented as short answer (SA) to the first group was presented as extended matching (EM) to the second group. There were equal numbers of SA/EM questions and straightforward/difficult tasks. Participants thought aloud during diagnostic reasoning. Data were analyzed using multiple logistic regression. Question format was associated with reasoning strategies; hypothetico-deductive reasoning was used more frequently on EM questions and scheme-inductive reasoning on SA questions. For SA questions, non-analytic reasoning alone was used more frequently to answer straightforward cases than difficult cases, whereas for EM questions no such association was observed. The EM format and straightforward tasks increased the odds of diagnostic success, whereas hypothetico-deductive reasoning was associated with reduced odds of success. Question format and task difficulty both influence diagnostic reasoning strategies, and studies that examine the effect of reasoning strategies on diagnostic success should control for these effects. Further studies are needed to investigate the effect of reasoning strategies on performance of different groups of learners.

  8. A Widely Applicable Silver Sol for TLC Detection with Rich and Stable SERS Features.

    PubMed

    Zhu, Qingxia; Li, Hao; Lu, Feng; Chai, Yifeng; Yuan, Yongfang

    2016-12-01

    Thin-layer chromatography (TLC) coupled with surface-enhanced Raman spectroscopy (SERS) has gained tremendous popularity in the study of various complex systems. However, the detection of hydrophobic analytes is difficult, and the specificity still needs to be improved. In this study, a SERS-active non-aqueous silver sol which could activate the analytes to produce rich and stable spectral features was rapidly synthesized. Then, the optimized silver nanoparticles (AgNPs)-DMF sol was employed for TLC-SERS detection of hydrophobic (and also hydrophilic) analytes. SERS performance of this sol was superior to that of traditional Lee-Meisel AgNPs due to its high specificity, acceptable stability, and wide applicability. The non-aqueous AgNPs would be suitable for the TLC-SERS method, which shows great promise for applications in food safety assurance, environmental monitoring, medical diagnoses, and many other fields.

  9. A Widely Applicable Silver Sol for TLC Detection with Rich and Stable SERS Features

    NASA Astrophysics Data System (ADS)

    Zhu, Qingxia; Li, Hao; Lu, Feng; Chai, Yifeng; Yuan, Yongfang

    2016-04-01

    Thin-layer chromatography (TLC) coupled with surface-enhanced Raman spectroscopy (SERS) has gained tremendous popularity in the study of various complex systems. However, the detection of hydrophobic analytes is difficult, and the specificity still needs to be improved. In this study, a SERS-active non-aqueous silver sol which could activate the analytes to produce rich and stable spectral features was rapidly synthesized. Then, the optimized silver nanoparticles (AgNPs)-DMF sol was employed for TLC-SERS detection of hydrophobic (and also hydrophilic) analytes. SERS performance of this sol was superior to that of traditional Lee-Meisel AgNPs due to its high specificity, acceptable stability, and wide applicability. The non-aqueous AgNPs would be suitable for the TLC-SERS method, which shows great promise for applications in food safety assurance, environmental monitoring, medical diagnoses, and many other fields.

  10. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
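
A minimal sketch of the kind of bootstrap-based probabilistic sensitivity analysis described above, assuming hypothetical patient-level cost and cure outcomes for two unnamed strategies (none of these numbers come from the paper): each bootstrap replicate resamples both arms with replacement and recomputes the incremental cost-effectiveness ratio, yielding an empirical interval rather than one derived from a theoretical distribution.

```python
import random

random.seed(0)

# Hypothetical patient-level (cost, cured) outcomes for two strategies
strategy_a = [(120, 1), (120, 0), (130, 1), (115, 1), (125, 1), (118, 0)] * 20
strategy_b = [(80, 1), (85, 0), (90, 0), (82, 1), (88, 1), (84, 0)] * 20

def mean(xs):
    return sum(xs) / len(xs)

def bootstrap_icer(a, b, n_boot=2000):
    """Bootstrap the incremental cost-effectiveness ratio of A vs. B."""
    icers = []
    for _ in range(n_boot):
        ra = random.choices(a, k=len(a))  # resample arm A with replacement
        rb = random.choices(b, k=len(b))  # resample arm B with replacement
        d_cost = mean([c for c, _ in ra]) - mean([c for c, _ in rb])
        d_eff = mean([e for _, e in ra]) - mean([e for _, e in rb])
        if d_eff != 0:
            icers.append(d_cost / d_eff)
    icers.sort()
    k = len(icers) // 40  # ~2.5% in each tail
    return icers[k], icers[-k - 1]

low, high = bootstrap_icer(strategy_a, strategy_b)
print(low, high)  # empirical ~95% interval for cost per extra cure
```

The point of the bootstrap here is that the sampling distribution of the ICER is awkward analytically (the denominator can approach zero), so resampling the observed data sidesteps the distributional assumptions a parametric Monte Carlo would require.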

  11. A new multi-step technique with differential transform method for analytical solution of some nonlinear variable delay differential equations.

    PubMed

    Benhammouda, Brahim; Vazquez-Leal, Hector

    2016-01-01

    This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general-purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with any analytical method, other than the DTM, like the homotopy perturbation method.
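
The method of steps combined with the differential transform (Taylor-coefficient) recurrence can be sketched on the simple constant-delay equation y'(t) = y(t-1) with history y(t) = 1 for t <= 0; this toy uses a constant rather than variable delay, so it only illustrates the mechanics, not the paper's full method. On each unit interval, the delayed term is a known polynomial from the previous interval, and the DTM recurrence (k+1)·Y(k+1) = Y_prev(k) generates the new Taylor coefficients.

```python
def step_dtm(history, order=10):
    """Advance y'(t) = y(t-1) by one unit interval via the DTM recurrence.
    `history` holds Taylor coefficients of y on the previous interval,
    in the local variable s in [0, 1]."""
    a = [sum(history)]  # continuity: new y(0) = previous y evaluated at s = 1
    for k in range(order):
        hk = history[k] if k < len(history) else 0.0
        a.append(hk / (k + 1))  # (k+1) * a[k+1] = history[k]
    return a

def evaluate(coeffs, s):
    return sum(c * s**k for k, c in enumerate(coeffs))

coeffs = [1.0]              # y(t) = 1 for t <= 0 (constant history)
for n in range(2):          # advance over [0, 1] and then [1, 2]
    coeffs = step_dtm(coeffs)

print(evaluate(coeffs, 1.0))  # y(2) = 3.5
```

Integrating by hand confirms the result: on [0, 1], y = 1 + t, so y(1) = 2; on [1, 2], y = 2 + (t-1) + (t-1)²/2, so y(2) = 3.5, exactly what the coefficient recurrence produces.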

  12. Factors of Problem-Solving Competency in a Virtual Chemistry Environment: The Role of Metacognitive Knowledge about Strategies

    ERIC Educational Resources Information Center

    Scherer, Ronny; Tiemann, Rudiger

    2012-01-01

    The ability to solve complex scientific problems is regarded as one of the key competencies in science education. Until now, research on problem solving focused on the relationship between analytical and complex problem solving, but rarely took into account the structure of problem-solving processes and metacognitive aspects. This paper,…

  13. Teaching Analytical Thinking

    ERIC Educational Resources Information Center

    Behn, Robert D.; Vaupel, James W.

    1976-01-01

    Description of the philosophy and general nature of a course at Drake University that emphasizes basic concepts of analytical thinking, including think, decompose, simplify, specify, and rethink problems. Some sample homework exercises are included. The journal is available from University of California Press, Berkeley, California 94720.…

  14. BIOMOLECULAR SENSING FOR BIOLOGICAL PROCESSES AND ENVIRONMENTAL MONITORING APPLICATIONS

    EPA Science Inventory

    Biomolecular recognition is being increasingly employed as the basis for a variety of analytical methods such as biosensors. The sensitivity, selectivity, and format versatility inherent in these methods may allow them to be adapted to solving a number of analytical problems. Altho...

  15. Fast and Efficient Discrimination of Traveling Salesperson Problem Stimulus Difficulty

    ERIC Educational Resources Information Center

    Dry, Matthew J.; Fontaine, Elizabeth L.

    2014-01-01

    The Traveling Salesperson Problem (TSP) is a computationally difficult combinatorial optimization problem. In spite of its relative difficulty, human solvers are able to generate close-to-optimal solutions in a close-to-linear time frame, and it has been suggested that this is due to the visual system's inherent sensitivity to certain geometric…

  16. The Software Problem.

    ERIC Educational Resources Information Center

    Walker, Decker F.

    This paper addresses the reasons that it is difficult to find good educational software and proposes measures for coping with this problem. The fundamental problem is a shortage of educational software that can be used as a major part of the teaching of academic subjects in elementary and secondary schools--a shortage that is both the effect and…

  17. Why Inquiry Is Inherently Difficult...and Some Ways to Make It Easier

    ERIC Educational Resources Information Center

    Meyer, Daniel Z.; Avery, Leanne M.

    2010-01-01

    In this article, the authors offer a framework that identifies two critical problems in designing inquiry-based instruction and suggests three models for developing instruction that overcomes those problems. The Protocol Model overcomes the Getting on Board Problem by providing students an initial experience through clearly delineated steps with a…

  18. How to Arrive at Good Research Questions?

    ERIC Educational Resources Information Center

    Gafoor, K. Abdul

    2008-01-01

    Identifying an area of research and a topic, deciding on a problem, and formulating it into a researchable question are very difficult stages in the whole research process, at least for beginners. Few books on research methodology elaborate the various processes involved in problem selection and clarification. Viewing research and problem selection as…

  19. Integrating Worked Examples into Problem Posing in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Hsiao, Ju-Yuan; Hung, Chun-Ling; Lan, Yu-Feng; Jeng, Yoau-Chau

    2013-01-01

    Most students lack experience with problem posing and perceive it as difficult. The study hypothesized that worked examples may have benefits for supporting students' problem-posing activities. A quasi-experiment was conducted in the context of a business mathematics course to examine the effects of integrating worked examples into…

  20. Two solvable problems of planar geometrical optics.

    PubMed

    Borghero, Francesco; Bozis, George

    2006-12-01

    In the framework of geometrical optics we consider a two-dimensional transparent inhomogeneous isotropic medium (dispersive or not). We show that (i) for any family belonging to a certain class of planar monoparametric families of monochromatic light rays given in the form f(x,y)=c of any definite color and satisfying a differential condition, all the refractive index profiles n=n(x,y) allowing for the creation of the given family can be found analytically (inverse problem) and that (ii) for any member of a class of two-dimensional refractive index profiles n=n(x,y) satisfying a differential condition, all the compatible families of light rays can be found analytically (direct problem). We present appropriate examples.

  1. Simplified computational methods for elastic and elastic-plastic fracture problems

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.

    1992-01-01

    An overview is given of some of the recent (1984-1991) developments in computational/analytical methods in the mechanics of fractures. Topics covered include analytical solutions for elliptical or circular cracks embedded in isotropic or transversely isotropic solids, with crack faces being subjected to arbitrary tractions; finite element or boundary element alternating methods for two or three dimensional crack problems; a 'direct stiffness' method for stiffened panels with flexible fasteners and with multiple cracks; multiple site damage near a row of fastener holes; an analysis of cracks with bonded repair patches; methods for the generation of weight functions for two and three dimensional crack problems; and domain-integral methods for elastic-plastic or inelastic crack mechanics.

  2. Behavior analytic approaches to problem behavior in intellectual disabilities.

    PubMed

    Hagopian, Louis P; Gregory, Meagan K

    2016-03-01

    The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.

  3. Class and Home Problems. The Lambert W Function in Ultrafiltration and Diafiltration

    ERIC Educational Resources Information Center

    Foley, Greg

    2016-01-01

    Novel analytical solutions based on the Lambert W function for two problems in ultrafiltration and diafiltration are described. Example problems, suitable for incorporation into an introductory module in unit operations, membrane processing, or numerical methods are provided in each case.

  4. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  5. The politics of insight

    PubMed Central

    Salvi, Carola; Cristofori, Irene; Grafman, Jordan; Beeman, Mark

    2016-01-01

    Previous studies showed that liberals and conservatives differ in cognitive style. Liberals are more flexible and tolerant of complexity and novelty, whereas conservatives are more rigid, more resistant to change, and prefer clear answers. We administered a set of compound remote associate problems, a task extensively used to differentiate problem-solving styles (via insight or analysis). Using this task, several studies have shown that self-reports distinguishing insight from analytic problem-solving are reliable and are associated with two different neural circuits. In our research we found that participants self-identifying with distinct political orientations demonstrated differences in problem-solving strategy. Liberals solved significantly more problems via insight instead of in a step-by-step analytic fashion. Our findings extend previous observations that self-identified political orientations reflect differences in cognitive styles. More specifically, we show that type of political orientation is associated with problem-solving strategy. The data converge with previous neurobehavioural and cognitive studies indicating a link between cognitive style and the psychological mechanisms that mediate political beliefs. PMID:26810954

  6. The politics of insight.

    PubMed

    Salvi, Carola; Cristofori, Irene; Grafman, Jordan; Beeman, Mark

    2016-01-01

    Previous studies showed that liberals and conservatives differ in cognitive style. Liberals are more flexible and tolerant of complexity and novelty, whereas conservatives are more rigid, more resistant to change, and prefer clear answers. We administered a set of compound remote associate problems, a task extensively used to differentiate problem-solving styles (via insight or analysis). Using this task, several studies have shown that self-reports distinguishing insight from analytic problem-solving are reliable and are associated with two different neural circuits. In our research we found that participants self-identifying with distinct political orientations demonstrated differences in problem-solving strategy. Liberals solved significantly more problems via insight instead of in a step-by-step analytic fashion. Our findings extend previous observations that self-identified political orientations reflect differences in cognitive styles. More specifically, we show that type of political orientation is associated with problem-solving strategy. The data converge with previous neurobehavioural and cognitive studies indicating a link between cognitive style and the psychological mechanisms that mediate political beliefs.

  7. Identifying Common Mathematical Misconceptions from Actions in Educational Video Games. CRESST Report 838

    ERIC Educational Resources Information Center

    Kerr, Deirdre

    2014-01-01

    Educational video games provide an opportunity for students to interact with and explore complex representations of academic content and allow for the examination of problem-solving strategies and mistakes that can be difficult to capture in more traditional environments. However, data from such games are notoriously difficult to analyze. This…

  8. Overcoming Misconceptions in Neurophysiology Learning: An Approach Using Color-Coded Animations

    ERIC Educational Resources Information Center

    Guy, Richard

    2012-01-01

    Anyone who has taught neurophysiology would be aware of recurring concepts that students find difficult to understand. However, a greater problem is the development of misconceptions that may be difficult to change. For example, one common misconception is that action potentials pass directly across chemical synapses. Difficulties may be…

  9. Training Teachers To Work in Schools Considered Difficult. Fundamentals of Educational Planning Series, Number 59.

    ERIC Educational Resources Information Center

    Auduc, Jean-Louis

    This book outlines challenges involved in ensuring that teacher training becomes the gateway to implementation of appropriate strategies for students to achieve and for managing the problems of authority, discipline, and aggressive behavior. The six chapters examine: (1) "Teaching in Schools or Classes Considered Difficult: A Contemporary…

  10. Reflectivity of crack sealant.

    DOT National Transportation Integrated Search

    2002-01-01

    Crack sealing is used in road maintenance but presents a problem when crack seal material visually pops out on the roadway, making it difficult to see lane stripes. This problem will increase as New Mexico increases its use of crack sealants. This su...

  11. The structure of common emotion regulation strategies: A meta-analytic examination.

    PubMed

    Naragon-Gainey, Kristin; McMahon, Tierney P; Chacko, Thomas P

    2017-04-01

    Emotion regulation has been examined extensively with regard to important outcomes, including psychological and physical health. However, the literature includes many different emotion regulation strategies but little examination of how they relate to one another, making it difficult to interpret and synthesize findings. The goal of this meta-analysis was to examine the underlying structure of common emotion regulation strategies (i.e., acceptance, behavioral avoidance, distraction, experiential avoidance, expressive suppression, mindfulness, problem solving, reappraisal, rumination, worry), and to evaluate this structure in light of theoretical models of emotion regulation. We also examined how distress tolerance, an important emotion regulation ability, relates to strategy use. We conducted meta-analyses estimating the correlations between emotion regulation strategies (based on 331 samples and 670 effect sizes), as well as between distress tolerance and strategies. The resulting meta-analytic correlation matrix was submitted to confirmatory and exploratory factor analyses. None of the confirmatory models, based on prior theory, was an acceptable fit to the data. Exploratory factor analysis suggested that 3 underlying factors best characterized these data. Two factors, labeled Disengagement and Aversive Cognitive Perseveration, emerged as strongly correlated but distinct factors, with the latter consisting of putatively maladaptive strategies. The third factor, Adaptive Engagement, was a less unified factor and weakly related to the other 2 factors. Distress tolerance was most closely associated with low levels of repetitive negative thought and experiential avoidance, and high levels of acceptance and mindfulness. We discuss the theoretical implications of these findings and applications to emotion regulation assessment.
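
The pipeline described, submitting a meta-analytic correlation matrix to exploratory factor analysis, can be sketched with a small hypothetical matrix (the strategy labels and all values below are invented for illustration and are not the study's estimates); here factors are extracted by eigendecomposition with the Kaiser eigenvalue-greater-than-one retention rule.

```python
import numpy as np

# Hypothetical meta-analytic correlation matrix for four strategies:
# rumination, worry, distraction, reappraisal (illustrative values only)
R = np.array([
    [ 1.00,  0.55, 0.10, -0.20],
    [ 0.55,  1.00, 0.05, -0.15],
    [ 0.10,  0.05, 1.00,  0.40],
    [-0.20, -0.15, 0.40,  1.00],
])

# Exploratory factor extraction via eigendecomposition of R,
# retaining factors whose eigenvalue exceeds 1 (Kaiser criterion)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_factors = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

print(n_factors, "factors retained")
print(np.round(loadings, 2))
```

With this toy matrix the two correlated blocks (rumination/worry and distraction/reappraisal) each yield one retained factor, mirroring in miniature how a structure like Aversive Cognitive Perseveration versus Adaptive Engagement would fall out of a real matrix.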

  12. Free-Suspension Residual Flexibility Testing of Space Station Pathfinder: Comparison to Fixed-Base Results

    NASA Technical Reports Server (NTRS)

    Tinker, Michael L.

    1998-01-01

    Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.

  13. Comparative analysis of the structure of carbon materials relevant in combustion.

    PubMed

    Apicella, B; Barbella, R; Ciajolo, A; Tregrossi, A

    2003-06-01

    The determination of the structure of carbon materials is an analytical problem that joins the research communities involved in the chemical characterization of heavy fuel-derived products (heavy fuel oils, coal-derived fuels, shale oil, etc.) and of carbon materials (polycyclic aromatic compounds, tar, soot) produced in many combustion processes. Knowledge of the structure of these "difficult" fuels and of the carbon materials produced by incomplete combustion is relevant to research on the lowest-environmental-impact operation of combustion systems, but an array of analytical and spectroscopic tools is necessary, and often not sufficient, to characterize such complex products and in particular to determine the distribution of molecular masses. In this paper, size exclusion chromatography using N-methyl-pyrrolidinone as eluent has been applied to the characterization of different carbon materials, from typical commercially available carbon species such as polyacenaphthylene, carbon black, and naphthalene pitch up to combustion products such as soot and soot extract collected in fuel-rich combustion systems. Two main fractions were detected and separated, and their molecular weights (MWs) were determined by comparison with polystyrene standards: the first fraction consisted of particles with very large molecular masses (>100000 u); the second consisted of species in a relatively small MW range (200-600 u). The distribution of these fractions changes depending on the characteristics of the carbon sample. Fluorescence spectroscopy applied to the fractions separated by size-exclusion chromatography has been used and comparatively interpreted, giving indications of the differences and similarities in the chemical structure of such different materials.
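
The molecular-weight determination by comparison with polystyrene standards can be sketched as a conventional SEC calibration, linear in log10(MW) versus elution volume; the standard volumes, masses, and the query volume below are hypothetical, not the paper's data.

```python
import math

# Hypothetical polystyrene calibration: (elution volume in mL, MW in u)
standards = [(10.0, 500_000), (12.0, 50_000), (14.0, 5_000), (16.0, 500)]

# Fit log10(MW) as a linear function of elution volume (least squares)
xs = [v for v, _ in standards]
ys = [math.log10(mw) for _, mw in standards]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

def apparent_mw(volume):
    """Polystyrene-equivalent molecular weight at a given elution volume."""
    return 10 ** (intercept + slope * volume)

print(round(apparent_mw(15.0)))  # 1581, between the 5000 u and 500 u standards
```

The result is a polystyrene-equivalent mass, which is the important caveat in the abstract's MW figures: for chemically different species such as soot aggregates, the calibration gives relative rather than absolute masses.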

  14. Strategies for the Assessment of Metabolic Profiles of Steroid Hormones in View of Diagnostics and Drug Monitoring: Analytical Problems and Challenges.

    PubMed

    Plenis, Alina; Oledzka, Ilona; Kowalski, Piotr; Baczek, Tomasz

    2016-01-01

    During the last few years there has been growing interest in research focused on the metabolism of steroid hormones, even though the study of metabolic hormone pathways remains a difficult and demanding task because of low steroid concentrations and the complexity of the analysed matrices. Thus, there has been increasing interest in the development of new, more selective and sensitive methods for monitoring these compounds in biological samples. Bibliographic databases of the world research literature were systematically searched using a selected review question and inclusion/exclusion criteria. Next, the highest-quality reports were selected using standard tools (181) and described to evaluate the advantages and limitations of different approaches to measuring the steroids and their metabolites. An overview of the analytical challenges, the development of methods used in the assessment of the metabolic pathways of steroid hormones, and the priorities for future research, with special consideration of liquid chromatography (LC) and capillary electrophoresis (CE) techniques, is presented. Moreover, many LC and CE applications in pharmacological and psychological studies as well as endocrinology and sports medicine, taking into account the recent progress in the area of the metabolic profiling of steroids, have been critically discussed. The latest reports show that LC systems coupled with mass spectrometry hold the predominant position in the research on steroid profiles. Moreover, CE techniques are likely to gain a prominent position in the diagnosis of hormone levels in the near future.

  15. Morphing Compression Garments for Space Medicine and Extravehicular Activity Using Active Materials.

    PubMed

    Holschuh, Bradley T; Newman, Dava J

    2016-02-01

    Compression garments tend to be difficult to don/doff, due to their intentional function of squeezing the wearer. This is especially true for compression garments used for space medicine and for extravehicular activity (EVA). We present an innovative solution to this problem by integrating shape changing materials-NiTi shape memory alloy (SMA) coil actuators formed into modular, 3D-printed cartridges-into compression garments to produce garments capable of constricting on command. A parameterized, 2-spring analytic counterpressure model based on 12 garment and material inputs was developed to inform garment design. A methodology was developed for producing novel SMA cartridge systems to enable active compression garment construction. Five active compression sleeve prototypes were manufactured and tested: each sleeve was placed on a rigid cylindrical object and counterpressure was measured as a function of spatial location and time before, during, and after the application of a step voltage input. Controllable active counterpressures were measured up to 34.3 kPa, exceeding the requirement for EVA life support (29.6 kPa). Prototypes which incorporated fabrics with linear properties closely matched analytic model predictions (4.1%/-10.5% error in passive/active pressure predictions); prototypes using nonlinear fabrics did not match model predictions (errors >100%). Pressure non-uniformities were observed due to friction and the rigid SMA cartridge structure. To our knowledge this is the first demonstration of controllable compression technology incorporating active materials, a novel contribution to the field of compression garment design. This technology could lead to easy-to-don compression garments with widespread space and terrestrial applications.
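
The paper's 2-spring counterpressure model is not reproduced here, but the basic hoop-stress (Laplace-law) relation that band-tension compression models build on can be sketched; the tension, limb radius, and band width values below are hypothetical, chosen only to land near the EVA pressure scale the abstract cites.

```python
def counterpressure(tension_n, radius_m, width_m):
    """Hoop-stress (Laplace-law) estimate of the pressure a circumferential
    band under tension T exerts on a cylindrical limb: p = T / (r * w), in Pa.
    A simplified illustration, not the paper's 2-spring analytic model."""
    return tension_n / (radius_m * width_m)

# A band carrying 40 N of tension, 3 cm wide, on a 4 cm radius limb:
p = counterpressure(40.0, 0.04, 0.03)
print(round(p / 1000, 1), "kPa")  # 33.3 kPa, above the 29.6 kPa EVA requirement
```

In an SMA-actuated garment, heating the coil actuators raises the band tension T, and by this relation the counterpressure rises proportionally, which is why controllable tension translates directly into controllable compression.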

  16. Unusual analyte-matrix adduct ions and mechanism of their formation in MALDI TOF MS of benzene-1,3,5-tricarboxamide and urea compounds.

    PubMed

    Lou, Xianwen; Fransen, Michel; Stals, Patrick J M; Mes, Tristan; Bovee, Ralf; van Dongen, Joost J L; Meijer, E W

    2013-09-01

    Analyte-matrix adducts are normally absent under typical matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS) conditions. Interestingly, though, in the analysis of several types of organic compounds synthesized in our laboratory, analyte-matrix adduct ion peaks were always recorded when common MALDI matrices such as 4-hydroxy-α-cyanocinnamic acid (CHCA) were used. These compounds are mainly those with a benzene-1,3,5-tricarboxamide (BTA) or urea moiety, which are important building blocks for new functional supramolecular materials. The possible mechanism of the adduct formation was investigated. A shared feature of the compounds studied is that they can form intermolecular hydrogen bonds with matrices such as CHCA. This hydrogen bonding strengthens the association between analyte ions and matrix molecules, so the analyte ions and matrix molecules in MALDI clusters become more difficult to separate from each other. Furthermore, analyte ions were found to adduct mainly with matrix salts, probably because the salts are much less volatile than their corresponding matrix acids. It appears that analyte-matrix adduct formation for our compounds is caused by incomplete evaporation of matrix molecules from the MALDI clusters, through the combined effects of enhanced analyte-matrix intermolecular interaction and the low volatility of matrix salts. Based on these findings, strategies to suppress the analyte-matrix adduction are briefly discussed. In turn, the positive results of these strategies support the proposed mechanism of the analyte-matrix adduct formation.

  17. Causal Inference in Educational Effectiveness Research: A Comparison of Three Methods to Investigate Effects of Homework on Student Achievement

    ERIC Educational Resources Information Center

    Gustafsson, Jan-Eric

    2013-01-01

    In educational effectiveness research, it frequently has proven difficult to make credible inferences about cause and effect relations. The article first identifies the main categories of threats to valid causal inference from observational data, and discusses designs and analytic approaches which protect against them. With the use of data from 22…
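    One standard analytic approach that protects against a major threat to causal inference from observational data (time-invariant confounding) is the difference-in-differences estimator. The sketch below is a generic illustration of that estimator, not necessarily one of the methods compared in the article, and the achievement scores are hypothetical.

    ```python
    # Difference-in-differences (DiD): compare the change over time in a
    # treated group with the change in an untreated comparison group, so
    # that time-invariant group differences cancel out.
    # The achievement means below are hypothetical.

    def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
        """DiD effect = change in treated group minus change in control group."""
        return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

    if __name__ == "__main__":
        # Mean achievement before/after a homework policy change (hypothetical):
        effect = diff_in_diff(treat_pre=500.0, treat_post=512.0,
                              ctrl_pre=498.0, ctrl_post=503.0)
        print(effect)  # treated gained 12, controls gained 5 -> effect 7.0
    ```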

  18. Illustrating the Steady-State Condition and the Single-Molecule Kinetic Method with the NMDA Receptor

    ERIC Educational Resources Information Center

    Kosman, Daniel J.

    2009-01-01

    The steady-state is a fundamental aspect of biochemical pathways in cells; indeed, the concept of steady-state is a definition of life itself. In a simple enzyme kinetic scheme, the steady-state condition is easy to define analytically but experimentally often difficult to capture because of its evanescent quality; the initial, constant velocity…
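    The analytic definition the abstract refers to is the standard one for the simple scheme E + S ⇌ ES → E + P: setting d[ES]/dt = 0 yields the Michaelis-Menten steady state. The sketch below illustrates that textbook derivation only; the rate constants and concentrations are hypothetical, not NMDA-receptor values.

    ```python
    # Steady state of the simple enzyme scheme E + S <-> ES -> E + P.
    # Setting d[ES]/dt = 0 gives [ES] = E_total * S / (Km + S), with
    # Km = (k_off + k_cat) / k_on, and velocity v = k_cat * [ES].
    # All rate constants and concentrations below are hypothetical.

    def michaelis_km(k_on, k_off, k_cat):
        """Km in M, from elementary rate constants."""
        return (k_off + k_cat) / k_on

    def steady_state_es(e_total, s, km):
        """Steady-state ES complex concentration (M)."""
        return e_total * s / (km + s)

    def velocity(k_cat, es):
        """Steady-state reaction velocity (M/s)."""
        return k_cat * es

    if __name__ == "__main__":
        km = michaelis_km(k_on=1e6, k_off=50.0, k_cat=10.0)  # 6e-5 M
        es = steady_state_es(e_total=1e-6, s=km, km=km)      # at S = Km
        print(velocity(10.0, es))  # half of Vmax = k_cat * E_total
    ```

    At S = Km the steady-state ES level is exactly half the total enzyme, so the velocity is half of Vmax, which is one way to capture the condition experimentally.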

  19. Using a Data Mining Approach to Develop a Student Engagement-Based Institutional Typology. IR Applications, Volume 18, February 8, 2009

    ERIC Educational Resources Information Center

    Luan, Jing; Zhao, Chun-Mei; Hayek, John C.

    2009-01-01

    Data mining provides both systematic and systemic ways to detect patterns of student engagement among students at hundreds of institutions. Using traditional statistical techniques alone, the task would be significantly difficult, if not impossible, considering the size and complexity of both the data and the analytical approaches necessary for this…
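    Typologies of this kind are commonly built by clustering institutions on engagement measures. The sketch below is a generic one-dimensional k-means illustration in pure Python; it is not the study's actual method or data, and the engagement scores, number of clusters, and starting centers are hypothetical.

    ```python
    # Generic k-means sketch (Lloyd's algorithm on scalars) of the kind of
    # clustering used to group institutions by an engagement score.
    # The scores and the choice of k = 3 are hypothetical.

    def kmeans_1d(points, centers, iters=20):
        """Cluster scalar points; returns the final centers, sorted."""
        for _ in range(iters):
            # Assign each point to its nearest center.
            groups = {i: [] for i in range(len(centers))}
            for p in points:
                i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
                groups[i].append(p)
            # Move each center to the mean of its assigned points.
            centers = [sum(g) / len(g) if g else centers[i]
                       for i, g in sorted(groups.items())]
        return sorted(centers)

    if __name__ == "__main__":
        engagement = [12, 14, 15, 40, 42, 43, 78, 80, 81]  # hypothetical scores
        print(kmeans_1d(engagement, centers=[10.0, 50.0, 90.0]))
    ```

    With these toy scores the algorithm recovers three institutional groups centered near the low, middle, and high bands.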

  20. Experimental research on mathematical modelling and unconventional control of clinker kiln in cement plants

    NASA Astrophysics Data System (ADS)

    Rusu-Anghel, S.

    2017-01-01

    Analytical modeling of the cement manufacturing process is difficult because of its complexity and has not yielded sufficiently precise mathematical models. In this paper, a fuzzy system for automatic control of the clinkering process was designed, based on a statistical model of the process and on the knowledge of human experts.
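    The general shape of such a controller can be sketched with triangular membership functions and weighted-average (Sugeno-style) defuzzification. The rule base, membership ranges, and fuel-rate outputs below are hypothetical illustrations, not the paper's actual design.

    ```python
    # Minimal fuzzy-controller sketch (Sugeno-style, singleton outputs) of the
    # general kind used for kiln control: a temperature error is fuzzified
    # with triangular memberships, and the fuel-rate command is the
    # membership-weighted average of the rule outputs. All values hypothetical.

    def tri(x, a, b, c):
        """Triangular membership: 0 at a and c, peak 1 at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuel_rate(temp_error):
        """Weighted-average defuzzification over three hypothetical rules."""
        rules = [
            (tri(temp_error, -100, -50, 0), 1.2),  # too cold -> more fuel
            (tri(temp_error, -50, 0, 50),   1.0),  # on target -> hold
            (tri(temp_error, 0, 50, 100),   0.8),  # too hot  -> less fuel
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules)
        return num / den if den else 1.0

    if __name__ == "__main__":
        print(fuel_rate(-25.0))  # halfway between "too cold" and "on target"
    ```

    At an error of -25 the "too cold" and "on target" rules each fire at 0.5, so the command blends their outputs to 1.1; fuzzy control gives this kind of smooth interpolation between expert rules without an analytic plant model.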
