Sample records for accurate analytic description

  1. An accurate analytic description of neutrino oscillations in matter

    NASA Astrophysics Data System (ADS)

    Akhmedov, E. Kh.; Niro, Viviana

    2008-12-01

    A simple closed-form analytic expression for the probability of two-flavour neutrino oscillations in matter with an arbitrary density profile is derived. Our formula is based on a perturbative expansion and allows easy calculation of higher-order corrections. The expansion parameter is small when the density changes relatively slowly along the neutrino path and/or the neutrino energy is not very close to the Mikheyev-Smirnov-Wolfenstein (MSW) resonance energy. Our approximation is not equivalent to the adiabatic approximation and actually goes beyond it. We demonstrate the validity of our results using a few model density profiles, including the PREM density profile of the Earth. It is shown that by combining the results obtained from the expansions valid below and above the MSW resonance one can obtain a very good description of neutrino oscillations in matter in the entire energy range, including the resonance region.
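
    The paper's perturbative formula is not reproduced in this record; as a reference point only, the textbook two-flavour oscillation probability in matter of constant density (standard MSW effective mixing and mass splitting) can be sketched as below, with all parameter values purely illustrative.

        import numpy as np

        def p_appearance_constant_density(E_GeV, L_km, dm2_eV2=2.5e-3,
                                          sin2_2theta=0.09, rho_gcc=2.7, Ye=0.5):
            # Standard constant-density two-flavour result (not the paper's
            # perturbative expansion).  Matter term A = 2*sqrt(2)*G_F*n_e*E,
            # roughly 1.52e-4 eV^2 per (g/cm^3 of Ye*rho) per GeV of energy.
            A = 1.52e-4 * Ye * rho_gcc * E_GeV
            cos2t = np.sqrt(1.0 - sin2_2theta)
            denom = sin2_2theta + (cos2t - A / dm2_eV2) ** 2
            sin2_2theta_m = sin2_2theta / denom        # effective mixing in matter
            dm2_m = dm2_eV2 * np.sqrt(denom)           # effective mass splitting
            phase = 1.267 * dm2_m * L_km / E_GeV       # oscillation phase
            return sin2_2theta_m * np.sin(phase) ** 2

        # Example: 1 GeV neutrino over a 1000 km baseline through crust-like rock.
        print(p_appearance_constant_density(1.0, 1000.0))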

  2. High precision analytical description of the allowed β spectrum shape

    NASA Astrophysics Data System (ADS)

    Hayen, Leendert; Severijns, Nathal; Bodek, Kazimierz; Rozpedzik, Dagmara; Mougeot, Xavier

    2018-01-01

    A fully analytical description of the allowed β spectrum shape is given in view of ongoing and planned measurements. Its study forms an invaluable tool in the search for physics beyond the standard electroweak model and in the determination of the weak magnetism recoil term. Contributions stemming from finite-size corrections, mass effects, and radiative corrections are reviewed. Particular focus is placed on atomic and chemical effects, where the existing description is extended and provided analytically. The effects of QCD-induced recoil terms are discussed, and cross-checks were performed for different theoretical formalisms. Special attention was given to a comparison of the treatment of nuclear structure effects in different formalisms. Corrections were derived for both Fermi and Gamow-Teller transitions, and methods of analytical evaluation are thoroughly discussed. In its integrated form, the calculated f values were in agreement with the most precise numerical results within the aimed-for precision. The need for an accurate evaluation of weak magnetism contributions is stressed, and the possible significance of the oft-neglected induced pseudoscalar interaction is noted. Together with improved atomic corrections, an analytical description is presented of the allowed β spectrum shape accurate to a few parts in 10^-4 down to 1 keV for low- to medium-Z nuclei, thereby extending the work of previous authors by nearly an order of magnitude.

  3. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature), written between 1349 and 1350.

  4. Accurate mass measurements and their appropriate use for reliable analyte identification.

    PubMed

    Godfrey, A Ruth; Brenton, A Gareth

    2012-09-01

    Accurate mass instrumentation is becoming increasingly available to non-expert users. Such data can be misused, particularly for analyte identification. Current best practice in assigning potential elemental formulae for reliable analyte identification is described, together with modern informatic approaches to analyte elucidation, including chemometric characterisation, data processing and searching using facilities such as the Chemical Abstracts Service (CAS) Registry and ChemSpider.

  5. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power series approach, accurate descriptions of the first higher-order (LP11) mode of graded-index fibers having three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, together with their approximate analytic formulations. We show that while approximations using two and three Chebyshev points give fairly accurate results, calculations involving four Chebyshev points match the available exact numerical results excellently.

  6. Accurate analytical modeling of junctionless DG-MOSFET by Green's function approach

    NASA Astrophysics Data System (ADS)

    Nandi, Ashutosh; Pandey, Nilesh

    2017-11-01

    An accurate analytical model of the junctionless double-gate MOSFET (JL-DG-MOSFET) in the subthreshold regime of operation is developed in this work using a Green's function approach. The approach considers 2-D mixed boundary conditions and multi-zone techniques to provide an exact analytical solution to the 2-D Poisson's equation. The Fourier coefficients are calculated correctly to derive the potential equations that are further used to model the channel current and subthreshold slope of the device. The threshold voltage roll-off is computed from parallel shifts of the Ids-Vgs curves between long-channel and short-channel devices. It is observed that the Green's function approach of solving the 2-D Poisson's equation in both the oxide and silicon regions can accurately predict the channel potential, subthreshold current (Isub), threshold voltage (Vt) roll-off and subthreshold slope (SS) of both long- and short-channel devices designed with different doping concentrations and with higher as well as lower tsi/tox ratios. All the analytical model results are verified through comparisons with TCAD Sentaurus simulation results, and the model matches the TCAD device simulations quite well.

  7. Petermann I and II spot size: Accurate semi analytical description involving Nelder-Mead method of nonlinear unconstrained optimization and three parameter fundamental modal field

    NASA Astrophysics Data System (ADS)

    Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal

    2013-01-01

    A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that does not require any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, the Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It is demonstrated that the results of the proposed solution match the numerical results almost identically over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
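
    The record names the Nelder-Mead simplex search as the optimizer for the core parameter U; a minimal sketch of that step using scipy.optimize.minimize is given below, where the three-parameter objective is only a smooth stand-in for the paper's variational expression (which is not reproduced here).

        import numpy as np
        from scipy.optimize import minimize

        def objective(params):
            # Stand-in for the paper's variational functional of the trial
            # modal field; just a smooth toy function of the three parameters.
            U, a, b = params
            return (U - 2.0) ** 2 + 0.5 * (a - 1.0) ** 2 + 0.1 * b ** 2 + 0.1 * np.cos(b)

        result = minimize(objective, x0=[1.0, 0.5, 0.5], method="Nelder-Mead",
                          options={"xatol": 1e-8, "fatol": 1e-8})
        print(result.x, result.fun)   # optimized (U, a, b) and objective value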

  8. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form and then solved in two steps, a predictor step and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates a second-order Taylor expansion in a natural way and has a significantly faster convergence rate.

  9. Quo vadis, analytical chemistry?

    PubMed

    Valcárcel, Miguel

    2016-01-01

    This paper presents an open, personal, fresh approach to the future of Analytical Chemistry in the context of the deep changes Science and Technology are anticipated to experience. Its main aim is to challenge young analytical chemists, because the future of our scientific discipline is in their hands. A description of not entirely accurate overall conceptions of our discipline, both past and present, that should be avoided is followed by a flexible, integral definition of Analytical Chemistry and its cornerstones (viz., aims and objectives, quality trade-offs, the third basic analytical reference, the information hierarchy, social responsibility, independent research, transfer of knowledge and technology, interfaces to other scientific-technical disciplines, and well-oriented education). Obsolete paradigms, as well as the more accurate general and specific paradigms that can be expected to provide the framework for our discipline in the coming years, are described. Finally, the three possible responses of analytical chemists to the proposed changes in our discipline are discussed.

  10. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    PubMed

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. Calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, accurate analytic determination of the constant based on thin plate theory has been perceived as an extremely difficult problem. In this paper, we implement thin plate theory-based analytic modeling of the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimensions, and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as a benchmark for accurate calibration of rectangular AFM cantilevers.
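
    The paper's plate-theory calibration is not reproduced in this record; for orientation only, the familiar Euler-Bernoulli beam-theory estimate k = E*w*t^3/(4*L^3) for a rectangular cantilever loaded at its free end, which ignores the three-dimensional and Poisson effects highlighted above, is sketched below with illustrative dimensions.

        def beam_theory_spring_constant(E, width, thickness, length):
            # Classical beam-theory baseline k = E*w*t^3 / (4*L^3), SI units;
            # this is NOT the paper's plate-theory model.
            return E * width * thickness ** 3 / (4.0 * length ** 3)

        # Example: silicon-like cantilever, E ~ 169 GPa, 30 um wide, 2 um thick, 200 um long.
        print(beam_theory_spring_constant(169e9, 30e-6, 2e-6, 200e-6), "N/m")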

  11. Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van de Geijn, J.; Fraass, B.A.

    The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.

  12. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    intervals, or a confusion matrix. Conclusions: IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  13. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    . IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  14. The net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR.

    PubMed

    van de Geijn, J; Fraass, B A

    1984-01-01

    The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
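
    The seven-parameter generating function itself is not given in this record. As a minimal sketch of the definition only, one plausible reading of "FDD corrected for inverse square law" (assuming the FDD is normalized at the depth of maximum dose d_max for a fixed SSD) is:

        def net_fractional_depth_dose(fdd, depth_cm, ssd_cm, dmax_cm=1.5):
            # Remove the inverse-square component from a measured FDD.
            # One plausible reading of the abstract's definition, not the
            # authors' seven-parameter generating function.
            inverse_square = ((ssd_cm + dmax_cm) / (ssd_cm + depth_cm)) ** 2
            return fdd / inverse_square

        # Example: FDD of 0.67 at 10 cm depth, SSD 100 cm, d_max 1.5 cm (illustrative).
        print(net_fractional_depth_dose(0.67, 10.0, 100.0))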

  15. Development and application of accurate analytical models for single active electron potentials

    NASA Astrophysics Data System (ADS)

    Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas

    2015-05-01

    The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).

  16. Highly accurate analytic formulae for projectile motion subjected to quadratic drag

    NASA Astrophysics Data System (ADS)

    Turkyilmazoglu, Mustafa

    2016-05-01

    The classical phenomenon of the motion of a projectile fired (thrown) toward the horizon through resistive air exerting a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. Finding elegant analytical approximations for the most interesting engineering features of the dynamical behavior of the projectile is the principal target. Within this purpose, analytical explicit expressions are derived that accurately predict the maximum height, its arrival time, as well as the flight range of the projectile at the highest ascent. The most significant property of the proposed formulas is that they are not restricted to particular values of the initial speed and firing angle of the object, nor of the drag coefficient of the medium. In combination with the available approximations in the literature, it is possible to gain information about the flight and complete the picture of a trajectory with high precision, without having to numerically simulate the full governing equations of motion.
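
    The paper's closed-form approximations are not reproduced in this record; a numerical reference against which such formulas can be checked is easy to set up by integrating the governing equations with a quadratic drag term (parameter values illustrative).

        import numpy as np
        from scipy.integrate import solve_ivp

        g, k = 9.81, 0.02   # gravity (m/s^2) and quadratic drag per unit mass (1/m)

        def rhs(t, y):
            # y = [x, z, vx, vz]; drag acceleration = -k * |v| * v
            x, z, vx, vz = y
            speed = np.hypot(vx, vz)
            return [vx, vz, -k * speed * vx, -g - k * speed * vz]

        v0, angle = 50.0, np.radians(40.0)
        y0 = [0.0, 0.0, v0 * np.cos(angle), v0 * np.sin(angle)]
        sol = solve_ivp(rhs, [0.0, 20.0], y0, max_step=0.01)

        i_apex = np.argmax(sol.y[1])
        print("apex height ~", sol.y[1][i_apex], "m at t ~", sol.t[i_apex], "s")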

  17. 40 CFR 91.414 - Raw gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Raw gaseous exhaust sampling and... Gaseous Exhaust Test Procedures § 91.414 Raw gaseous exhaust sampling and analytical system description... the component systems. (g) The following requirements must be incorporated in each system used for raw...

  18. 40 CFR 91.414 - Raw gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Raw gaseous exhaust sampling and... Gaseous Exhaust Test Procedures § 91.414 Raw gaseous exhaust sampling and analytical system description... the component systems. (g) The following requirements must be incorporated in each system used for raw...

  19. 40 CFR 90.421 - Dilute gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... filter and HFID. Determine these gas temperatures by a temperature sensor located immediately upstream of... analytical system description. (a) General. The exhaust gas sampling system described in this section is...-CVS must conform to all of the requirements listed for the exhaust gas PDP-CVS in § 90.420 of this...

  20. 40 CFR 90.421 - Dilute gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... filter and HFID. Determine these gas temperatures by a temperature sensor located immediately upstream of... analytical system description. (a) General. The exhaust gas sampling system described in this section is...-CVS must conform to all of the requirements listed for the exhaust gas PDP-CVS in § 90.420 of this...

  1. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating the parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. The approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation, since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive the feasibility problem and the lower bounding function. Our method is developed for the Cook-Torrance model with several normal distribution functions, such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than L2-norm minimization.
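
    The branch-and-bound and interval-analysis machinery is not sketched here, but the core L1-norm fitting step can be written as a linear program when the model is linear (or locally linearized) in its parameters: minimize the sum of slack variables t_i subject to -t_i <= A_i.p - b_i <= t_i. A toy sketch with scipy.optimize.linprog follows; the Cook-Torrance model itself is nonlinear and is not implemented here.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        A = rng.normal(size=(50, 3))                         # toy linearized design matrix
        p_true = np.array([0.8, 0.1, 0.4])
        b = A @ p_true + rng.laplace(scale=0.01, size=50)    # toy observations

        n, m = A.shape
        # Variables: [p (m values), t (n slacks)]; minimize sum(t)
        # subject to  A p - b <= t  and  -(A p - b) <= t.
        c = np.concatenate([np.zeros(m), np.ones(n)])
        A_ub = np.block([[ A, -np.eye(n)],
                         [-A, -np.eye(n)]])
        b_ub = np.concatenate([b, -b])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + n))
        print("L1 estimate:", res.x[:m])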

  2. Analytical Description of the H/D Exchange Kinetic of Macromolecule.

    PubMed

    Kostyukevich, Yury; Kononikhin, Alexey; Popov, Igor; Nikolaev, Eugene

    2018-04-17

    We present the accurate analytical solution obtained for the system of rate equations describing the isotope exchange process for molecules containing an arbitrary number of equivalent labile atoms. The exact solution was obtained using Mathematica 7.0 software, and this solution has the form of a time-dependent Gaussian distribution. For the case when the forward exchange considerably overlaps the back exchange, it is possible to estimate the activation energy of the reaction by obtaining the temperature dependence of the reaction degree. Using a previously developed approach for performing H/D exchange directly in the ESI source, we have estimated the activation energies for ions with different functional groups, and they were found to be in the range 0.04-0.3 eV. Since the value of the activation energy depends on the type of functional group, the developed approach can have potential analytical applications for determining the types of functional groups in complex mixtures, such as petroleum, humic substances, bio-oil, and so on.

  3. Modern analytical chemistry in the contemporary world

    NASA Astrophysics Data System (ADS)

    Šíma, Jan

    2016-12-01

    Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. However, this view is evidently improper and misleading. Therefore, the position of modern analytical chemistry among the sciences and in the contemporary world is discussed. Its interdisciplinary character and the necessity of collaboration between analytical chemists and other experts in order to effectively solve the actual problems of human society and the environment are emphasized. The importance of analytical method validation in order to obtain accurate and precise results is highlighted. Invalid results are not only useless; they can often be even fatal (e.g., in clinical laboratories). The curriculum of analytical chemistry at schools and universities is discussed; it should be much broader than traditional equilibrium chemistry coupled with a simple description of individual analytical methods. In fact, the schooling of analytical chemistry should closely connect theory and practice.

  4. Compact, accurate description of diagnostic neutral beam propagation and attenuation in a high temperature plasma for charge exchange recombination spectroscopy analysis.

    PubMed

    Bespamyatnov, Igor O; Rowan, William L; Granetz, Robert S

    2008-10-01

    Charge exchange recombination spectroscopy on Alcator C-Mod relies on the use of the diagnostic neutral beam injector as a source of neutral particles which penetrate deep into the plasma. It employs the emission resulting from the interaction of the beam atoms with fully ionized impurity ions. To interpret the emission from a given point in the plasma as the density of emitting impurity ions, the density of beam atoms must be known. Here, an analysis of beam propagation is described which yields the beam density profile throughout the beam trajectory from the neutral beam injector to the core of the plasma. The analysis includes the effects of beam formation, attenuation in the neutral gas surrounding the plasma, and attenuation in the plasma. In the course of this work, a numerical simulation and an analytical approximation for beam divergence are developed. The description is made sufficiently compact to yield accurate results in a time consistent with between-shot analysis.

  5. A Descriptive-Analytic Study of the Practice Field Behavior of a Winning Female Coach.

    ERIC Educational Resources Information Center

    Dodds, Patt; Rife, Frank

    A winning collegiate field hockey coach was observed across seventeen practice sessions through one complete competitive season. A category system for the event recording of verbal and nonverbal behaviors delivered to the team and to the sixteen individual players produced descriptive-analytic information about relative behavior frequencies for…

  6. Accurate Energies and Orbital Description in Semi-Local Kohn-Sham DFT

    NASA Astrophysics Data System (ADS)

    Lindmaa, Alexander; Kuemmel, Stephan; Armiento, Rickard

    2015-03-01

    We present our progress on a scheme in semi-local Kohn-Sham density-functional theory (KS-DFT) for improving the orbital description while still retaining the level of accuracy of the usual semi-local exchange-correlation (xc) functionals. DFT is a widely used tool for first-principles calculations of the properties of materials. A given task normally requires a balance of accuracy and computational cost, which is well achieved with semi-local DFT. However, commonly used semi-local xc functionals have important shortcomings, which can often be attributed to features of the corresponding xc potential. One shortcoming is an overly delocalized representation of localized orbitals. Recently, a semi-local GGA-type xc functional was constructed to address these issues; however, it has the trade-off of lower accuracy of the total energy. We discuss the source of this error in terms of a surplus energy contribution in the functional that needs to be accounted for, and offer a remedy that formally stays within KS-DFT and does not significantly increase the computational effort. The end result is a scheme that combines accurate total energies (e.g., relaxed geometries) with an improved orbital description (e.g., improved band structure).

  7. Analytic descriptions of cylindrical electromagnetic waves in a nonlinear medium

    PubMed Central

    Xiong, Hao; Si, Liu-Gang; Yang, Xiaoxue; Wu, Ying

    2015-01-01

    A simple but highly efficient approach for dealing with the problem of cylindrical electromagnetic wave propagation in a nonlinear medium is proposed, based on an exact solution reported recently. We derive an explicit analytical formula, which exhibits rich and interesting nonlinear effects, to describe the propagation of any number of cylindrical electromagnetic waves in a nonlinear medium. The results obtained using the present method are accurately concordant with those obtained using traditional coupled-wave equations. As an example of application, we discuss how a third wave affects the sum- and difference-frequency generation of two waves propagating in the nonlinear medium. PMID:26073066

  8. Molecular acidity: An accurate description with information-theoretic approach in density functional reactivity theory.

    PubMed

    Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin

    2018-01-15

    Molecular acidity is one of the important physiochemical properties of a molecular system, yet its accurate calculation and prediction remain an unresolved problem in the literature. In this work, we propose to make use of the quantities from the information-theoretic (IT) approach in density functional reactivity theory and provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and applicable to other systems as well. © 2017 Wiley Periodicals, Inc.
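
    None of the paper's chemical systems are reproduced here; as a minimal illustration of one of the quantities named above, the Shannon entropy S = -∫ ρ(r) ln ρ(r) d^3r can be evaluated on a radial grid for a hydrogen-like 1s density, used purely as a stand-in for a molecular density exported from a DFT code.

        import numpy as np

        # Hydrogen-like 1s density rho(r) = (Z^3/pi) exp(-2 Z r), a stand-in
        # for a molecular electron density from a DFT calculation.
        Z = 1.0
        r = np.linspace(1e-6, 30.0, 200001)
        rho = (Z ** 3 / np.pi) * np.exp(-2.0 * Z * r)

        # S = -int rho ln(rho) d^3r, with the radial measure 4*pi*r^2 dr.
        integrand = -rho * np.log(rho) * 4.0 * np.pi * r ** 2
        S = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))  # trapezoid rule
        print("Shannon entropy:", S)   # analytic value for Z = 1 is 3 + ln(pi) ~ 4.14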

  9. New analytical model for the ozone electronic ground state potential surface and accurate ab initio vibrational predictions at high energy range.

    PubMed

    Tyuterev, Vladimir G; Kochanov, Roman V; Tashkun, Sergey A; Holka, Filip; Szalay, Péter G

    2013-10-07

    An accurate description of the complicated shape of the potential energy surface (PES) and that of the highly excited vibration states is of crucial importance for various unsolved issues in the spectroscopy and dynamics of ozone and remains a challenge for the theory. In this work a new analytical representation is proposed for the PES of the ground electronic state of the ozone molecule in the range covering the main potential well and the transition state towards the dissociation. This model accounts for particular features specific to the ozone PES for large variations of nuclear displacements along the minimum energy path. The impact of the shape of the PES near the transition state (existence of the "reef structure") on vibration energy levels was studied for the first time. The major purpose of this work was to provide accurate theoretical predictions for ozone vibrational band centres at the energy range near the dissociation threshold, which would be helpful for understanding the very complicated high-resolution spectra and its analyses currently in progress. Extended ab initio electronic structure calculations were carried out enabling the determination of the parameters of a minimum energy path PES model resulting in a new set of theoretical vibrational levels of ozone. A comparison with recent high-resolution spectroscopic data on the vibrational levels gives root-mean-square deviations below 1 cm^-1 for ozone band centres up to 90% of the dissociation energy. New ab initio vibrational predictions represent a significant improvement with respect to all previously available calculations.

  10. Developing an Emergency Physician Productivity Index Using Descriptive Health Analytics.

    PubMed

    Khalifa, Mohamed

    2015-01-01

    Emergency department (ED) crowding has become a major barrier to receiving timely emergency care. At King Faisal Specialist Hospital and Research Center, Saudi Arabia, we identified variables and factors affecting crowding and performance in order to develop indicators to help evaluation and improvement. To measure the efficiency of work and the activity of throughput processes, it was important to develop an ED physician productivity index. Data on all ED patient encounters over the last six months of 2014 were retrieved, and descriptive health analytics methods were used. Three variables were identified for their influence on productivity and performance: Number of Treated Patients per Physician, Patient Acuity Level, and Treatment Time. The study suggested a formula to calculate the productivity index of each physician by dividing the Number of Treated Patients by the square of the Patient Acuity Level and by the Treatment Time, in order to identify physicians with a low productivity index and investigate causes and factors.
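
    The abstract states the index directly: treated patients divided by the square of the acuity level and by the treatment time. A literal sketch of that calculation follows; variable names and units are illustrative, not taken from the paper.

        def physician_productivity_index(n_treated, mean_acuity_level, treatment_time_hours):
            # Productivity index as described in the abstract:
            # treated patients / (acuity level squared * treatment time).
            return n_treated / (mean_acuity_level ** 2 * treatment_time_hours)

        # Example: 120 patients at mean acuity level 3 over 160 clinical hours.
        print(physician_productivity_index(120, 3, 160.0))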

  11. Descriptive and analytical epidemiology of nasopharyngeal cancer.

    PubMed

    Hirayama, T

    1978-01-01

    Information concerning the descriptive and analytical epidemiology of NPC that has been reported mainly since the first international symposium on the subject in Singapore in 1964 is reviewed. NPC is rare in most countries in the world, with an age-adjusted incidence rate of less than 1 per 100,000, and the incidence rate is twice as high in males as in females. Chinese of southern origin have a uniquely high risk, the incidence rates per 100,000 being 10-20 in males and 5-10 in females. The greater the admixture of southern Chinese blood in a given ethnic group, the more likely it is that the NPC incidence rate in that group will be raised. The incidence in both sexes begins to rise after the ages of 20-24 and reaches a plateau between 45 and 54. When the logarithm of mortality and morbidity is plotted against the logarithm of age, the power of age that provides the best fit to a straight line on a log-log graph is approximately two to four. These figures are lower than for other cancers. Seroepidemiological case-control studies indicate that both a different birthplace and an abnormal response to EBV antigen significantly enhance the risk for NPC; when these two factors are combined, the relative risk appears to rise further. The effect of other environmental chemicals, such as from cigarette smoking, shown to be significant in several retrospective studies, could explain in part epidemiological phenomena such as the sex difference in incidence. The definitive reason for the uniquely high risk in southern Chinese should be further investigated by taking into account the interactions of host factors (birthplace, HLA, etc.) and environmental factors (EBV, chemical carcinogens including nitrosamines, excessive intake of salted fish, nutritional deficiencies, etc.).

  12. Accurate quantification of PGE2 in the polyposis in rat colon (Pirc) model by surrogate analyte-based UPLC-MS/MS.

    PubMed

    Yun, Changhong; Dashwood, Wan-Mohaiza; Kwong, Lawrence N; Gao, Song; Yin, Taijun; Ling, Qinglan; Singh, Rashim; Dashwood, Roderick H; Hu, Ming

    2018-01-30

    An accurate and reliable UPLC-MS/MS method is reported for the quantification of endogenous prostaglandin E2 (PGE2) in rat colonic mucosa and polyps. This method adopted the "surrogate analyte plus authentic bio-matrix" approach, using two different stable isotope-labeled analogs: PGE2-d9 as the surrogate analyte and PGE2-d4 as the internal standard. A quantitative standard curve was constructed with the surrogate analyte in colonic mucosa homogenate, and the method was successfully validated with the authentic bio-matrix. Concentrations of endogenous PGE2 in both normal and inflammatory tissue homogenates were back-calculated based on the regression equation. Because there is no endogenous interference with the surrogate analyte determination, the specificity is particularly good. By using the authentic bio-matrix for validation, the matrix effect and extraction recovery are identical for the quantitative standard curve and the actual samples, which notably increases the assay accuracy. The method is easy, fast, robust and reliable for colon PGE2 determination. This "surrogate analyte" approach was applied to measure PGE2, one of the strong biomarkers of colorectal cancer, in the mucosa and polyps of the Pirc model (an Apc-mutant rat kindred that models human FAP). A similar concept could be applied to endogenous biomarkers in other tissues. Copyright © 2017 Elsevier B.V. All rights reserved.
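
    A minimal sketch of the back-calculation step described above: fit a line to the surrogate-analyte (PGE2-d9) calibration points, expressed as peak-area ratios to the internal standard (PGE2-d4), then invert it for an endogenous PGE2 response. All numbers are illustrative, not the paper's data.

        import numpy as np

        # Surrogate-analyte calibration in blank-matrix homogenate:
        # spiked PGE2-d9 concentration (ng/mL) vs. area ratio to PGE2-d4 (IS).
        conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])              # illustrative
        area_ratio = np.array([0.051, 0.098, 0.21, 0.49, 1.02, 1.98])  # illustrative

        slope, intercept = np.polyfit(conc, area_ratio, 1)   # linear regression

        def back_calculate(sample_area_ratio):
            # Endogenous PGE2 concentration from a sample's analyte/IS area ratio.
            return (sample_area_ratio - intercept) / slope

        print(back_calculate(0.75), "ng/mL")   # illustrative sample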

  13. The X3LYP extended density functional for accurate descriptions of nonbond interactions, spin states, and thermochemical properties

    PubMed Central

    Xu, Xin; Goddard, William A.

    2004-01-01

    We derive the form for an exact exchange energy density for a density decaying with Gaussian-like behavior at long range. Based on this, we develop the X3LYP (extended hybrid functional combined with Lee–Yang–Parr correlation functional) extended functional for density functional theory to significantly improve the accuracy for hydrogen-bonded and van der Waals complexes while also improving the accuracy in heats of formation, ionization potentials, electron affinities, and total atomic energies [over the most popular and accurate method, B3LYP (Becke three-parameter hybrid functional combined with Lee–Yang–Parr correlation functional)]. X3LYP also leads to a good description of dipole moments, polarizabilities, and accurate excitation energies from s to d orbitals for transition metal atoms and ions. We suggest that X3LYP will be useful for predicting ligand binding in proteins and DNA. PMID:14981235

  14. Moving from Descriptive to Causal Analytics: Case Study of the Health Indicators Warehouse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, Jack C.; Shankar, Mallikarjun; Xu, Songhua

    The KDD community has described a multitude of methods for knowledge discovery on large datasets. We consider some of these methods and integrate them into an analyst's workflow that proceeds from the data-centric descriptive level to the model-centric causal level. Examples of the workflow are shown for the Health Indicators Warehouse, which is a public database for community health information that is a potent resource for conducting data science on a medium scale. We demonstrate the potential of HIW as a source of serious visual analytics efforts by showing correlation matrix visualizations, multivariate outlier analysis, multiple linear regression of Medicare costs, and scatterplot matrices for a broad set of health indicators. We conclude by sketching the first steps toward a causal dependence hypothesis.

  15. A knowledge-based potential with an accurate description of local interactions improves discrimination between native and near-native protein conformations.

    PubMed

    Ferrada, Evandro; Vergara, Ismael A; Melo, Francisco

    2007-01-01

    The correct discrimination between native and near-native protein conformations is essential for achieving accurate computer-based protein structure prediction. However, this has proven to be a difficult task, since currently available physical energy functions, empirical potentials and statistical scoring functions are still limited in achieving this goal consistently. In this work, we assess and compare the ability of different full atom knowledge-based potentials to discriminate between native protein structures and near-native protein conformations generated by comparative modeling. Using a benchmark of 152 near-native protein models and their corresponding native structures that encompass several different folds, we demonstrate that the incorporation of close non-bonded pairwise atom terms improves the discriminating power of the empirical potentials. Since the direct and unbiased derivation of close non-bonded terms from current experimental data is not possible, we obtained and used those terms from the corresponding pseudo-energy functions of a non-local knowledge-based potential. It is shown that this methodology significantly improves the discrimination between native and near-native protein conformations, suggesting that a proper description of close non-bonded terms is important to achieve a more complete and accurate description of native protein conformations. Some external knowledge-based energy functions that are widely used in model assessment performed poorly, indicating that the benchmark of models and the specific discrimination task tested in this work constitutes a difficult challenge.

  16. Analytical method for the accurate determination of tricothecenes in grains using LC-MS/MS: a comparison between MRM transition and MS3 quantitation.

    PubMed

    Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn

    2012-07-01

    The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze tricothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS3 quantitation strategies in tandem. The tricothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS3 (linear ion trap) modes. Three experiments were performed for each quantitation mode and matrix in batches over 6 days for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg^-1 (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three spiking concentration levels of 40, 80, and 120 μg kg^-1. A quantitation limit of 2-6 μg kg^-1 was achieved by applying the MRM transition quantitation strategy; under MS3 mode, a quantitation limit of 4-10 μg kg^-1 was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS3 quantitation, respectively. The successful utilization of MS3 enabled accurate analyte fragmentation pattern matching and quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are

  17. New hybrid voxelized/analytical primitive in Monte Carlo simulations for medical applications

    NASA Astrophysics Data System (ADS)

    Bert, Julien; Lemaréchal, Yannick; Visvikis, Dimitris

    2016-05-01

    Monte Carlo simulations (MCS) applied in particle physics play a key role in medical imaging and particle therapy. In such simulations, particles are transported through voxelized phantoms derived predominantly from patient CT images. However, such a voxelized object representation limits the incorporation of fine elements, such as artificial implants from CAD modeling or anatomical and functional details extracted from other imaging modalities. In this work we propose a new hYbrid Voxelized/ANalytical primitive (YVAN) that combines both voxelized and analytical object descriptions within the same MCS, without the need to simultaneously run two parallel simulations, which is the current gold-standard methodology. Given that YVAN is simply a new primitive object, it does not require any modification of the underlying MC navigation code. The new proposed primitive was assessed through a first simple MCS. Results from the YVAN primitive were compared against an MCS using a purely analytical geometry and the layered mass geometry concept. A perfect agreement was found between these simulations, leading to the conclusion that the new hybrid primitive is able to accurately and efficiently handle phantoms defined by a mixture of voxelized and analytical objects. In addition, two application-based evaluation studies in coronary angiography and intra-operative radiotherapy showed that the use of YVAN was 6.5% and 12.2% faster than the layered mass geometry method, respectively, without any associated loss of accuracy. However, the simplification advantages and computational time improvements obtained with YVAN depend on the relative proportion of analytical and voxelized structures used in the simulation, as well as on the size and number of triangles used in the description of the analytical object meshes.

  18. An Accurate Analytic Approximation for Light Scattering by Non-absorbing Spherical Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Lewis, E. R.

    2017-12-01

    The scattering of light by particles in the atmosphere is a ubiquitous and important phenomenon, with applications to numerous fields of science and technology. The problem of scattering of electromagnetic radiation by a uniform spherical particle can be solved by the method of Mie and Debye as a series of terms depending on the size parameter, x=2πr/λ, and the complex index of refraction, m. However, this solution does not provide insight into the dependence of the scattering on the radius of the particle, the wavelength, or the index of refraction, or how the scattering varies with relative humidity. Van de Hulst demonstrated that the scattering efficiency (the scattering cross section divided by the geometric cross section) of a non-absorbing sphere, over a wide range of particle sizes of atmospheric importance, depends not on x and m separately, but on the quantity 2x(m-1); this is the basis for the anomalous diffraction approximation. Here an analytic approximation for the scattering efficiency of a non-absorbing spherical particle is presented in terms of this new quantity that is accurate over a wide range of particle sizes of atmospheric importance and which readily displays the dependences of the scattering efficiency on particle radius, index of refraction, and wavelength. For an aerosol for which the particle size distribution is parameterized as a gamma function, this approximation also yields analytical results for the scattering coefficient and for the Ångström exponent, with the dependences of scattering properties on wavelength and index of refraction clearly displayed. This approximation provides insight into the dependence of light scattering properties on factors such as relative humidity, readily enables conversion of scattering from one index of refraction to another, and demonstrates the conditions under which the aerosol index (the product of the aerosol optical depth and the Ångström exponent) is a useful proxy for the number of cloud
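
    The author's new approximation is not reproduced in this record, but the classical van de Hulst anomalous-diffraction efficiency it builds on, Q ≈ 2 - (4/ρ) sin ρ + (4/ρ^2)(1 - cos ρ) with ρ = 2x(m - 1), is easy to sketch:

        import numpy as np

        def q_anomalous_diffraction(radius_um, wavelength_um, m_real):
            # Classical van de Hulst result for a non-absorbing sphere
            # (the starting point named in the abstract, not the new approximation).
            x = 2.0 * np.pi * radius_um / wavelength_um      # size parameter
            rho = 2.0 * x * (m_real - 1.0)                   # phase-shift parameter
            return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho ** 2) * (1.0 - np.cos(rho))

        # Example: 0.5 um radius, 0.55 um wavelength, refractive index 1.33 (water-like).
        print(q_anomalous_diffraction(0.5, 0.55, 1.33))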

  19. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    NASA Astrophysics Data System (ADS)

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model that addresses the flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, owing to their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally low cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  20. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  1. From The Cover: The X3LYP extended density functional for accurate descriptions of nonbond interactions, spin states, and thermochemical properties.

    PubMed

    Xu, Xin; Goddard, William A

    2004-03-02

    We derive the form for an exact exchange energy density for a density decaying with Gaussian-like behavior at long range. Based on this, we develop the X3LYP (extended hybrid functional combined with Lee-Yang-Parr correlation functional) extended functional for density functional theory to significantly improve the accuracy for hydrogen-bonded and van der Waals complexes while also improving the accuracy in heats of formation, ionization potentials, electron affinities, and total atomic energies [over the most popular and accurate method, B3LYP (Becke three-parameter hybrid functional combined with Lee-Yang-Parr correlation functional)]. X3LYP also leads to a good description of dipole moments, polarizabilities, and accurate excitation energies from s to d orbitals for transition metal atoms and ions. We suggest that X3LYP will be useful for predicting ligand binding in proteins and DNA.

  2. From The Cover: The X3LYP extended density functional for accurate descriptions of nonbond interactions, spin states, and thermochemical properties

    NASA Astrophysics Data System (ADS)

    Xu, Xin; Goddard, William A., III

    2004-03-01

    We derive the form for an exact exchange energy density for a density decaying with Gaussian-like behavior at long range. Based on this, we develop the X3LYP (extended hybrid functional combined with Lee-Yang-Parr correlation functional) extended functional for density functional theory to significantly improve the accuracy for hydrogen-bonded and van der Waals complexes while also improving the accuracy in heats of formation, ionization potentials, electron affinities, and total atomic energies [over the most popular and accurate method, B3LYP (Becke three-parameter hybrid functional combined with Lee-Yang-Parr correlation functional)]. X3LYP also leads to a good description of dipole moments, polarizabilities, and accurate excitation energies from s to d orbitals for transition metal atoms and ions. We suggest that X3LYP will be useful for predicting ligand binding in proteins and DNA.

  3. Distinguishing Features and Similarities Between Descriptive Phenomenological and Qualitative Description Research.

    PubMed

    Willis, Danny G; Sullivan-Bolyai, Susan; Knafl, Kathleen; Cohen, Marlene Z

    2016-09-01

    Scholars who research phenomena of concern to the discipline of nursing are challenged with making wise choices about different qualitative research approaches. Ultimately, they want to choose an approach that is best suited to answer their research questions. Such choices are predicated on having made distinctions between qualitative methodology, methods, and analytic frames. In this article, we distinguish two qualitative research approaches widely used for descriptive studies: descriptive phenomenological and qualitative description. Providing a clear basis that highlights the distinguishing features and similarities between descriptive phenomenological and qualitative description research will help students and researchers make more informed choices in deciding upon the most appropriate methodology in qualitative research. We orient the reader to distinguishing features and similarities associated with each approach and the kinds of research questions descriptive phenomenological and qualitative description research address. © The Author(s) 2016.

  4. The need for accurate total cholesterol measurement. Recommended analytical goals, current state of reliability, and guidelines for better determinations.

    PubMed

    Naito, H K

    1989-03-01

    We have approached the dawn of a new era in the detection, evaluation, treatment, and monitoring of individuals with elevated blood cholesterol levels who are at increased risk for CHD. The NHLBI's National Cholesterol Education Program will be the major force underlying this national awareness program, which is dependent on clinical laboratories providing reliable data. Precision, or reproducibility of results, is not a problem for most laboratories, but accuracy is a major concern. Both manufacturers and laboratorians need to standardize the measurement of cholesterol so that the accuracy base is traceable to the NCCLS NRS/CHOL. Manufacturers need to adopt a uniform policy that ensures that the values assigned to calibration, quality control, and quality assurance or survey materials are accurate and traceable to the NCCLS NRS/CHOL. Since, at present, these materials have some limitations caused by matrix effects, laboratories are encouraged to use the CDC-NHLBI National Reference Laboratory Network to evaluate and monitor their ability to measure patient blood cholesterol levels accurately. Major areas of analytical problems are identified, and general as well as specific recommendations are provided to help ensure reliable measurement of cholesterol in patient specimens.

  5. Analytical Description of Ascending Motion of Rockets in the Atmosphere

    ERIC Educational Resources Information Center

    Rodrigues, H.; de Pinho, M. O.; Portes, D., Jr.; Santiago, A.

    2009-01-01

    In continuation of a previous work, we present an analytic study of ascending vertical motion of a rocket subjected to a quadratic drag for the case where the mass-variation law is a linear function of time. We discuss the detailed analytical solution of the model differential equations in closed form. Examples of application are presented and…

  6. Robust Accurate Non-Invasive Analyte Monitor

    DOEpatents

    Robinson, Mark R.

    1998-11-03

    An improved method and apparatus for determining noninvasively and in vivo one or more unknown values of a known characteristic, particularly the concentration of an analyte in human tissue. The method includes: (1) irradiating the tissue with infrared energy (400 nm-2400 nm) having at least several wavelengths in a given range of wavelengths so that there is differential absorption of at least some of the wavelengths by the tissue as a function of the wavelengths and the known characteristic, the differential absorption causing intensity variations of the wavelengths incident from the tissue; (2) providing a first path through the tissue; (3) optimizing the first path for a first sub-region of the range of wavelengths to maximize the differential absorption by at least some of the wavelengths in the first sub-region; (4) providing a second path through the tissue; and (5) optimizing the second path for a second sub-region of the range, to maximize the differential absorption by at least some of the wavelengths in the second sub-region. In the preferred embodiment a third path through the tissue is provided for, which path is optimized for a third sub-region of the range. With this arrangement, spectral variations which are the result of tissue differences (e.g., melanin and temperature) can be reduced. At least one of the paths represents a partial transmission path through the tissue. This partial transmission path may pass through the nail of a finger once and, preferably, twice. Also included are apparatus for: (1) reducing the arterial pulsations within the tissue; and (2) maximizing the blood content in the tissue.

  7. Realistic Analytical Polyhedral MRI Phantoms

    PubMed Central

    Ngo, Tri M.; Fung, George S. K.; Han, Shuo; Chen, Min; Prince, Jerry L.; Tsui, Benjamin M. W.; McVeigh, Elliot R.; Herzka, Daniel A.

    2015-01-01

    Purpose: Analytical phantoms have closed-form Fourier transform expressions and are used to simulate MRI acquisitions. Existing 3D analytical phantoms are unable to accurately model shapes of biomedical interest. It is demonstrated that polyhedral analytical phantoms have closed-form Fourier transform expressions and can accurately represent 3D biomedical shapes. Theory: The derivations of the Fourier transform of a polygon and of a polyhedron are presented. Methods: The Fourier transform of a polyhedron was implemented and its accuracy in representing faceted and smooth surfaces was characterized. Realistic anthropomorphic polyhedral brain and torso phantoms were constructed and their use in simulated 3D/2D MRI acquisitions was described. Results: Using polyhedra, the Fourier transform of faceted shapes can be computed to within machine precision. Smooth surfaces can be approximated with increasing accuracy by increasing the number of facets in the polyhedron; the additional accumulated numerical imprecision of the Fourier transform of polyhedra with many faces remained small. Simulations of 3D/2D brain and 2D torso cine acquisitions produced realistic reconstructions free of high-frequency edge aliasing as compared to equivalent voxelized/rasterized phantoms. Conclusion: Analytical polyhedral phantoms are easy to construct and can accurately simulate shapes of biomedical interest. PMID:26479724

  8. Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.

    ERIC Educational Resources Information Center

    Borman, Stuart A.

    1982-01-01

    Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…

  9. Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.

    PubMed

    Stolper, Charles D; Perer, Adam; Gotz, David

    2014-12-01

    As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
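
    As an illustration of the paradigm only (not the Progressive Insights implementation), a progressive analytic can be written as a generator that emits a meaningful partial result after each chunk of work, so a visualization can refresh and the analyst can steer or stop; the Python below uses hypothetical names:

      from typing import Iterable, Iterator

      def progressive_mean(stream: Iterable[float], chunk_size: int = 1000) -> Iterator[dict]:
          """Toy progressive analytic: a running mean that yields a partial
          result after every chunk instead of only a final answer."""
          total, count, chunk = 0.0, 0, []
          for x in stream:
              chunk.append(x)
              if len(chunk) == chunk_size:
                  total += sum(chunk)
                  count += len(chunk)
                  chunk.clear()
                  yield {"n_seen": count, "partial_mean": total / count}
          if chunk:
              total += sum(chunk)
              count += len(chunk)
              yield {"n_seen": count, "partial_mean": total / count}

      # A UI loop consumes partial results as they become available:
      # for partial in progressive_mean(data_stream):
      #     redraw(partial)                 # hypothetical rendering hook
      #     if analyst_requested_stop():    # hypothetical steering hook
      #         break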

  10. Finding accurate frontiers: A knowledge-intensive approach to relational learning

    NASA Technical Reports Server (NTRS)

    Pazzani, Michael; Brunk, Clifford

    1994-01-01

    An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

  11. Towards a more accurate microscopic description of the moving contact line problem - incorporating nonlocal effects through a statistical mechanics framework

    NASA Astrophysics Data System (ADS)

    Nold, Andreas; Goddard, Ben; Sibley, David; Kalliadasis, Serafim

    2014-03-01

    Multiscale effects play a predominant role in wetting phenomena such as the moving contact line. An accurate description is of paramount interest for a wide range of industrial applications, yet it is a matter of ongoing research, due to the difficulty of incorporating different physical effects in one model. Important small-scale phenomena are corrections to the attractive fluid-fluid and wall-fluid forces in inhomogeneous density distributions, which often previously have been accounted for by the disjoining pressure in an ad-hoc manner. We systematically derive a novel model for the description of a single-component liquid-vapor multiphase system which inherently incorporates these nonlocal effects. This derivation, which is inspired by statistical mechanics in the framework of colloidal density functional theory, is critically discussed with respect to its assumptions and restrictions. The model is then employed numerically to study a moving contact line of a liquid fluid displacing its vapor phase. We show how nonlocal physical effects are inherently incorporated by the model and describe how classical macroscopic results for the contact line motion are retrieved. We acknowledge financial support from ERC Advanced Grant No. 247031 and Imperial College through a DTG International Studentship.

  12. Fluorescence polarization immunoassays for rapid, accurate, and sensitive determination of mycotoxins

    USDA-ARS?s Scientific Manuscript database

    Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...

  13. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  14. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-01

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ɛ(r) is close to one everywhere inside the protein. The Gaussian widths σi of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σi. A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by
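
    For orientation, the Born result that the method generalizes is the classical solvation free energy of a point charge q in a spherical cavity of radius a surrounded by a continuum of dielectric constant ε (standard textbook form in Gaussian units, not the paper's generalized expression):

      % Born solvation free energy of an ion (Gaussian units)
      W_{\mathrm{Born}} = -\frac{q^{2}}{2a}\left(1 - \frac{1}{\varepsilon}\right)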

  15. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description.

    PubMed

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ(i) of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ(i). A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by

  16. Combining Graphical and Analytical Methods with Molecular Simulations To Analyze Time-Resolved FRET Measurements of Labeled Macromolecules Accurately

    PubMed Central

    2017-01-01

    Förster resonance energy transfer (FRET) measurements from a donor, D, to an acceptor, A, fluorophore are frequently used in vitro and in live cells to reveal information on the structure and dynamics of DA labeled macromolecules. Accurate descriptions of FRET measurements by molecular models are complicated because the fluorophores are usually coupled to the macromolecule via flexible long linkers allowing for diffusional exchange between multiple states with different fluorescence properties caused by distinct environmental quenching, dye mobilities, and variable DA distances. It is often assumed for the analysis of fluorescence intensity decays that DA distances and D quenching are uncorrelated (homogeneous quenching by FRET) and that the exchange between distinct fluorophore states is slow (quasistatic). This allows us to introduce the FRET-induced donor decay, εD(t), a function solely depending on the species fraction distribution of the rate constants of energy transfer by FRET, for a convenient joint analysis of fluorescence decays of FRET and reference samples by integrated graphical and analytical procedures. Additionally, we developed a simulation toolkit to model dye diffusion, fluorescence quenching by the protein surface, and FRET. A benchmark study with simulated fluorescence decays of 500 protein structures demonstrates that the quasistatic homogeneous model works very well and recovers for single conformations the average DA distances with an accuracy of < 2%. For more complex cases, where proteins adopt multiple conformations with significantly different dye environments (heterogeneous case), we introduce a general analysis framework and evaluate its power in resolving heterogeneities in DA distances. The developed fast simulation methods, relying on Brownian dynamics of a coarse-grained dye in its sterically accessible volume, allow us to incorporate structural information in the decay analysis for heterogeneous cases by relating dye states
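
    For orientation, the rate constants of energy transfer that enter the species-fraction description have the standard Förster form, shown here in LaTeX (textbook expressions; the paper's specific εD(t) definition is not reproduced):

      % F\"orster rate constant and transfer efficiency for a DA pair at distance R_{DA},
      % with donor lifetime \tau_D and F\"orster radius R_0
      k_{\mathrm{FRET}} = \frac{1}{\tau_D}\left(\frac{R_0}{R_{DA}}\right)^{6},
      \qquad
      E = \frac{k_{\mathrm{FRET}}}{k_{\mathrm{FRET}} + 1/\tau_D} = \frac{1}{1 + (R_{DA}/R_0)^{6}}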

  17. Visual analytics of brain networks.

    PubMed

    Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming

    2012-05-15

    Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Effective description of domain wall strings

    NASA Astrophysics Data System (ADS)

    Rodrigues, Davi R.; Abanov, Ar.; Sinova, J.; Everschor-Sitte, K.

    2018-04-01

    The analysis of domain wall dynamics is often simplified to one-dimensional physics. For domain walls in thin films, more realistic approaches require the description as two-dimensional objects. This includes the study of vortices and curvatures along the domain walls as well as the influence of boundary effects. Here we provide a theory in terms of soft modes that allows us to analytically study the physics of extended domain walls and their stability. By considering irregularly shaped skyrmions as closed domain walls, we analyze their plasticity and compare their dynamics with those of circular skyrmions. Our theory directly provides an analytical description of the excitation modes of magnetic skyrmions, previously accessible only through sophisticated micromagnetic numerical calculations and spectral analysis. These analytical expressions provide the scaling behavior of the different physics on parameters that experiments can test.

  19. Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods

    NASA Astrophysics Data System (ADS)

    Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii

    2016-10-01

    Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods for remotely evaluating the internal structure of a celestial body without resorting to expensive space experiments. The report reviews the results obtained from studies of the physical libration. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on methods of simulating the physical libration. As a result, the viscoelastic and dissipative properties of the lunar body and the parameters of the lunar core were estimated. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and to methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration available to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of the two approaches in libration theory, the numerical and the analytical solution. It is shown that the two approaches complement each other in studying the Moon in different aspects: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytic approach makes it possible to see the essence of the various manifestations in the lunar rotation and to predict and interpret new effects in observations of the physical libration [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100. [2] Petrova N., A. Zagidullin, Yu. Nefediev. Analysis of long-periodic variations of lunar libration parameters on the basis of

  20. Teaching Analytical Thinking

    ERIC Educational Resources Information Center

    Behn, Robert D.; Vaupel, James W.

    1976-01-01

    Description of the philosophy and general nature of a course at Drake University that emphasizes basic concepts of analytical thinking, including think, decompose, simplify, specify, and rethink problems. Some sample homework exercises are included. The journal is available from University of California Press, Berkeley, California 94720.…

  1. Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Soyer, Emre

    2011-01-01

    Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…

  2. Soft Biometrics; Human Identification Using Comparative Descriptions.

    PubMed

    Reid, Daniel A; Nixon, Mark S; Stevenage, Sarah V

    2014-06-01

    Soft biometrics are a new form of biometric identification which uses physical or behavioral traits that can be naturally described by humans. Unlike other biometric approaches, this allows identification based solely on verbal descriptions, bridging the semantic gap between biometrics and human description. To permit soft biometric identification the description must be accurate, yet conventional human descriptions comprising absolute labels and estimations are often unreliable. A novel method of obtaining human descriptions is introduced which utilizes comparative categorical labels to describe differences between subjects. This approach has been shown to address many problems associated with absolute categorical labels; most critically, the descriptions contain more objective information and have increased discriminatory capabilities. Relative measurements of the subjects' traits can be inferred from comparative human descriptions using the Elo rating system. The resulting soft biometric signatures have been demonstrated to be robust and to allow accurate recognition of subjects. Relative measurements can also be obtained from other forms of human representation. This is demonstrated using a support vector machine to determine relative measurements from gait biometric signatures, allowing retrieval of subjects from video footage by means of human comparisons and bridging the semantic gap.
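
    A short Python sketch of the rating idea (a standard Elo update applied to pairwise trait comparisons; parameter values and subject names are illustrative, and the paper's exact scheme is not reproduced):

      def elo_update(rating_a, rating_b, outcome_a, k_factor=32.0, scale=400.0):
          """One Elo update for a single comparative judgement.
          outcome_a is 1.0 if subject A was judged greater for the trait,
          0.0 if subject B was, and 0.5 for 'about the same'."""
          expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / scale))
          new_a = rating_a + k_factor * (outcome_a - expected_a)
          new_b = rating_b + k_factor * ((1.0 - outcome_a) - (1.0 - expected_a))
          return new_a, new_b

      # Fold a list of comparative annotations into relative trait measurements.
      ratings = {"subjA": 1500.0, "subjB": 1500.0, "subjC": 1500.0}
      comparisons = [("subjA", "subjB", 1.0), ("subjB", "subjC", 0.5), ("subjA", "subjC", 1.0)]
      for a, b, outcome in comparisons:
          ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], outcome)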

  3. Can Raters with Reduced Job Descriptive Information Provide Accurate Position Analysis Questionnaire (PAQ) Ratings?

    ERIC Educational Resources Information Center

    Friedman, Lee; Harvey, Robert J.

    1986-01-01

    Job-naive raters provided with job descriptive information made Position Analysis Questionnaire (PAQ) ratings which were validated against ratings of job analysts who were also job content experts. None of the reduced job descriptive information conditions enabled job-naive raters to obtain either acceptable levels of convergent validity with…

  4. Analytical Chemistry: A Literary Approach.

    ERIC Educational Resources Information Center

    Lucy, Charles A.

    2000-01-01

    Provides an anthology of references to descriptions of analytical chemistry techniques from history, popular fiction, and film which can be used to capture student interest and frame discussions of chemical techniques. (WRM)

  5. Analytical performance specifications for external quality assessment - definitions and descriptions.

    PubMed

    Jones, Graham R D; Albarede, Stephanie; Kesseler, Dagmar; MacKenzie, Finlay; Mammen, Joy; Pedersen, Morten; Stavelin, Anne; Thelen, Marc; Thomas, Annette; Twomey, Patrick J; Ventura, Emma; Panteghini, Mauro

    2017-06-27

    External Quality Assurance (EQA) is vital to ensure acceptable analytical quality in medical laboratories. A key component of an EQA scheme is an analytical performance specification (APS) for each measurand that a laboratory can use to assess the extent of deviation of the obtained results from the target value. A consensus conference held in Milan in 2014 has proposed three models to set APS and these can be applied to setting APS for EQA. A goal arising from this conference is the harmonisation of EQA APS between different schemes to deliver consistent quality messages to laboratories irrespective of location and the choice of EQA provider. At this time there are wide differences in the APS used in different EQA schemes for the same measurands. Contributing factors to this variation are that the APS in different schemes are established using different criteria, applied to different types of data (e.g. single data points, multiple data points), used for different goals (e.g. improvement of analytical quality; licensing), and with the aim of eliciting different responses from participants. This paper provides recommendations from the European Federation of Laboratory Medicine (EFLM) Task and Finish Group on Performance Specifications for External Quality Assurance Schemes (TFG-APSEQA) and on clear terminology for EQA APS. The recommended terminology covers six elements required to understand APS: 1) a statement on the EQA material matrix and its commutability; 2) the method used to assign the target value; 3) the data set to which APS are applied; 4) the applicable analytical property being assessed (i.e. total error, bias, imprecision, uncertainty); 5) the rationale for the selection of the APS; and 6) the type of the Milan model(s) used to set the APS. The terminology is required for EQA participants and other interested parties to understand the meaning of meeting or not meeting APS.

  6. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has become one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually contain two parts, camera calibration and geometric calibration. In geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the derivation of the FRT analytical phase-slope description, we present a novel calibration method that does not require calibrating the position of the LCD screen. Furthermore, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a sphere mirror with a 5000 mm radius, the proposed calibration method achieves 2.5 times smaller measurement error than the geometric calibration method. In the wafer surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  7. Description of the HiMAT Tailored composite structure and laboratory measured vehicle shape under load

    NASA Technical Reports Server (NTRS)

    Monaghan, R. C.

    1981-01-01

    The aeroelastically tailored outer wing and canard of the highly maneuverable aircraft technology (HiMAT) vehicle are closely examined and a general description of the overall structure of the vehicle is provided. Test data in the form of laboratory measured twist under load and predicted twist from the HiMAT NASTRAN structural design program are compared. The results of this comparison indicate that the measured twist is generally less than the NASTRAN predicted twist. These discrepancies in twist predictions are attributed, at least in part, to the inability of current analytical composite materials programs to provide sufficiently accurate properties of matrix dominated laminates for input into structural programs such as NASTRAN.

  8. Insight solutions are correct more often than analytic solutions

    PubMed Central

    Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark

    2016-01-01

    How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960

  9. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.

  10. An analytic description of electrodynamic dispersion in free-flow zone electrophoresis.

    PubMed

    Dutta, Debashis

    2015-07-24

    The present work analyzes the electrodynamic dispersion of sample streams in a free-flow zone electrophoresis (FFZE) chamber resulting due to partial or complete blockage of electroosmotic flow (EOF) across the channel width by the sidewalls of the conduit. This blockage of EOF has been assumed to generate a pressure-driven backflow in the transverse direction for maintaining flow balance in the system. A parallel-plate based FFZE device with the analyte stream located far away from the channel side regions has been considered to simplify the current analysis. Applying a method-of-moments formulation, an analytic expression was derived for the variance of the sample zone at steady state as a function of its position in the separation chamber under these conditions. It has been shown that the increase in stream broadening due to the electrodynamic dispersion phenomenon is additive to the contributions from molecular diffusion and sample injection, and simply modifies the coefficient for the hydrodynamic dispersion term for a fixed lateral migration distance of the sample stream. Moreover, this dispersion mechanism can dominate the overall spatial variance of analyte zones when a significant fraction of the EOF is blocked by the channel sidewalls. The analysis also shows that analyte streams do not undergo any hydrodynamic broadening due to unwanted pressure-driven cross-flows in an FFZE chamber in the absence of a transverse electric field. The noted results have been validated using Monte Carlo simulations which further demonstrate that while the sample concentration profile at the channel outlet approaches a Gaussian distribution only in FFZE chambers substantially longer than the product of the axial pressure-driven velocity and the characteristic diffusion time in the system, the spatial variance of the exiting analyte stream is well described by the Taylor-Aris dispersion limit even in analysis ducts much shorter than this length scale. Copyright © 2015

  11. Arbitrarily accurate twin composite π-pulse sequences

    NASA Astrophysics Data System (ADS)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

    We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform as well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.
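
    A small Python check of the error-compensation idea (generic two-level rotation algebra; the phases used below, (0, 2π/3, 0), are a standard broadband choice for three equal π-pulses and are not necessarily the sequences derived in the paper):

      import numpy as np

      def pulse(area, phase):
          """Propagator of a resonant pulse of given area about an axis in the
          xy-plane set by 'phase'."""
          c, s = np.cos(area / 2.0), np.sin(area / 2.0)
          return np.array([[c, -1j * s * np.exp(-1j * phase)],
                           [-1j * s * np.exp(1j * phase), c]])

      def transition_probability(areas, phases):
          """|<2|U_N ... U_1|1>|^2 for a composite sequence."""
          u = np.eye(2, dtype=complex)
          for a, p in zip(areas, phases):
              u = pulse(a, p) @ u
          return float(np.abs(u[1, 0]) ** 2)

      area = np.pi * 1.1                                                    # 10% pulse-area error
      print(transition_probability([area], [0.0]))                          # single pulse
      print(transition_probability([area] * 3, [0.0, 2 * np.pi / 3, 0.0]))  # composite sequence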

  12. Analytic description of the frictionally engaged in-plane bending process incremental swivel bending (ISB)

    NASA Astrophysics Data System (ADS)

    Frohn, Peter; Engel, Bernd; Groth, Sebastian

    2018-05-01

    Kinematic forming processes shape geometries through the process parameters in order to achieve more universal process utilization across geometric configurations. The kinematic forming process Incremental Swivel Bending (ISB) bends sheet metal strips or profiles in plane. The sequence for bending an arc increment is composed of the steps clamping, bending, force release and feed. The bending moment is frictionally engaged by two clamping units in a laterally adjustable bending pivot. A minimum clamping force that hinders the material from slipping through the clamping units is a crucial criterion for achieving a well-defined incremental arc. Therefore, an analytic description of a single bent increment is developed in this paper. The bending moment is calculated from the uniaxial stress distribution over the profile's width, depending on the bending pivot's position. Using a Coulomb-based friction model, the necessary clamping force is described as a function of friction, offset, the dimensions of the clamping tools and the strip thickness, as well as material parameters. Boundaries for the uniaxial stress calculation are given as functions of friction, tool dimensions and strip thickness. The results indicate that shifting the bending pivot to an eccentric position significantly affects the process's bending moment and, hence, the clamping force, which is given as a function of yield stress and hardening exponent. FE simulations validate the model with satisfactory agreement.

  13. Accurate adiabatic singlet-triplet gaps in atoms and molecules employing the third-order spin-flip algebraic diagrammatic construction scheme for the polarization propagator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lefrancois, Daniel; Dreuw, Andreas, E-mail: dreuw@uni-heidelberg.de; Rehn, Dirk R.

    For the calculation of adiabatic singlet-triplet gaps (STGs) in diradicaloid systems, the spin-flip (SF) variant of the algebraic diagrammatic construction (ADC) scheme for the polarization propagator in third-order perturbation theory (SF-ADC(3)) has been applied. Due to the methodology of the SF approach, the singlet and triplet states are treated on an equal footing since they are part of the same determinant subspace. This leads to a systematically more accurate description of, e.g., diradicaloid systems than with the corresponding non-SF single-reference methods. Furthermore, using analytical excited-state gradients at the ADC(3) level, geometry optimizations of the singlet and triplet states were performed, leading to a fully consistent description of the systems, with only small errors in the calculated STGs ranging between 0.6 and 2.4 kcal/mol with respect to experimental references.

  14. The Analytical Limits of Modeling Short Diffusion Timescales

    NASA Astrophysics Data System (ADS)

    Bradshaw, R. W.; Kent, A. J.

    2016-12-01

    Chemical and isotopic zoning in minerals is widely used to constrain the timescales of magmatic processes such as magma mixing and crystal residence via diffusion modeling. Forward modeling of diffusion relies on fitting diffusion profiles to measured compositional gradients. However, an individual measurement is essentially an average composition for a segment of the gradient defined by the spatial resolution of the analysis. Thus there is the potential for the analytical spatial resolution to limit the timescales that can be determined for an element of given diffusivity, particularly where the scale of the gradient approaches that of the measurement. Here we use a probabilistic modeling approach to investigate the effect of analytical spatial resolution on estimated timescales from diffusion modeling. Our method investigates how accurately the age of a synthetic diffusion profile can be obtained by modeling an "unknown" profile derived from discrete sampling of the synthetic compositional gradient at a given spatial resolution. We also include the effects of analytical uncertainty and the position of measurements relative to the diffusion gradient. We apply this method to the spatial resolutions of common microanalytical techniques (LA-ICP-MS, SIMS, EMP, NanoSIMS). Our results confirm that for a given diffusivity, higher spatial resolution gives access to shorter timescales, and that each analytical spacing has a minimum timescale, below which it overestimates the timescale. For example, for Ba diffusion in plagioclase at 750 °C, timescales are accurate (within 20%) above 10, 100, 2,600, and 71,000 years at 0.3, 1, 5, and 25 µm spatial resolution, respectively. For Sr diffusion in plagioclase at 750 °C, timescales are accurate above 0.02, 0.2, 4, and 120 years at the same spatial resolutions. Our results highlight the importance of selecting appropriate analytical techniques to estimate accurate diffusion-based timescales.
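
    A simplified deterministic illustration of the averaging effect described above (not the authors' probabilistic method; the profile shape, spot size, and diffusion length are hypothetical), written in Python:

      import numpy as np
      from scipy.special import erf
      from scipy.optimize import curve_fit

      def profile(x, diff_length):
          """1D diffusion profile between two plateaus (0 and 1)."""
          return 0.5 * (1.0 + erf(x / (2.0 * diff_length)))

      def measure(x_centers, diff_length, spot):
          """Average the true profile over a finite analytical spot size."""
          offsets = np.linspace(-spot / 2.0, spot / 2.0, 51)
          return np.array([profile(xc + offsets, diff_length).mean() for xc in x_centers])

      true_length, spot = 2.0, 5.0                  # micrometres, illustrative values
      x = np.arange(-30.0, 30.0 + 1e-9, spot)       # one measurement per spot width
      data = measure(x, true_length, spot)

      # A naive fit that ignores the spot averaging recovers a broader gradient,
      # i.e. an overestimated diffusion length and hence an overestimated timescale.
      popt, _ = curve_fit(profile, x, data, p0=[1.0])
      print(f"true diffusion length: {true_length:.2f} um, apparent: {popt[0]:.2f} um")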

  15. Compact and Hybrid Feature Description for Building Extraction

    NASA Astrophysics Data System (ADS)

    Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.

    2017-05-01

    Building extraction in aerial orthophotos is crucial for various applications. Currently, deep learning has been shown to be successful in addressing building extraction with high accuracy and high robustness. However, quite a large number of samples is required to train a classifier when using a deep learning model. In order to realize accurate and semi-interactive labelling, the performance of the feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we bring forward a compact and hybrid feature description method in order to guarantee desirable classification accuracy for the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from the binary description and making full use of the color channels, this descriptor is not only computationally frugal but also more accurate than SURF for building extraction.

  16. A novel analytical description of periodic volume coil geometries in MRI

    NASA Astrophysics Data System (ADS)

    Koh, D.; Felder, J.; Shah, N. J.

    2018-03-01

    MRI volume coils can be represented by equivalent lumped element circuits, and analytical design equations have been presented for a variety of these circuit configurations. The unification of several volume coil topologies results in a two-dimensional gridded equivalent lumped element circuit which comprises not only the birdcage resonator and its multiple-endring derivatives but also novel structures such as the capacitively coupled ring resonator. The theory section analyzes a general two-dimensional circuit by noting that its current distribution can be decomposed into a longitudinal and an azimuthal dependency. This can be exploited to compare the current distribution with the transfer function of a filter circuit along one direction. The resonances of the transfer function coincide with the resonances of the volume resonator, and the simple analytical solution can be used as a design equation. The proposed framework is verified experimentally against a novel capacitively coupled ring structure which was derived from the general circuit formulation and is proven to exhibit a dominant homogeneous mode. In conclusion, a unified analytical framework is presented that allows the resonance frequency to be determined for any volume resonator that can be represented by a two-dimensional meshed equivalent circuit.

  17. Addressing the Analytic Challenges of Cross-Sectional Pediatric Pneumonia Etiology Data

    PubMed Central

    Feikin, Daniel R.; Scott, J. Anthony G.; Zeger, Scott L.; Murdoch, David R.; O’Brien, Katherine L.; Deloria Knoll, Maria

    2017-01-01

    Despite tremendous advances in diagnostic laboratory technology, identifying the pathogen(s) causing pneumonia remains challenging because the infected lung tissue cannot usually be sampled for testing. Consequently, to obtain information about pneumonia etiology, clinicians and researchers test specimens distant to the site of infection. These tests may lack sensitivity (eg, blood culture, which is only positive in a small proportion of children with pneumonia) and/or specificity (eg, detection of pathogens in upper respiratory tract specimens, which may indicate asymptomatic carriage or a less severe syndrome, such as upper respiratory infection). While highly sensitive nucleic acid detection methods and testing of multiple specimens improve sensitivity, multiple pathogens are often detected and this adds complexity to the interpretation as the etiologic significance of results may be unclear (ie, the pneumonia may be caused by none, one, some, or all of the pathogens detected). Some of these challenges can be addressed by adjusting positivity rates to account for poor sensitivity or incorporating test results from controls without pneumonia to account for poor specificity. However, no classical analytic methods can account for measurement error (ie, sensitivity and specificity) for multiple specimen types and integrate the results of measurements for multiple pathogens to produce an accurate understanding of etiology. We describe the major analytic challenges in determining pneumonia etiology and review how the common analytical approaches (eg, descriptive, case-control, attributable fraction, latent class analysis) address some but not all challenges. We demonstrate how these limitations necessitate a new, integrated analytical approach to pneumonia etiology data. PMID:28575372

  18. An accurate behavioral model for single-photon avalanche diode statistical performance simulation

    NASA Astrophysics Data System (ADS)

    Xu, Yue; Zhao, Tingchen; Li, Ding

    2018-01-01

    An accurate behavioral model is presented to simulate important statistical performance of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and this behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high simulation accuracy.
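
    The trap-mediated after-pulsing mechanism can be illustrated with a toy Monte Carlo in Python (this is not the paper's Verilog-A model; all rates and probabilities below are placeholders):

      import random

      def simulate_counts(t_window=1e-3, dark_rate=1e3, trap_prob=0.02,
                          detrap_tau=1e-6, dead_time=50e-9, seed=1):
          """Toy SPAD dark-count simulation with after-pulsing: each avalanche
          may fill a trap with probability trap_prob; the trapped carrier is
          released after an exponential delay (mean detrap_tau) and retriggers
          an avalanche if it arrives after the dead time."""
          rng = random.Random(seed)
          events, t = [], 0.0
          while True:                                   # primary dark counts (Poisson process)
              t += rng.expovariate(dark_rate)
              if t > t_window:
                  break
              events.append(t)
          i = 0
          while i < len(events):                        # secondary (after-pulse) events
              if rng.random() < trap_prob:
                  release = events[i] + rng.expovariate(1.0 / detrap_tau)
                  if release - events[i] > dead_time and release < t_window:
                      events.append(release)            # after-pulses may themselves cascade
              i += 1
          return sorted(events)

      print(f"{len(simulate_counts())} events in 1 ms (primaries plus after-pulses)")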

  19. Analytic descriptions of stochastic bistable systems under force ramp

    DOE PAGES

    Friddle, Raymond W.

    2016-05-13

    Solving the two-state master equation with time-dependent rates, the ubiquitous driven bistable system, is a long-standing problem that does not permit a complete solution for all driving rates. We show an accurate approximation to this problem by considering the system in the control parameter regime. Moreover, the results are immediately applicable to a diverse range of bistable systems including single-molecule mechanics.
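
    For reference, the driven two-state master equation in question has the standard LaTeX form (notation illustrative; the paper's force-ramp rate laws are not reproduced):

      % Two-state master equation with time-dependent (force-ramp driven) rates
      \frac{dp_1}{dt} = -k_{12}(t)\,p_1 + k_{21}(t)\,p_2,
      \qquad p_1 + p_2 = 1
      \;\Longrightarrow\;
      \frac{dp_1}{dt} = -\bigl[k_{12}(t)+k_{21}(t)\bigr]\,p_1 + k_{21}(t)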

  20. Analytical modelling of temperature effects on an AMPA-type synapse.

    PubMed

    Kufel, Dominik S; Wojcik, Grzegorz M

    2018-05-11

    It was previously reported that temperature may significantly influence neural dynamics at different levels of brain function. Thus, in computational neuroscience, it would be useful to make models scalable over a wide range of brain temperatures. However, the lack of experimental data and the absence of temperature-dependent analytical models of synaptic conductance do not allow temperature effects to be included at the multi-neuron modeling level. In this paper, we propose a first step toward dealing with this problem: a new analytical model of AMPA-type synaptic conductance, which is able to incorporate temperature effects in low-frequency stimulations. It was constructed based on a Markov model description of AMPA receptor kinetics using a set of coupled ODEs. The closed-form solution for the set of differential equations was found using an uncoupling assumption (introduced in the paper) with a few simplifications motivated both by experimental data and by Monte Carlo simulation of synaptic transmission. The model may be used for computationally efficient and biologically accurate implementation of temperature effects on AMPA receptor conductance in large-scale neural network simulations. As a result, it may open a wide range of new possibilities for researching the influence of temperature on certain aspects of brain functioning.
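
    A schematic Python illustration of the modelling approach (a generic three-state kinetic scheme with Q10 temperature scaling; this is neither the receptor scheme nor the closed-form solution of the paper, and all rate values are placeholders):

      import numpy as np
      from scipy.integrate import solve_ivp

      def ampa_like_kinetics(t, y, glu, temp_c, q10=2.5, t_ref=36.0):
          """Toy scheme C <-> O -> D with all rates scaled by a Q10 factor."""
          scale = q10 ** ((temp_c - t_ref) / 10.0)
          kon, koff, kdes = 5.0 * glu(t) * scale, 1.0 * scale, 0.5 * scale  # 1/ms, illustrative
          c, o, d = y
          return [-kon * c + koff * o,
                  kon * c - (koff + kdes) * o,
                  kdes * o]

      glu_pulse = lambda t: 1.0 if t < 1.0 else 0.0     # 1 ms transmitter pulse (arbitrary units)
      for temp in (30.0, 36.0):
          sol = solve_ivp(ampa_like_kinetics, (0.0, 10.0), [1.0, 0.0, 0.0],
                          args=(glu_pulse, temp), max_step=0.05)
          print(f"T = {temp:.0f} C, peak open fraction = {sol.y[1].max():.3f}")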

  1. A semi-analytical description of protein folding that incorporates detailed geometrical information

    PubMed Central

    Suzuki, Yoko; Noel, Jeffrey K.; Onuchic, José N.

    2011-01-01

    Much has been done to study the interplay between geometric and energetic effects on the protein folding energy landscape. Numerical techniques such as molecular dynamics simulations are able to maintain a precise geometrical representation of the protein. Analytical approaches, however, often focus on the energetic aspects of folding, including geometrical information only in an average way. Here, we investigate a semi-analytical expression of folding that explicitly includes geometrical effects. We consider a Hamiltonian corresponding to a Gaussian filament with structure-based interactions. The model captures local features of protein folding often averaged over by mean-field theories, for example, loop contact formation and excluded volume. We explore the thermodynamics and folding mechanisms of beta-hairpin and alpha-helical structures as functions of temperature and Q, the fraction of native contacts formed. Excluded volume is shown to be an important component of a protein Hamiltonian, since it both dominates the cooperativity of the folding transition and alters folding mechanisms. Understanding geometrical effects in analytical formulae will help illuminate the consequences of the approximations required for the study of larger proteins. PMID:21721664

  2. Is Analytic Information Processing a Feature of Expertise in Medicine?

    ERIC Educational Resources Information Center

    McLaughlin, Kevin; Rikers, Remy M.; Schmidt, Henk G.

    2008-01-01

    Diagnosing begins by generating an initial diagnostic hypothesis by automatic information processing. Information processing may stop here if the hypothesis is accepted, or analytical processing may be used to refine the hypothesis. This description portrays analytic processing as an optional extra in information processing, leading us to…

  3. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
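
    A minimal Python illustration of the harmonic normal coordinate analysis step (generic mass-weighted Hessian diagonalization, not PyVCI's internal API; the Hessian is assumed to come from elsewhere):

      import numpy as np

      def mass_weighted_eigenvalues(hessian, masses):
          """Eigenvalues of the mass-weighted Cartesian Hessian.
          hessian: (3N, 3N) array of second derivatives; masses: (N,) array.
          Positive eigenvalues give harmonic frequencies omega = sqrt(lambda);
          six near-zero values correspond to translations and rotations."""
          m3 = np.repeat(np.asarray(masses, dtype=float), 3)
          mw = np.asarray(hessian, dtype=float) / np.sqrt(np.outer(m3, m3))
          return np.linalg.eigvalsh(mw)

      # Hypothetical usage with a Hessian H and masses m exported from a
      # quantum chemistry package (all quantities in consistent units):
      # omegas = np.sqrt(np.clip(mass_weighted_eigenvalues(H, m), 0.0, None))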

  4. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention

    PubMed Central

    Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-01-01

    Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology—group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders’ observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of ‘common ground’ among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders’ verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve ‘common ground’ among diverse stakeholders about health data and their implications. PMID:28895928

  5. Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention.

    PubMed

    Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian

    2017-09-12

    Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology, group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.

  6. Faculty Forum: The GRE Analytical Writing Test-- Description and Utilization

    ERIC Educational Resources Information Center

    Briihl, Deborah S.; Wasieleski, David T.

    2007-01-01

    The authors surveyed graduate programs to see how they use the Graduate Record Examination Analytic Writing (GRE-AW) Test. Only 35% of the graduate programs that responded use the GRE-AW test in their admission policy; of the programs not using it, most do not plan to do so. The programs using the GRE-AW rated it as medium or low in importance in…

  7. Addressing the Analytic Challenges of Cross-Sectional Pediatric Pneumonia Etiology Data.

    PubMed

    Hammitt, Laura L; Feikin, Daniel R; Scott, J Anthony G; Zeger, Scott L; Murdoch, David R; O'Brien, Katherine L; Deloria Knoll, Maria

    2017-06-15

    Despite tremendous advances in diagnostic laboratory technology, identifying the pathogen(s) causing pneumonia remains challenging because the infected lung tissue cannot usually be sampled for testing. Consequently, to obtain information about pneumonia etiology, clinicians and researchers test specimens distant to the site of infection. These tests may lack sensitivity (eg, blood culture, which is only positive in a small proportion of children with pneumonia) and/or specificity (eg, detection of pathogens in upper respiratory tract specimens, which may indicate asymptomatic carriage or a less severe syndrome, such as upper respiratory infection). While highly sensitive nucleic acid detection methods and testing of multiple specimens improve sensitivity, multiple pathogens are often detected and this adds complexity to the interpretation as the etiologic significance of results may be unclear (ie, the pneumonia may be caused by none, one, some, or all of the pathogens detected). Some of these challenges can be addressed by adjusting positivity rates to account for poor sensitivity or incorporating test results from controls without pneumonia to account for poor specificity. However, no classical analytic methods can account for measurement error (ie, sensitivity and specificity) for multiple specimen types and integrate the results of measurements for multiple pathogens to produce an accurate understanding of etiology. We describe the major analytic challenges in determining pneumonia etiology and review how the common analytical approaches (eg, descriptive, case-control, attributable fraction, latent class analysis) address some but not all challenges. We demonstrate how these limitations necessitate a new, integrated analytical approach to pneumonia etiology data. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.

  8. Analytical description of the transverse Anderson localization of light

    NASA Astrophysics Data System (ADS)

    Schirmacher, Walter; Leonetti, Marco; Ruocco, Giancarlo

    2017-04-01

    We develop an analytical theory for describing the transverse localization properties of light beams in optical fibers with lateral disorder. This theory, which starts from the widely used paraxial approximation for the Helmholtz equation of the electric field, is a combination of an effective-medium theory for transverse disorder with the self-consistent localization theory of Vollhardt and Wölfle. We obtain explicit expressions for the dependence of the transverse localization length on the direction along the fiber. These results are in agreement with simulational data published recently by Karbasi et al. In particular we explain the focussing mechanism leading to the establishment of narrow transparent channels along the sample.

  9. Temporal Learning Analytics for Adaptive Assessment

    ERIC Educational Resources Information Center

    Papamitsiou, Zacharoula; Economides, Anastasios A.

    2014-01-01

    Accurate and early predictions of student performance could significantly affect interventions during teaching and assessment, which gradually could lead to improved learning outcomes. In our research, we seek to identify and formalize temporal parameters as predictors of performance ("temporal learning analytics" or TLA) and examine…

  10. Accurate mass and velocity functions of dark matter haloes

    NASA Astrophysics Data System (ADS)

    Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly

    2017-08-01

    N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10^11 M⊙, with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ~515 Gpc^3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias and the halo mass function. We obtain a very accurate (<2 per cent level) model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z < 2.3 to push for the development of halo occupation distributions using Vmax. The data and the analysis code are made publicly available in the Skies and Universes database.

  11. ICDA: A Platform for Intelligent Care Delivery Analytics

    PubMed Central

    Gotz, David; Stavropoulos, Harry; Sun, Jimeng; Wang, Fei

    2012-01-01

    The identification of high-risk patients is a critical component in improving patient outcomes and managing costs. This paper describes the Intelligent Care Delivery Analytics platform (ICDA), a system which enables risk assessment analytics that process large collections of dynamic electronic medical data to identify at-risk patients. ICDA works by ingesting large volumes of data into a common data model, then orchestrating a collection of analytics that identify at-risk patients. It also provides an interactive environment through which users can access and review the analytics results. In addition, ICDA provides APIs via which analytics results can be retrieved to surface in external applications. A detailed review of ICDA’s architecture is provided. Descriptions of four use cases are included to illustrate ICDA’s application within two different data environments. These use cases showcase the system’s flexibility and exemplify the types of analytics it enables. PMID:23304296

  12. A highly accurate analytical solution for the surface fields of a short vertical wire antenna lying on a multilayer ground

    NASA Astrophysics Data System (ADS)

    Parise, M.

    2018-01-01

    A highly accurate analytical solution is derived to the electromagnetic problem of a short vertical wire antenna located on a stratified ground. The derivation consists of three steps. First, the integration path of the integrals describing the fields of the dipole is deformed and wrapped around the pole singularities and the two vertical branch cuts of the integrands located in the upper half of the complex plane. This allows to decompose the radiated field into its three contributions, namely the above-surface ground wave, the lateral wave, and the trapped surface waves. Next, the square root terms responsible for the branch cuts are extracted from the integrands of the branch-cut integrals. Finally, the extracted square roots are replaced with their rational representations according to Newton's square root algorithm, and residue theorem is applied to give explicit expressions, in series form, for the fields. The rigorous integration procedure and the convergence of square root algorithm ensure that the obtained formulas converge to the exact solution. Numerical simulations are performed to show the validity and robustness of the developed formulation, as well as its advantages in terms of time cost over standard numerical integration procedures.

  13. Fast and accurate computation of projected two-point functions

    NASA Astrophysics Data System (ADS)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ,χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
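    For orientation, a brute-force version of one such projection integral is sketched below: direct quadrature of k^2 P(k) j_l(kr) with a toy power spectrum. Its cost and loss of accuracy at large r illustrate why FFTLog-based schemes such as 2-FAST are attractive; the power spectrum and cutoff used here are assumed placeholders, not part of the published code.

```python
# Naive projection of a toy power spectrum onto configuration space by direct
# quadrature of the oscillatory spherical Bessel integrand (illustrative only).
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def P(k):
    """Toy, smooth power spectrum used as a placeholder."""
    return k / (1.0 + (k / 0.1) ** 3)

def xi_ell(r, ell=0, kmax=10.0):
    """Direct quadrature of the integral of k^2 P(k) j_ell(k r) / (2 pi^2) over k.
    The integrand oscillates rapidly for large r, which is the regime that
    FFTLog-based methods handle efficiently."""
    integrand = lambda k: k ** 2 * P(k) * spherical_jn(ell, k * r)
    value, _ = quad(integrand, 0.0, kmax, limit=500)
    return value / (2.0 * np.pi ** 2)

print(xi_ell(10.0))  # becomes slow and increasingly inaccurate as r grows
```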

  14. Continuum description of ionic and dielectric shielding for molecular-dynamics simulations of proteins in solution

    NASA Astrophysics Data System (ADS)

    Egwolf, Bernhard; Tavan, Paul

    2004-01-01

    We extend our continuum description of solvent dielectrics in molecular-dynamics (MD) simulations [B. Egwolf and P. Tavan, J. Chem. Phys. 118, 2039 (2003)], which has provided an efficient and accurate solution of the Poisson equation, to ionic solvents as described by the linearized Poisson-Boltzmann (LPB) equation. We start with the formulation of a general theory for the electrostatics of an arbitrarily shaped molecular system, which consists of partially charged atoms and is embedded in a LPB continuum. This theory represents the reaction field induced by the continuum in terms of charge and dipole densities localized within the molecular system. Because these densities cannot be calculated analytically for systems of arbitrary shape, we introduce an atom-based discretization and a set of carefully designed approximations. This allows us to represent the densities by charges and dipoles located at the atoms. Coupled systems of linear equations determine these multipoles and can be rapidly solved by iteration during a MD simulation. The multipoles yield the reaction field forces and energies. Finally, we scrutinize the quality of our approach by comparisons with an analytical solution restricted to perfectly spherical systems and with results of a finite difference method.

  15. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method yields qualitatively and quantitatively superior trajectories, resulting in more accurate identification of Lagrangian coherent structures.
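    The property the method exploits can be checked symbolically: any velocity field defined as the curl of a vector potential is exactly divergence-free. The toy potential below stands in for the C2 B-spline potential constructed in the paper.

```python
# Symbolic check that the curl of a vector potential is divergence-free.
import sympy as sp

x, y, z = sp.symbols('x y z')
# Toy smooth vector potential (placeholder for a C2 B-spline potential)
A = sp.Matrix([sp.sin(y) * sp.cos(z), sp.sin(z) * sp.cos(x), sp.sin(x) * sp.cos(y)])

def curl(F):
    """Curl of a 3-component field F(x, y, z)."""
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

u = curl(A)  # velocity field obtained from the potential
div_u = sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y) + sp.diff(u[2], z))
print(div_u)  # 0: divergence-free by construction
```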

  16. Building pit dewatering: application of transient analytic elements.

    PubMed

    Zaadnoordijk, Willem J

    2006-01-01

    Analytic elements are well suited for the design of building pit dewatering. Wells and drains can be modeled accurately by analytic elements, both nearby to determine the pumping level and at some distance to verify the targeted drawdown at the building site and to estimate the consequences in the vicinity. The ability to shift locations of wells or drains easily makes the design process very flexible. The temporary pumping has transient effects, for which transient analytic elements may be used. This is illustrated using the free, open-source, object-oriented analytic element simulator Tim(SL) for the design of a building pit dewatering near a canal. Steady calculations are complemented with transient calculations. Finally, the bandwidths of the results are estimated using linear variance analysis.
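    The superposition principle underlying analytic element models can be sketched with steady-state well solutions, where drawdowns from individual wells are simply added. The Thiem-type expression and all numerical values below are assumed for illustration; this is not output of the Tim(SL) simulator mentioned above.

```python
# Superposition of steady-state drawdowns from two hypothetical dewatering wells.
import numpy as np

def thiem_drawdown(x, y, xw, yw, Q, T, R=1000.0):
    """Steady confined drawdown of a single well: s = Q / (2 pi T) * ln(R / r)."""
    r = np.hypot(x - xw, y - yw)
    return Q / (2.0 * np.pi * T) * np.log(R / max(r, 1e-6))

T = 500.0                                          # transmissivity, m^2/day (assumed)
wells = [(-10.0, 0.0, 300.0), (10.0, 0.0, 300.0)]  # (xw, yw, Q in m^3/day), hypothetical
x_obs, y_obs = 0.0, 25.0                           # observation point near the excavation
s_total = sum(thiem_drawdown(x_obs, y_obs, xw, yw, Q, T) for xw, yw, Q in wells)
print(round(s_total, 3))                           # total drawdown by superposition
```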

  17. A study of density effects in plasmas using analytical approximations for the self-consistent potential

    NASA Astrophysics Data System (ADS)

    Poirier, M.

    2015-06-01

    Density effects in ionized matter require particular attention since they modify energies, wavefunctions, and transition rates with respect to the isolated-ion situation. The approach chosen in this paper is based on the ion-sphere model, involving a Thomas-Fermi-like description for free electrons, the bound electrons being described by a full quantum mechanical formalism. This makes it possible to deal with plasmas out of local thermal equilibrium, assuming only a Maxwell distribution for the free electrons. For H-like ions, such a theory provides simple and rather accurate analytical approximations for the potential created by the free electrons. Emphasis is put on the plasma potential rather than on the electron density, since the energies and wavefunctions depend directly on this potential. Beyond the uniform electron gas model, temperature effects may be analyzed. In the case of H-like ions, this formalism provides analytical perturbative expressions for the energies, wavefunctions, and transition rates. Explicit expressions are given in the case of maximum orbital quantum number, and they compare satisfactorily with results from a direct integration of the radial Schrödinger equation. Some formulas for lower orbital quantum numbers are also proposed.

  18. Towards an accurate description of perovskite ferroelectrics: exchange and correlation effects

    DOE PAGES

    Yuk, Simuck F.; Pitike, Krishna Chaitanya; Nakhmanson, Serge M.; ...

    2017-03-03

    Using the van der Waals density functional with C09 exchange (vdW-DF-C09), which has been applied to describing a wide range of dispersion-bound systems, we explore the physical properties of prototypical ABO3 bulk ferroelectric oxides. Surprisingly, vdW-DF-C09 provides a superior description of experimental values for lattice constants, polarization and bulk moduli, exhibiting similar accuracy to the modified Perdew-Burke-Ernzerhof functional which was designed specifically for bulk solids (PBEsol). The relative performance of vdW-DF-C09 is strongly linked to the form of the exchange enhancement factor which, like PBEsol, tends to behave like the gradient expansion approximation for small reduced gradients. These results suggest the general-purpose nature of the class of vdW-DF functionals, with particular consequences for predicting material functionality across dense and sparse matter regimes.

  19. Towards an accurate description of perovskite ferroelectrics: exchange and correlation effects

    PubMed Central

    Yuk, Simuck F.; Pitike, Krishna Chaitanya; Nakhmanson, Serge M.; Eisenbach, Markus; Li, Ying Wai; Cooper, Valentino R.

    2017-01-01

    Using the van der Waals density functional with C09 exchange (vdW-DF-C09), which has been applied to describing a wide range of dispersion-bound systems, we explore the physical properties of prototypical ABO3 bulk ferroelectric oxides. Surprisingly, vdW-DF-C09 provides a superior description of experimental values for lattice constants, polarization and bulk moduli, exhibiting similar accuracy to the modified Perdew-Burke-Ernzerhof functional which was designed specifically for bulk solids (PBEsol). The relative performance of vdW-DF-C09 is strongly linked to the form of the exchange enhancement factor which, like PBEsol, tends to behave like the gradient expansion approximation for small reduced gradients. These results suggest the general-purpose nature of the class of vdW-DF functionals, with particular consequences for predicting material functionality across dense and sparse matter regimes. PMID:28256544

  20. Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity

    PubMed Central

    Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.

    2010-01-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183

  1. Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.

    PubMed

    Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L

    2010-02-01

    An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.
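    As a rough illustration of the kind of sensitivity estimate discussed above, the sketch below combines the bending stress of an end-loaded cantilever with a piezoresistive coefficient scaled by a dimensionless efficiency factor. The functional form is a simplification and the coefficient values are assumed; they are not the lookup-table values of the paper.

```python
# Simplified piezoresistive cantilever sensitivity estimate (assumed constants).
def surface_stress(F, L, w, t):
    """Maximum bending stress at the root of an end-loaded cantilever: 6 F L / (w t^2)."""
    return 6.0 * F * L / (w * t ** 2)

def delta_R_over_R(F, L, w, t, pi_l=70e-11, beta_star=0.7):
    """Fractional resistance change ~ beta_star * pi_l * stress, where pi_l (1/Pa) is a
    piezoresistive coefficient and beta_star a dimensionless efficiency factor."""
    return beta_star * pi_l * surface_stress(F, L, w, t)

# Hypothetical cantilever: 1 uN end load, 300 um long, 30 um wide, 2 um thick
print(delta_R_over_R(F=1e-6, L=300e-6, w=30e-6, t=2e-6))  # ~7e-3
```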

  2. Analytical description of the ternary melt and solution crystallization with a non-linear phase diagram

    NASA Astrophysics Data System (ADS)

    Toropova, L. V.; Alexandrov, D. V.

    2018-05-01

    The directional solidification of a ternary system with an extended phase transition region is theoretically studied. A mathematical model is developed to describe quasi-stationary solidification, and its analytical solution is constructed with allowance for a nonlinear liquidus line equation. We demonstrate that the phase diagram nonlinearity leads to substantial changes in the analytical solutions.

  3. Accurate Sloshing Modes Modeling: A New Analytical Solution and its Consequences on Control

    NASA Astrophysics Data System (ADS)

    Gonidou, Luc-Olivier; Desmariaux, Jean

    2014-06-01

    This study addresses the issue of sloshing-mode modeling for GNC analysis purposes. On European launchers, equivalent mechanical systems are commonly used for modeling sloshing effects on launcher dynamics. The representativeness of such a methodology is discussed here. First, an exact analytical formulation of the launcher dynamics including sloshing modes is proposed, and discrepancies with the equivalent mechanical system approach are emphasized. Then, preliminary comparative GNC analyses are performed using the different models of the dynamics in order to evaluate the impact of the aforementioned discrepancies from a GNC standpoint. Special attention is paid to system stability.

  4. Importance of accurate measurements in nutrition research: dietary flavonoids as a case study

    USDA-ARS?s Scientific Manuscript database

    Accurate measurements of the secondary metabolites in natural products and plant foods are critical to establishing diet/health relationships. There are as many as 50,000 secondary metabolites which may influence human health. Their structural and chemical diversity present a challenge to analytic...

  5. The Analytic Hierarchy Process and Participatory Decisionmaking

    Treesearch

    Daniel L. Schmoldt; Daniel L. Peterson; Robert L. Smith

    1995-01-01

    Managing natural resource lands requires social, as well as biophysical, considerations. Unfortunately, it is extremely difficult to accurately assess and quantify changing social preferences, and to aggregate conflicting opinions held by diverse social groups. The Analytic Hierarchy Process (AHP) provides a systematic, explicit, rigorous, and robust mechanism for...

  6. 40 CFR 89.412 - Raw gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Raw gaseous exhaust sampling and...-IGNITION ENGINES Exhaust Emission Test Procedures § 89.412 Raw gaseous exhaust sampling and analytical... must be incorporated in each system used for raw testing under this subpart. (1) [Reserved] (2) The...

  7. 40 CFR 89.412 - Raw gaseous exhaust sampling and analytical system description.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Raw gaseous exhaust sampling and...-IGNITION ENGINES Exhaust Emission Test Procedures § 89.412 Raw gaseous exhaust sampling and analytical... must be incorporated in each system used for raw testing under this subpart. (1) [Reserved] (2) The...

  8. A meta-analytic review of two modes of learning and the description-experience gap.

    PubMed

    Wulff, Dirk U; Mergenthaler-Canseco, Max; Hertwig, Ralph

    2018-02-01

    People can learn about the probabilistic consequences of their actions in two ways: One is by consulting descriptions of an action's consequences and probabilities (e.g., reading up on a medication's side effects). The other is by personally experiencing the probabilistic consequences of an action (e.g., beta testing software). In principle, people taking each route can reach analogous states of knowledge and consequently make analogous decisions. In the last dozen years, however, research has demonstrated systematic discrepancies between description- and experience-based choices. This description-experience gap has been attributed to factors including reliance on a small set of experience, the impact of recency, and different weighting of probability information in the two decision types. In this meta-analysis focusing on studies using the sampling paradigm of decisions from experience, we evaluated these and other determinants of the description-experience gap by reference to more than 70,000 choices made by more than 6,000 participants. We found, first, a robust description-experience gap but also a key moderator, namely, problem structure. Second, the largest determinant of the gap was reliance on small samples and the associated sampling error: free to terminate search, individuals explored too little to experience all possible outcomes. Third, the gap persisted when sampling error was basically eliminated, suggesting other determinants. Fourth, the occurrence of recency was contingent on decision makers' autonomy to terminate search, consistent with the notion of optional stopping. Finally, we found indications of different probability weighting in decisions from experience versus decisions from description when the problem structure involved a risky and a safe option. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Description and Recognition of the Concept of Social Capital in Higher Education System

    ERIC Educational Resources Information Center

    Tonkaboni, Forouzan; Yousefy, Alireza; Keshtiaray, Narges

    2013-01-01

    The current research is intended to describe and recognize the concept of social capital in higher education based on theoretical method in a descriptive-analytical approach. Description and Recognition of the data, gathered from theoretical and experimental studies, indicated that social capital is one of the most important indices for…

  10. Analytic and heuristic processing influences on adolescent reasoning and decision-making.

    PubMed

    Klaczynski, P A

    2001-01-01

    The normative/descriptive gap is the discrepancy between actual reasoning and traditional standards for reasoning. The relationship between age and the normative/descriptive gap was examined by presenting adolescents with a battery of reasoning and decision-making tasks. Middle adolescents (N = 76) performed closer to normative ideals than early adolescents (N = 66), although the normative/descriptive gap was large for both groups. Correlational analyses revealed that (1) normative responses correlated positively with each other, (2) nonnormative responses were positively interrelated, and (3) normative and nonnormative responses were largely independent. Factor analyses suggested that performance was based on two processing systems. The "analytic" system operates on "decontextualized" task representations and underlies conscious, computational reasoning. The "heuristic" system operates on "contextualized," content-laden representations and produces "cognitively cheap" responses that sometimes conflict with traditional norms. Analytic processing was more clearly linked to age and to intelligence than heuristic processing. Implications for cognitive development, the competence/performance issue, and rationality are discussed.

  11. Correction for isotopic interferences between analyte and internal standard in quantitative mass spectrometry by a nonlinear calibration function.

    PubMed

    Rule, Geoffrey S; Clark, Zlatuse D; Yue, Bingfang; Rockwood, Alan L

    2013-04-16

    Stable isotope-labeled internal standards are of great utility in providing accurate quantitation in mass spectrometry (MS). An implicit assumption has been that there is no "cross talk" between signals of the internal standard and the target analyte. In some cases, however, naturally occurring isotopes of the analyte do contribute to the signal of the internal standard. This phenomenon becomes more pronounced for isotopically rich compounds, such as those containing sulfur, chlorine, or bromine, higher molecular weight compounds, and those at high analyte/internal standard concentration ratio. This can create nonlinear calibration behavior that may bias quantitative results. Here, we propose the use of a nonlinear but more accurate fitting of data for these situations that incorporates one or two constants determined experimentally for each analyte/internal standard combination and an adjustable calibration parameter. This fitting provides more accurate quantitation in MS-based assays where contributions from analyte to stable labeled internal standard signal exist. It can also correct for the reverse situation where an analyte is present in the internal standard as an impurity. The practical utility of this approach is described, and by using experimental data, the approach is compared to alternative fits.
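    A minimal sketch of fitting a nonlinear calibration curve of the kind motivated above, where the analyte contributes to the internal-standard channel so the response ratio flattens at high analyte/IS ratios. The functional form, constants, and synthetic data are illustrative assumptions, not the specific calibration function proposed in the paper.

```python
# Fit a nonlinear calibration curve to synthetic peak-area ratios.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])  # nominal concentrations
ratio = 0.12 * conc / (1.0 + 0.01 * conc)                 # synthetic analyte/IS area ratios

def cal(x, a, b):
    """Response ratio = a*x / (1 + b*x); b captures analyte contribution to the IS signal."""
    return a * x / (1.0 + b * x)

popt, _ = curve_fit(cal, conc, ratio, p0=[0.1, 0.0])
print(popt)  # recovers a ~ 0.12 and b ~ 0.01 for this synthetic data set
```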

  12. Note: Model identification and analysis of bivalent analyte surface plasmon resonance data.

    PubMed

    Tiwari, Purushottam Babu; Üren, Aykut; He, Jin; Darici, Yesim; Wang, Xuewen

    2015-10-01

    Surface plasmon resonance (SPR) is a widely used, affinity based, label-free biophysical technique to investigate biomolecular interactions. The extraction of rate constants requires accurate identification of the particular binding model. The bivalent analyte model involves coupled non-linear differential equations. No clear procedure to identify the bivalent analyte mechanism has been established. In this report, we propose a unique signature for the bivalent analyte model. This signature can be used to distinguish the bivalent analyte model from other biphasic models. The proposed method is demonstrated using experimentally measured SPR sensorgrams.

  13. Biological Matrix Effects in Quantitative Tandem Mass Spectrometry-Based Analytical Methods: Advancing Biomonitoring

    PubMed Central

    Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd

    2015-01-01

    The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585

  14. Service line analytics in the new era.

    PubMed

    Spence, Jay; Seargeant, Dan

    2015-08-01

    To succeed under the value-based business model, hospitals and health systems require effective service line analytics that combine inpatient and outpatient data and that incorporate quality metrics for evaluating clinical operations. When developing a framework for collection, analysis, and dissemination of service line data, healthcare organizations should focus on five key aspects of effective service line analytics: Updated service line definitions. Ability to analyze and trend service line net patient revenues by payment source. Access to accurate service line cost information across multiple dimensions with drill-through capabilities. Ability to redesign key reports based on changing requirements. Clear assignment of accountability.

  15. Analytic description of microcylindrical cavity for surface plasmon polariton

    NASA Astrophysics Data System (ADS)

    Tekkozyan, Vahan; Babajanyan, Arsen; Nerkararyan, Khachatur

    2013-09-01

    We consider the formation of the surface plasmon polariton (SPP) mode in a microcylinder cavity. The developed theoretical model allows us to analytically calculate closed-form expressions for the mode field distributions, the resonant frequency, and the radiation and dissipative parts of the quality factor of the structure over a broad wavelength range. When the radius of the metallic cylinder is on the order of the SPP wavelength, the highest value of the Q-factor is achieved in the infrared region of the spectrum, where the absolute value of the real part of the dielectric permittivity of the metal is much larger than both the imaginary part of the dielectric permittivity of the metal and the dielectric permittivity of the surrounding medium. Also, the radiation losses decrease with increasing cylinder radius. The obtained results make it possible to find optimal conditions for efficient emission in a microcylinder cavity and can serve as practical guidelines for designing SPP microcavities for stimulated emission.

  16. Experimental and analytical characterization of triaxially braided textile composites

    NASA Technical Reports Server (NTRS)

    Masters, John E.; Fedro, Mark J.; Ifju, Peter G.

    1993-01-01

    There were two components, experimental and analytical, to this investigation of triaxially braided textile composite materials. The experimental portion of the study centered on measuring the materials' longitudinal and transverse tensile moduli, Poisson's ratio, and strengths. The identification of the damage mechanisms exhibited by these materials was also a prime objective of the experimental investigation. The analytical portion of the investigation utilized the Textile Composites Analysis (TECA) model to predict modulus and strength. The analytical and experimental results were compared to assess the effectiveness of the analysis. The figures contained in this paper reflect the presentation made at the conference. They may be divided into four sections: a definition of the material system tested; followed by a series of figures summarizing the experimental results (these figures contain results of a Moire interferometry study of the strain distribution in the material, examples and descriptions of the types of damage encountered in these materials, and a summary of the measured properties); a description of the TECA model follows the experimental results (this includes a series of predicted results and a comparison with measured values); and finally, a brief summary completes the paper.

  17. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    PubMed

    Mellas, John

    2016-05-01

    Acute kidney injury (AKI) is a common and serious condition encountered in hospitalized patients. The severity of kidney injury is defined by the RIFLE, AKIN, and KDIGO criteria, which attempt to establish the degree of renal impairment. The KDIGO guidelines state that the creatinine clearance should be measured whenever possible in AKI and that the serum creatinine concentration and creatinine clearance remain the best clinical indicators of renal function. Neither the RIFLE, AKIN, nor KDIGO criteria estimate actual creatinine clearance. Furthermore, there are no accepted methods for accurately estimating creatinine clearance (K) in AKI. The present study describes a unique method for estimating K in AKI using urine creatinine excretion over an established time interval (E), an estimate of creatinine production over the same time interval (P), and the estimated static glomerular filtration rate (sGFR) at time zero, utilizing the CKD-EPI formula. Using these variables, the estimated creatinine clearance is Ke = (E/P) × sGFR. The method was tested for validity using simulated patients where actual creatinine clearance (Ka) was compared to Ke in several patients, both male and female, and of various ages, body weights, and degrees of renal impairment. These measurements were made at several serum creatinine concentrations in an attempt to determine the accuracy of this method in the non-steady state. In addition, E/P and Ke were calculated in hospitalized patients with AKI seen in nephrology consultation by the author. In these patients the accuracy of the method was determined by looking at the following metrics: E/P>1, E/P<1, and E=P, in an attempt to predict progressive azotemia, recovering azotemia, or stabilization in the level of azotemia, respectively. In addition, it was determined whether Ke<10 ml/min agreed with Ka and whether patients with AKI on renal replacement therapy could safely terminate dialysis if Ke was greater than 5 ml/min. In the simulated patients there
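    A worked sketch of the arithmetic described above, Ke = (E/P) × sGFR, using purely hypothetical numbers.

```python
# Estimated creatinine clearance Ke = (E / P) * sGFR (hypothetical values).
def estimated_clearance(E_mg, P_mg, sgfr_ml_min):
    """E and P are taken over the same time interval; sGFR is evaluated at time zero."""
    return (E_mg / P_mg) * sgfr_ml_min

E = 600.0     # mg creatinine excreted in urine over the interval (hypothetical)
P = 1200.0    # mg creatinine estimated to be produced over the same interval (hypothetical)
sGFR = 60.0   # ml/min from the CKD-EPI formula at time zero (hypothetical)
print(estimated_clearance(E, P, sGFR))  # 30.0 ml/min, with E/P = 0.5 in this example
```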

  18. Building the analytical response in frequency domain of AC biased bolometers. Application to Planck/HFI

    NASA Astrophysics Data System (ADS)

    Sauvé, Alexandre; Montier, Ludovic

    2016-12-01

    Context: Bolometers are high-sensitivity detectors commonly used in infrared astronomy. The HFI instrument of the Planck satellite makes extensive use of them, but after the satellite launch two electronics-related problems proved critical. First, there was an unexpected excess response of the detectors at low optical excitation frequencies, for ν < 1 Hz; second, the analog-to-digital converter (ADC) component had been insufficiently characterized on the ground. These two problems require an exquisite knowledge of the detector response. However, bolometers have highly nonlinear characteristics, arising from their electrical and thermal coupling, which makes them very difficult to model. Goal: We present a method to build the analytical transfer function in the frequency domain that describes the voltage response of an alternating-current (AC) biased bolometer to optical excitation, based on the standard bolometer model. This model is built using the setup of the Planck/HFI instrument and offers the major improvement of being based on a physical model rather than the ad hoc model currently in use, which is based on direct-current (DC) bolometer theory. Method: The analytical transfer function expression is presented in matrix form. For this purpose, we build linearized versions of the bolometer electro-thermal equilibrium. A custom description of signals in the frequency domain is used to solve the problem with linear algebra. The model performance is validated using time-domain simulations. Results: The provided expression is suitable for calibration and data processing. It can also be used to provide constraints for fitting the optical transfer function using real data from the steady-state electronic response and the optical response. The accurate description of the electronic response can also be used to improve the ADC nonlinearity correction for quickly varying optical signals.

  19. Analytical Models of Legislative Texts for Muslim Scholars

    ERIC Educational Resources Information Center

    Alwan, Ammar Abdullah Naseh; Yusoff, Mohd Yakubzulkifli Bin Mohd; Al-Hami, Mohammad Said M.

    2011-01-01

    The significance of the analytical models in traditional Islamic studies is that they contribute in sharpening the intellectual capacity of the students of Islamic studies. Research literature in Islamic studies has descriptive side predominantly; the information is gathered and compiled and rarely analyzed properly. This weakness is because of…

  20. Post-analytical Issues in Hemostasis and Thrombosis Testing.

    PubMed

    Favaloro, Emmanuel J; Lippi, Giuseppe

    2017-01-01

    Analytical concerns within hemostasis and thrombosis testing are continuously decreasing. This is essentially attributable to modern instrumentation, improvements in test performance and reliability, as well as the application of appropriate internal quality control and external quality assurance measures. Pre-analytical issues are also being dealt with in some newer instrumentation, which is able to detect hemolysis, icterus, and lipemia, and, in some cases, other issues related to sample collection such as tube under-filling. Post-analytical issues are generally related to appropriate reporting and interpretation of test results, and these are the focus of the current overview, which provides a brief description of these events, as well as guidance for their prevention or minimization. In particular, we propose several strategies for improved post-analytical reporting of hemostasis assays and advise that this may provide the final opportunity to prevent serious clinical errors in diagnosis.

  1. Forest landscape description and inventories - a basis for landplanning and design

    Treesearch

    R. Burton Litton

    1968-01-01

    Describes six analytical factors and seven compositional types useful in recognition and description of scenic resources. Illustrates their application in two inventories made to aid managers and landscape architects in planning and design.

  2. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with a focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two bundles of hair collected from each subject after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 times larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
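    A minimal sketch of the uncertainty bookkeeping described above, assuming variance components combine in quadrature so that the pre-analytical part can be recovered by subtraction and the total CV sets the 95% uncertainty interval. The numerical values are illustrative, chosen within the ranges quoted in the abstract.

```python
# Separate a pre-analytical CV from total and analytical CVs (quadrature assumption).
import math

cv_analytical = 0.10   # analytical variation, assumed 10% (abstract quotes 7-13%)
cv_total = 0.45        # total variation from duplicate hair bundles, assumed 45%

cv_preanalytical = math.sqrt(cv_total ** 2 - cv_analytical ** 2)
print(round(cv_preanalytical, 3))  # ~0.439, i.e. the dominant component

# 95% uncertainty interval of a measured concentration, +/- 2 * total CV
conc = 1.0  # ng/mg, hypothetical
print(conc * (1.0 - 2.0 * cv_total), conc * (1.0 + 2.0 * cv_total))
```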

  3. Analytical and Experimental Study of Near-Threshold Interactions Between Crack Closure Mechanisms

    NASA Technical Reports Server (NTRS)

    Newman, John A.; Riddell, William T.; Piascik, Robert S.

    2003-01-01

    The results of an analytical closure model that considers contributions and interactions between plasticity-, roughness-, and oxide-induced crack closure mechanisms are presented and compared with experimental data. The analytical model is shown to provide a good description of the combined influences of crack roughness, oxide debris, and plasticity in the near-threshold regime. Furthermore, analytical results indicate that closure mechanisms interact in a non-linear manner such that the total amount of closure is not the sum of closure contributions for each mechanism.

  4. The use of cognitive task analysis to improve instructional descriptions of procedures.

    PubMed

    Clark, Richard E; Pugh, Carla M; Yates, Kenneth A; Inaba, Kenji; Green, Donald J; Sullivan, Maura E

    2012-03-01

    Surgical training relies heavily on the ability of expert surgeons to provide complete and accurate descriptions of a complex procedure. However, research from a variety of domains suggests that experts often omit critical information about the judgments, analysis, and decisions they make when solving a difficult problem or performing a complex task. In this study, we compared three methods for capturing surgeons' descriptions of how to perform the procedure for inserting a femoral artery shunt (unaided free-recall, unaided free-recall with simulation, and cognitive task analysis methods) to determine which method produced more accurate and complete results. Cognitive task analysis was approximately 70% more complete and accurate than free-recall and/or free-recall during a simulation of the procedure. Ten expert trauma surgeons at a major urban trauma center were interviewed separately and asked to describe how to perform an emergency shunt procedure. Four surgeons provided an unaided free-recall description of the shunt procedure, five surgeons provided an unaided free-recall description of the procedure using visual aids and surgical instruments (simulation), and one (chosen randomly) was interviewed using cognitive task analysis (CTA) methods. An 11th vascular surgeon approved the final CTA protocol. The CTA interview with only one expert surgeon resulted in significantly greater accuracy and completeness of the descriptions compared with the unaided free-recall interviews with multiple expert surgeons. Surgeons in the unaided group omitted nearly 70% of necessary decision steps. In the free-recall group, heavy use of simulation improved surgeons' completeness when describing the steps of the procedure. CTA significantly increases the completeness and accuracy of surgeons' instructional descriptions of surgical procedures. In addition, simulation during unaided free-recall interviews may improve the completeness of interview data. Copyright © 2012 Elsevier Inc

  5. Towards an analytical framework for tailoring supercontinuum generation.

    PubMed

    Castelló-Lurbe, David; Vermeulen, Nathalie; Silvestre, Enrique

    2016-11-14

    A fully analytical toolbox for supercontinuum generation relying on scenarios without pulse splitting is presented. Furthermore, starting from the new insights provided by this formalism about the physical nature of direct and cascaded dispersive wave emission, a unified description of this radiation in both normal and anomalous dispersion regimes is derived. Previously unidentified physics of broadband spectra reported in earlier works is successfully explained on this basis. Finally, a foundry-compatible few-millimeters-long silicon waveguide allowing octave-spanning supercontinuum generation pumped at telecom wavelengths in the normal dispersion regime is designed, hence showcasing the potential of this new analytical approach.

  6. Accurate atomistic first-principles calculations of electronic stopping

    DOE PAGES

    Schleife, André; Kanai, Yosuke; Correa, Alfredo A.

    2015-01-20

    In this paper, we show that atomistic first-principles calculations based on real-time propagation within time-dependent density functional theory are capable of accurately describing electronic stopping of light projectile atoms in metal hosts over a wide range of projectile velocities. In particular, we employ a plane-wave pseudopotential scheme to solve time-dependent Kohn-Sham equations for representative systems of H and He projectiles in crystalline aluminum. This approach to simulate nonadiabatic electron-ion interaction provides an accurate framework that allows for quantitative comparison with experiment without introducing ad hoc parameters such as effective charges, or assumptions about the dielectric function. Finally, our work clearly shows that this atomistic first-principles description of electronic stopping is able to disentangle contributions due to tightly bound semicore electrons and geometric aspects of the stopping geometry (channeling versus off-channeling) in a wide range of projectile velocities.

  7. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  8. Analytic family of post-merger template waveforms

    NASA Astrophysics Data System (ADS)

    Del Pozzo, Walter; Nagar, Alessandro

    2017-06-01

    Building on the analytical description of the post-merger (ringdown) waveform of coalescing, nonprecessing, spinning binary black holes introduced by Damour and Nagar [Phys. Rev. D 90, 024054 (2014), 10.1103/PhysRevD.90.024054], we propose an analytic, closed form, time-domain, representation of the ℓ=m =2 gravitational radiation mode emitted after merger. This expression is given as a function of the component masses and dimensionless spins (m1 ,2,χ1 ,2) of the two inspiraling objects, as well as of the mass MBH and (complex) frequency σ1 of the fundamental quasinormal mode of the remnant black hole. Our proposed template is obtained by fitting the post-merger waveform part of several publicly available numerical relativity simulations from the Simulating eXtreme Spacetimes (SXS) catalog and then suitably interpolating over (symmetric) mass ratio and spins. We show that this analytic expression accurately reproduces (˜0.01 rad ) the phasing of the post-merger data of other data sets not used in its construction. This is notably the case of the spin-aligned run SXS:BBH:0305, whose intrinsic parameters are consistent with the 90% credible intervals reported in the parameter-estimation followup of GW150914 by B.P. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016), 10.1103/PhysRevLett.116.241102]. Using SXS waveforms as "experimental" data, we further show that our template could be used on the actual GW150914 data to perform a new measure of the complex frequency of the fundamental quasinormal mode so as to exploit the complete (high signal-to-noise-ratio) post-merger waveform. We assess the usefulness of our proposed template by analyzing, in a realistic setting, SXS full inspiral-merger-ringdown waveforms and constructing posterior probability distribution functions for the central frequency damping time of the first overtone of the fundamental quasinormal mode as well as for the physical parameters of the systems. We also briefly explore the possibility
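    For illustration only, the sketch below evaluates a generic damped sinusoid, the closed-form time-domain building block on which such ringdown templates rest; the amplitude, frequency, damping time, and phase are placeholders, not the fits interpolated over mass ratio and spins in the paper.

```python
# Generic damped-sinusoid ringdown evaluated on a time grid (placeholder parameters).
import numpy as np

def ringdown(t, A=1.0, f=250.0, tau=4e-3, phi=0.0):
    """h(t) = A * exp(-t / tau) * cos(2 pi f t + phi), for t >= 0."""
    t = np.asarray(t, dtype=float)
    return A * np.exp(-t / tau) * np.cos(2.0 * np.pi * f * t + phi)

t = np.linspace(0.0, 0.02, 2001)  # 20 ms after the merger
h = ringdown(t)
print(h[:3])
```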

  9. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  10. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic model modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  11. Accurate expansion of cylindrical paraxial waves for its straightforward implementation in electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Naserpour, Mahin; Zapata-Rodríguez, Carlos J.

    2018-01-01

    The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations, and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally, we discuss how our procedure can be favorably implemented in scattering problems.

  12. An analytical treatment for three neutrino oscillations in the Earth

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; D'Olivo, J. C.; Supanitsky, A. D.

    2012-08-01

    A simple, and at the same time accurate, description of the Earth matter effects on the oscillations between three neutrino flavors is given in terms of the Magnus expansion for the evolution operator.
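    A minimal sketch of the leading-order Magnus approximation to an evolution operator, U ≈ exp(-i ∫ H(t) dt), applied to a toy 2×2 Hamiltonian. It only illustrates the expansion technique; it is not the three-flavor matter Hamiltonian treated in the paper.

```python
# First-order Magnus approximation for a toy time-dependent 2x2 Hamiltonian.
import numpy as np
from scipy.linalg import expm

def magnus_first_order(H_of_t, t_grid):
    """Integrate H(t) on a uniform grid (trapezoid rule), then exponentiate."""
    Hs = np.array([H_of_t(t) for t in t_grid])
    dt = t_grid[1] - t_grid[0]
    H_int = dt * (0.5 * Hs[0] + Hs[1:-1].sum(axis=0) + 0.5 * Hs[-1])
    return expm(-1j * H_int)

H = lambda t: np.array([[0.0, 0.1 * np.sin(t)], [0.1 * np.sin(t), 1.0]])  # toy Hamiltonian
U = magnus_first_order(H, np.linspace(0.0, 10.0, 1001))
print(np.round(U, 3))  # approximately unitary evolution operator
```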

  13. Langley Atmospheric Information Retrieval System (LAIRS): System description and user's guide

    NASA Technical Reports Server (NTRS)

    Boland, D. E., Jr.; Lee, T.

    1982-01-01

    This document presents the user's guide, system description, and mathematical specifications for the Langley Atmospheric Information Retrieval System (LAIRS). It also includes a description of an optimal procedure for operational use of LAIRS. The primary objective of the LAIRS Program is to make it possible to obtain accurate estimates of atmospheric pressure, density, temperature, and winds along Shuttle reentry trajectories for use in postflight data reduction.

  14. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among the various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency, and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
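    A generic, toy illustration of the point made above: supplying an analytical gradient (jac) to an iterative optimizer typically reduces the number of function evaluations and improves robustness compared with finite-difference approximations. The objective below is a standard test function, not the MoF reconstruction objective.

```python
# Compare BFGS with finite-difference vs analytical gradients on a toy objective.
import numpy as np
from scipy.optimize import minimize

def f(p):
    """Rosenbrock-like test objective (stand-in only)."""
    return (p[0] - 1.0) ** 2 + 10.0 * (p[1] - p[0] ** 2) ** 2

def grad_f(p):
    """Analytical gradient of f."""
    return np.array([
        2.0 * (p[0] - 1.0) - 40.0 * p[0] * (p[1] - p[0] ** 2),
        20.0 * (p[1] - p[0] ** 2),
    ])

x0 = np.array([-1.0, 2.0])
res_fd = minimize(f, x0, method="BFGS")              # gradient by finite differences
res_an = minimize(f, x0, method="BFGS", jac=grad_f)  # analytical gradient supplied
print(res_fd.nfev, res_an.nfev)  # far fewer function evaluations with the analytic gradient
```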

  15. Patients' perception of noise in the operating room--a descriptive and analytic cross-sectional study.

    PubMed

    Hasfeldt, Dorthe; Maindal, Helle Terkildsen; Toft, Palle; Birkelund, Regner

    2014-10-01

    Noise is a general stressor that affects the cardiovascular system, resulting in increased blood pressure and heart rate, both of which can be problematic for the patient preparing for anesthesia and surgery. The purpose of this study was to investigate the patient's perception of noise in the OR before anesthesia, the correlation between the actual noise levels and the patient's perception of noise, and if there are particular patient subgroups that are especially vulnerable to noise. This cross-sectional study was performed within a mixed descriptive and analytical design, including 120 patients (60 acute/60 elective) undergoing general anesthesia for orthopaedic surgery. Data collection consisted of registration of demographic variables and measurements of noise levels in the OR combined with a questionnaire. Results showed that 10% of the patients perceived noise levels in the OR as very high and experienced the noise as annoying, disruptive, and stressful. There was no correlation between the actual noise levels to which patients were exposed and their perception of noise. Acute patients perceived significantly more noise than elective patients (P<.01), although they were actually exposed to less noise. Of the acute patients, those undergoing major surgery perceived more noise than patients undergoing minor surgery (P<.01), although actually exposed to less noise. There was a significant correlation between patients' sense of coherence (SOC) and their perception of noise (P<.01). Most patients who perceived noise levels as very high had a SOC below 50 (scale: 13-91). Perianesthesia nurses need to maintain their focus on keeping noise levels in the OR as low as possible. When caring for acute patients, patients undergoing major surgery and patients with a low SOC perianesthesia nurses should be particularly aware, as these patients might be more vulnerable to noise. Copyright © 2014 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All

  16. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to accurately describe the actual spacecraft trajectory for a long time interval when the propulsive acceleration magnitude is sufficiently small.

  17. AN ANALYTIC MODEL OF DUSTY, STRATIFIED, SPHERICAL H ii REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodríguez-Ramírez, J. C.; Raga, A. C.; Lora, V.

    2016-12-20

    We study analytically the effect of radiation pressure (associated with photoionization processes and with dust absorption) on spherical, hydrostatic H ii regions. We consider two basic equations, one for the hydrostatic balance between the radiation-pressure components and the gas pressure, and another for the balance among the recombination rate, the dust absorption, and the ionizing photon rate. Based on appropriate mathematical approximations, we find a simple analytic solution for the density stratification of the nebula, which is defined by specifying the radius of the external boundary, the cross section of dust absorption, and the luminosity of the central star. We compare the analytic solution with numerical integrations of the model equations of Draine, and find a wide range of the physical parameters for which the analytic solution is accurate.

  18. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and to bridge the knowledge and application gaps. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into an analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we streamlined the normally time-consuming process of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflows.

  19. An accurate computational method for the diffusion regime verification

    NASA Astrophysics Data System (ADS)

    Zhokh, Alexey A.; Strizhak, Peter E.

    2018-04-01

    The diffusion regime (sub-diffusive, standard, or super-diffusive) is defined by the order of the derivative in the corresponding transport equation. We develop an accurate computational method for the direct estimation of the diffusion regime. The method is based on estimating the derivative order using the asymptotic analytic solutions of the diffusion equation with integer-order and time-fractional derivatives. The robustness and low computational cost of the proposed method are verified using experimental methane and methyl alcohol transport kinetics through a catalyst pellet.
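
    The abstract does not give the estimation formula, so the following is only a generic illustration of the underlying idea, not the authors' method: the diffusion (derivative) order can be read off from the asymptotic slope of transport data on log-log axes. All data and the exponent below are synthetic placeholders.

    ```python
    # Generic illustration: estimate a diffusion exponent alpha from the
    # asymptotic log-log slope of mean-squared displacement versus time.
    # MSD ~ t**alpha: alpha < 1 sub-diffusive, alpha = 1 standard,
    # alpha > 1 super-diffusive. Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.logspace(0, 3, 60)                 # synthetic time points
    alpha_true = 0.7                          # assumed sub-diffusive exponent
    msd = t**alpha_true * (1 + 0.05 * rng.standard_normal(t.size))

    late = t > 10.0                           # fit the asymptotic portion only
    slope, _ = np.polyfit(np.log(t[late]), np.log(msd[late]), 1)
    print(f"estimated diffusion order: {slope:.3f} (true {alpha_true})")
    ```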

  20. Micro-optics for microfluidic analytical applications.

    PubMed

    Yang, Hui; Gijs, Martin A M

    2018-02-19

    This critical review summarizes the developments in the integration of micro-optical elements with microfluidic platforms for facilitating detection and automation of bio-analytical applications. Micro-optical elements, made by a variety of microfabrication techniques, advantageously contribute to the performance of an analytical system, especially when the latter has microfluidic features. Indeed the easy integration of optical control and detection modules with microfluidic technology helps to bridge the gap between the macroscopic world and chip-based analysis, paving the way for automated and high-throughput applications. In our review, we start the discussion with an introduction of microfluidic systems and micro-optical components, as well as aspects of their integration. We continue with a detailed description of different microfluidic and micro-optics technologies and their applications, with an emphasis on the realization of optical waveguides and microlenses. The review continues with specific sections highlighting the advantages of integrated micro-optical components in microfluidic systems for tackling a variety of analytical problems, like cytometry, nucleic acid and protein detection, cell biology, and chemical analysis applications.

  1. Accurate monoenergetic electron parameters of laser wakefield in a bubble model

    NASA Astrophysics Data System (ADS)

    Raheli, A.; Rahmatallahpur, S. H.

    2012-11-01

    A reliable analytical expression for the potential of plasma waves with phase velocities near the speed of light is derived. The presented spheroid cavity model is more consistent than the previous spherical and ellipsoidal model and it explains the mono-energetic electron trajectory more accurately, especially at the relativistic region. As a result, the quasi-mono-energetic electrons output beam interacting with the laser plasma can be more appropriately described with this model.

  2. Parsimonious description for predicting high-dimensional dynamics

    PubMed Central

    Hirata, Yoshito; Takeuchi, Tomoya; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki

    2015-01-01

    When we observe a system, we often cannot observe all its variables and may have some of its limited measurements. Under such a circumstance, delay coordinates, vectors made of successive measurements, are useful to reconstruct the states of the whole system. Although the method of delay coordinates is theoretically supported for high-dimensional dynamical systems, practically there is a limitation because the calculation for higher-dimensional delay coordinates becomes more expensive. Here, we propose a parsimonious description of virtually infinite-dimensional delay coordinates by evaluating their distances with exponentially decaying weights. This description enables us to predict the future values of the measurements faster because we can reuse the calculated distances, and more accurately because the description naturally reduces the bias of the classical delay coordinates toward the stable directions. We demonstrate the proposed method with toy models of the atmosphere and real datasets related to renewable energy. PMID:26510518
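
    The exact weighting scheme is not spelled out in the abstract, so the sketch below only illustrates one plausible (assumed) form of the idea: a squared distance between "infinite" delay coordinates with exponentially decaying weights can be updated recursively, so previously computed distances are reused at the next time step.

    ```python
    # Assumed form (not necessarily the authors' exact weighting):
    # D2[t, s] = (x[t] - x[s])**2 + lam * D2[t-1, s-1], an exponentially
    # weighted delay-coordinate distance that reuses earlier distances.
    import numpy as np

    def weighted_delay_distances(x, lam=0.9):
        """Matrix of weighted squared delay-coordinate distances."""
        n = len(x)
        d2 = np.zeros((n, n))
        for t in range(n):
            for s in range(n):
                prev = d2[t - 1, s - 1] if t > 0 and s > 0 else 0.0
                d2[t, s] = (x[t] - x[s]) ** 2 + lam * prev
        return d2

    x = np.sin(0.3 * np.arange(200)) + 0.01 * np.random.randn(200)
    D2 = weighted_delay_distances(x)
    # Nearest neighbours of the last state (smallest entries in D2[-1, :-1])
    # can then be used to predict the next measurement, e.g. by averaging
    # their successors.
    ```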

  3. Analytical description of optical vortices generated by discretized vortex-producing lenses

    NASA Astrophysics Data System (ADS)

    Rumi, Gonzalo; Actis, Daniel; Amaya, Dafne; Gómez, Jorge A.; Rueda, Edgar; Lencina, Alberto

    2018-06-01

    In this article, a general analytical treatment (any topological charge—any number of discretization levels) for the diffraction of a Gaussian beam through a discretized vortex-producing lens is presented. In the proposal, the field is expressed as a sum of Kummer beams with different amplitudes and topological charges, which are focalized at different planes on the propagation axis. Likewise, it is demonstrated that characteristics of diffracted light can be modified by tuning the parameters of the setup. Vortex lines are analyzed to understand the internal mechanism of measurable topological charges that appear in specific planes, apparently violating topological charge conservation. Conservation of the topological charge is verified and theoretical predictions are supported by experiments.

  4. An analytic performance model of disk arrays and its application

    NASA Technical Reports Server (NTRS)

    Lee, Edward K.; Katz, Randy H.

    1991-01-01

    As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are preferable to other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
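
    The paper's own equations are not reproduced in the abstract; the sketch below is only a rough first-cut illustration of the quantities such a model approximates (per-disk utilization, fork-join response time, throughput), assuming Poisson arrivals, exponential per-disk service, and the harmonic-number mean of the maximum of k exponentials. All parameter values are hypothetical.

    ```python
    # Rough illustration only (not the paper's derivation): a striped array
    # request forks into `stripe_width` per-disk requests and completes when
    # all of them finish.
    def disk_array_estimate(arrival_rate, service_time, n_disks, stripe_width):
        """arrival_rate: array requests/s; service_time: mean per-disk service (s)."""
        per_disk_rate = arrival_rate * stripe_width / n_disks   # load on one disk
        rho = per_disk_rate * service_time                      # per-disk utilization
        if rho >= 1.0:
            raise ValueError("unstable: per-disk utilization >= 1")
        r_disk = service_time / (1.0 - rho)                     # M/M/1 response time
        h_k = sum(1.0 / i for i in range(1, stripe_width + 1))  # harmonic number
        r_array = h_k * r_disk    # crude fork-join estimate: mean max of k exponentials
        return rho, r_array, arrival_rate                       # utilization, R, throughput

    print(disk_array_estimate(arrival_rate=200, service_time=0.012,
                              n_disks=16, stripe_width=4))
    ```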

  5. Simple Parametric Model for Airfoil Shape Description

    NASA Astrophysics Data System (ADS)

    Ziemkiewicz, David

    2017-12-01

    We show a simple, analytic equation describing a class of two-dimensional shapes well suited for representation of aircraft airfoil profiles. Our goal was to create a description characterized by a small number of parameters with easily understandable meaning, providing a tool to alter the shape with optimization procedures as well as manual tweaks by the designer. The generated shapes are well suited for numerical analysis with 2D flow solving software such as XFOIL.

  6. Analytical performance of a bronchial genomic classifier.

    PubMed

    Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean

    2016-02-26

    The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow up procedures that are invasive and often unnecessary due to the high benign rate in such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015, BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity defined as input RNA mass; analytical specificity (i.e. potentially interfering substances) as tested on blood and genomic DNA; and assay performance studies including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer positive and cancer negative samples mixed with either blood (up to 10 % input mass) or genomic DNA (up to 10 % input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on > 6 unit scale). Analytical sensitivity, analytical specificity and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.

  7. Healthcare predictive analytics: An overview with a focus on Saudi Arabia.

    PubMed

    Alharthi, Hana

    2018-03-08

    Despite a newfound wealth of data and information, the healthcare sector is lacking in actionable knowledge. This is largely because healthcare data, though plentiful, tends to be inherently complex and fragmented. Health data analytics, with an emphasis on predictive analytics, is emerging as a transformative tool that can enable more proactive and preventative treatment options. This review considers the ways in which predictive analytics has been applied in the for-profit business sector to generate well-timed and accurate predictions of key outcomes, with a focus on key features that may be applicable to healthcare-specific applications. Published medical research presenting assessments of predictive analytics technology in medical applications are reviewed, with particular emphasis on how hospitals have integrated predictive analytics into their day-to-day healthcare services to improve quality of care. This review also highlights the numerous challenges of implementing predictive analytics in healthcare settings and concludes with a discussion of current efforts to implement healthcare data analytics in the developing country, Saudi Arabia. Copyright © 2018 The Author. Published by Elsevier Ltd. All rights reserved.

  8. PAUSE: Predictive Analytics Using SPARQL-Endpoints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Ainsworth, Keela; Bond, Nathaniel

    2014-07-11

    This invention relates to the medical industry and more specifically to methods of predicting risks. With the impetus towards personalized and evidence-based medicine, the need for a framework to analyze/interpret quantitative measurements (blood work, toxicology, etc.) with qualitative descriptions (specialist reports after reading images, bio-medical knowledgebase, etc.) to predict diagnostic risks is fast emerging. We describe a software solution that leverages hardware for scalable in-memory analytics and applies next-generation semantic query tools on medical data.

  9. Degradation Mechanisms in Solid-Oxide Fuel and Electrolyzer Cells: Analytical Description of Nickel Agglomeration in a Ni/YSZ Electrode

    NASA Astrophysics Data System (ADS)

    Kröll, L.; de Haart, L. G. J.; Vinke, I.; Eichel, R.-A.

    2017-04-01

    The microstructural evolution of a porous electrode consisting of a metal-ceramic matrix of nickel and yttria-stabilized zirconia (YSZ) is one of the main degradation mechanisms in a solid-oxide cell (SOC), in either fuel cell or electrolyzer mode. In that respect, the agglomeration of nickel particles in an SOC electrode leads to a decrease in the electronic conductivity as well as in the active catalytic area for the oxidation-reduction reaction of the fuel-water steam. An analytical model of the agglomeration behavior of a Ni/YSZ electrode is proposed that allows for a quantitative description of the nickel agglomeration. The accuracy of the model is validated in terms of a comparison with experimental degradation measurements. The model is based on contact probabilities of nickel clusters in a porous network of nickel and YSZ, derived from an algorithm of the agglomeration process. The iterative algorithm is converted into an analytical function, which involves structural parameters of the electrode, such as the porosity and the nickel content. Furthermore, to describe the agglomeration mechanism, the influence of the steam content and the flux rate is taken into account via reactions on the nickel surface. In the next step, the developed agglomeration model is combined with the mechanism of Ostwald ripening. The calculated grain-size growth is compared to measurements at different temperatures and under low flux rates and low steam content, as well as under high flux rates and high steam content. The results confirm the necessity of connecting the two mechanisms and clarify the circumstances in which the single processes occur and how they contribute to the total agglomeration of the particles in the electrode.

  10. Mixed Initiative Visual Analytics Using Task-Driven Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Cramer, Nicholas O.; Israel, David

    2015-12-07

    Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.

  11. Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO

    NASA Technical Reports Server (NTRS)

    Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.

    2016-01-01

    A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
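
    The derivative-verification step described above is a standard check. The sketch below shows that kind of check in generic form, comparing an analytic derivative against a central finite difference; the function used is a simple stand-in, not the CEA thermodynamic analysis or the OpenMDAO adjoint machinery.

    ```python
    # Generic validation of an analytic derivative against a central finite
    # difference. The test function is a stand-in, not the CEA analysis.
    import numpy as np

    def f(x):
        return np.sin(x) * np.exp(0.5 * x)

    def dfdx_analytic(x):
        return np.exp(0.5 * x) * (np.cos(x) + 0.5 * np.sin(x))

    def dfdx_fd(x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2.0 * h)

    x = np.linspace(0.0, 2.0, 5)
    rel_err = np.abs(dfdx_analytic(x) - dfdx_fd(x)) / np.abs(dfdx_analytic(x))
    print(rel_err)   # small relative errors confirm the analytic expression
    ```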

  12. High-Resolution Metabolomics Assessment of Military Personnel: Evaluating Analytical Strategies for Chemical Detection.

    PubMed

    Liu, Ken H; Walker, Douglas I; Uppal, Karan; Tran, ViLinh; Rohrbeck, Patricia; Mallon, Timothy M; Jones, Dean P

    2016-08-01

    The aim of this study was to maximize detection of serum metabolites with high-resolution metabolomics (HRM). Department of Defense Serum Repository (DoDSR) samples were analyzed using ultrahigh resolution mass spectrometry with three complementary chromatographic phases and four ionization modes. Chemical coverage was evaluated by number of ions detected and accurate mass matches to a human metabolomics database. Individual HRM platforms provided accurate mass matches for up to 58% of the KEGG metabolite database. Combining two analytical methods increased matches to 72% and included metabolites in most major human metabolic pathways and chemical classes. Detection and feature quality varied by analytical configuration. Dual chromatography HRM with positive and negative electrospray ionization provides an effective generalized method for metabolic assessment of military personnel.
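
    As a small illustration of the accurate-mass matching step described above, the sketch below flags detected m/z features that fall within a ppm tolerance of database metabolite masses. The masses, features, and the 10 ppm window are placeholders for illustration, not the study's DoDSR data or KEGG settings.

    ```python
    # Illustrative accurate-mass matching against a tiny metabolite table.
    # Masses, detected features, and the ppm tolerance are placeholders.
    database = {
        "glucose [M+H]+": 181.0707,
        "caffeine [M+H]+": 195.0877,
        "tryptophan [M+H]+": 205.0972,
    }
    features = [181.0712, 250.1000, 195.0880]   # hypothetical detected m/z values
    tol_ppm = 10.0

    for mz in features:
        hits = [name for name, ref in database.items()
                if abs(mz - ref) / ref * 1e6 <= tol_ppm]
        print(mz, "->", hits or "no match")
    ```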

  13. High-resolution metabolomics assessment of military personnel: Evaluating analytical strategies for chemical detection

    PubMed Central

    Liu, Ken H.; Walker, Douglas I.; Uppal, Karan; Tran, ViLinh; Rohrbeck, Patricia; Mallon, Timothy M.; Jones, Dean P.

    2016-01-01

    Objective To maximize detection of serum metabolites with high-resolution metabolomics (HRM). Methods Department of Defense Serum Repository (DoDSR) samples were analyzed using ultra-high resolution mass spectrometry with three complementary chromatographic phases and four ionization modes. Chemical coverage was evaluated by number of ions detected and accurate mass matches to a human metabolomics database. Results Individual HRM platforms provided accurate mass matches for up to 58% of the KEGG metabolite database. Combining two analytical methods increased matches to 72%, and included metabolites in most major human metabolic pathways and chemical classes. Detection and feature quality varied by analytical configuration. Conclusions Dual chromatography HRM with positive and negative electrospray ionization provides an effective generalized method for metabolic assessment of military personnel. PMID:27501105

  14. Descriptive studies: what they can and cannot do.

    PubMed

    Grimes, David A; Schulz, Kenneth F

    2002-01-12

    Descriptive studies often represent the first scientific toe in the water in new areas of inquiry. A fundamental element of descriptive reporting is a clear, specific, and measurable definition of the disease or condition in question. Like newspapers, good descriptive reporting answers the five basic W questions: who, what, why, when, where, and a sixth: so what? Case reports, case-series reports, cross-sectional studies, and surveillance studies deal with individuals, whereas ecological correlational studies examine populations. The case report is the least-publishable unit in medical literature. Case-series reports aggregate individual cases in one publication. Clustering of unusual cases in a short period often heralds a new epidemic, as happened with AIDS. Cross-sectional (prevalence) studies describe the health of populations. Surveillance can be thought of as watchfulness over a community; feedback to those who need to know is an integral component of surveillance. Ecological correlational studies look for associations between exposures and outcomes in populations (e.g., per capita cigarette sales and rates of coronary artery disease) rather than in individuals. Three important uses of descriptive studies include trend analysis, health-care planning, and hypothesis generation. A frequent error in reports of descriptive studies is overstepping the data: studies without a comparison group allow no inferences to be drawn about associations, causal or otherwise. Hypotheses about causation from descriptive studies are often tested in rigorous analytical studies.

  15. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

    CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time, and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
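
    To make the frequency-domain deconvolution idea concrete, the sketch below recovers an IRF from a toy tissue curve and arterial input function using FFT division with a simple spectral filter. The Gaussian low-pass shown is only a stand-in; the analytical filters of AFF/ASSF described above are different and are not reproduced here.

    ```python
    # Sketch of FDD-style IRF estimation on toy data; the spectral filter is
    # a stand-in, not the AFF/ASSF filters of the paper.
    import numpy as np

    def deconvolve_fd(tissue, aif, dt, cutoff=0.2):
        n = len(tissue)
        freqs = np.fft.rfftfreq(n, d=dt)
        T, A = np.fft.rfft(tissue), np.fft.rfft(aif)
        filt = np.exp(-(freqs / cutoff) ** 2)          # stand-in spectral filter
        irf_hat = filt * T / (A * dt + 1e-12)          # regularized division
        return np.fft.irfft(irf_hat, n)

    dt = 1.0                                           # seconds per sample
    t = np.arange(0, 60, dt)
    aif = np.exp(-((t - 15) / 4.0) ** 2)               # toy arterial input function
    irf_true = np.exp(-t / 8.0)                        # toy residue function
    tissue = np.convolve(aif, irf_true)[:len(t)] * dt  # forward convolution model
    irf_est = deconvolve_fd(tissue, aif, dt)
    cbf_estimate = irf_est.max()                       # flow ~ peak of the IRF
    ```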

  16. Analytic barrage attack model. Final report, January 1986-January 1989

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.

    An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.

  17. On the use of Lagrangian variables in descriptions of unsteady boundary-layer separation

    NASA Technical Reports Server (NTRS)

    Cowley, Stephen J.; Vandommelen, Leon L.; Lam, Shui T.

    1990-01-01

    The Lagrangian description of unsteady boundary layer separation is reviewed from both analytical and numerical perspectives. It is explained in simple terms how particle distortion gives rise to unsteady separation, and why a theory centered on Lagrangian coordinates provides the clearest description of this phenomenon. Some of the more recent results for unsteady three dimensional compressible separation are included. The different forms of separation that can arise from symmetries are emphasized. A possible description of separation is also included when the detaching vorticity layer exits the classical boundary layer region, but still remains much closer to the surface than a typical body-lengthscale.

  18. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    PubMed Central

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-01-01

    Iterative reconstruction algorithms are routinely used for clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of TOF scanners, with less sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (Direct Image Reconstruction for TOF) providing a convenient TOF data-partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias vs. variance performance to iterative TOF reconstruction with a matched resolution model. PMID:27032968

  19. Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework

    NASA Astrophysics Data System (ADS)

    Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.

    2016-05-01

    Iterative reconstruction algorithms are routinely used for clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with less sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we have developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data-partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.

  20. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
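
    The scaled root-mean-square difference metric mentioned above is not defined in the abstract; the sketch below shows one plausible form (RMS difference normalized by the mean of the Monte Carlo reference), applied to toy scatter maps. The exact normalization used in the study may differ.

    ```python
    # One plausible scaled RMS difference metric for comparing an analytical
    # scatter estimate against a Monte Carlo reference; toy data only.
    import numpy as np

    def scaled_rmsd(analytical, monte_carlo):
        analytical = np.asarray(analytical, dtype=float)
        monte_carlo = np.asarray(monte_carlo, dtype=float)
        rms_diff = np.sqrt(np.mean((analytical - monte_carlo) ** 2))
        return rms_diff / monte_carlo.mean()    # scale by mean reference signal

    mc = np.random.default_rng(1).normal(100.0, 5.0, size=(256, 256))   # toy MC map
    an = mc + np.random.default_rng(2).normal(0.0, 2.0, size=mc.shape)  # toy estimate
    print(f"scaled RMSD: {scaled_rmsd(an, mc):.3%}")
    ```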

  1. Funnel metadynamics as accurate binding free-energy method

    PubMed Central

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target are of great help in speeding drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  2. Approximate analytical description of the elastic strain field due to an inclusion in a continuous medium with cubic anisotropy

    NASA Astrophysics Data System (ADS)

    Nenashev, A. V.; Koshkarev, A. A.; Dvurechenskii, A. V.

    2018-03-01

    We suggest an approach to the analytical calculation of the strain distribution due to an inclusion in elastically anisotropic media for the case of cubic anisotropy. The idea consists in the approximate reduction of the anisotropic problem to a (simpler) isotropic problem. This gives, for typical semiconductors, an improvement in accuracy by an order of magnitude, compared to the isotropic approximation. Our method allows using, in the case of elastically anisotropic media, analytical solutions obtained for isotropic media only, such as analytical formulas for the strain due to polyhedral inclusions. The present work substantially extends the applicability of analytical results, making them more suitable for describing real systems, such as epitaxial quantum dots.

  3. A new algebraic turbulence model for accurate description of airfoil flows

    NASA Astrophysics Data System (ADS)

    Xiao, Meng-Juan; She, Zhen-Su

    2017-11-01

    We report a new algebraic turbulence model (SED-SL) based on the SED theory, a symmetry-based approach to quantifying wall turbulence. The model specifies a multi-layer profile of a stress length (SL) function in both the streamwise and wall-normal directions, which thus defines the eddy viscosity in the RANS equation (e.g. a zero-equation model). After a successful simulation of flat plate flow (APS meeting, 2016), we report here further applications of the model to the flow around airfoils, with significant improvement of the prediction accuracy of the lift (CL) and drag (CD) coefficients compared to other popular models (e.g. BL, SA, etc.). Two airfoils, namely the RAE2822 and the NACA0012, are computed for over 50 cases. The results are compared to experimental data from the AGARD report, which shows deviations of CL bounded within 2%, and CD within 2 counts (10⁻⁴) for RAE2822 and 6 counts for NACA0012, respectively (under a systematic adjustment of the flow conditions). In all these calculations, only one parameter (proportional to the Kármán constant) shows slight variation with Mach number. The most remarkable outcome is, for the first time, the accurate prediction of the drag coefficient. The other interesting outcome is the physical interpretation of the multi-layer parameters: they specify the corresponding multi-layer structure of the turbulent boundary layer; when used together with simulation data, the SED-SL enables one to extract physical information from empirical data, and to understand the variation of the turbulent boundary layer.

  4. Analytic and Heuristic Processing Influences on Adolescent Reasoning and Decision-Making.

    ERIC Educational Resources Information Center

    Klaczynski, Paul A.

    2001-01-01

    Examined the relationship between age and the normative/descriptive gap--the discrepancy between actual reasoning and traditional standards for reasoning. Found that middle adolescents performed closer to normative ideals than early adolescents. Factor analyses suggested that performance was based on two processing systems, analytic and heuristic…

  5. Analytical Chemistry: A Literary Approach

    NASA Astrophysics Data System (ADS)

    Lucy, Charles A.

    2000-04-01

    The benefits of incorporating real-world examples of chemistry into lectures and lessons are reflected by the recent inclusion of the Teaching with Problems and Case Studies column in this Journal. However, these examples lie outside the experience of many students, and so much of the impact of "real-world" examples is lost. This paper provides an anthology of references to analytical chemistry techniques from history, popular fiction, and film. Such references are amusing to both instructor and student. Further, the fictional descriptions can serve as a focal point for discussions of a technique's true capabilities and limitations.

  6. The analytical validation of the Oncotype DX Recurrence Score assay

    PubMed Central

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940

  7. The analytical validation of the Oncotype DX Recurrence Score assay.

    PubMed

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX ® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score ® result (scale: 0-100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time.

  8. A survey on platforms for big data analytics.

    PubMed

    Singh, Dilpreet; Reddy, Chandan K

    The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platforms in the context of big data analytics, specific implementation-level details of the widely used k-means clustering algorithm on various platforms are also described in the form of pseudocode.
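
    Since the survey walks through platform-specific k-means pseudocode, a minimal single-machine reference implementation (Lloyd's algorithm) may help fix the algorithm itself; it is a generic sketch, not taken from the survey, and the distributed variants the survey discusses parallelize the same two steps.

    ```python
    # Minimal single-machine k-means (Lloyd's algorithm). Distributed platforms
    # parallelize the same two steps: assign points to the nearest centroid,
    # then recompute each centroid as the mean of its assigned points.
    import numpy as np

    def kmeans(X, k, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Assignment step: nearest centroid for every point.
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Update step: mean of the points assigned to each centroid.
            new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                      else centroids[j] for j in range(k)])
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, labels

    X = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(100, 2))
                   for m in (0.0, 3.0, 6.0)])
    centroids, labels = kmeans(X, k=3)
    ```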

  9. FAST COGNITIVE AND TASK ORIENTED, ITERATIVE DATA DISPLAY (FACTOID)

    DTIC Science & Technology

    2017-06-01

    approaches. As a result, the following assumptions guided our efforts in developing modeling and descriptive metrics for evaluation purposes... Application Evaluation. Our analytic workflow for evaluation is to first provide descriptive statistics about applications across metrics (performance... distributions for evaluation purposes because the goal of evaluation is accurate description, not inference (e.g., prediction). Outliers depicted

  10. Analytical procedures for water-soluble vitamins in foods and dietary supplements: a review.

    PubMed

    Blake, Christopher J

    2007-09-01

    Water-soluble vitamins include the B-group vitamins and vitamin C. In order to correctly determine water-soluble vitamin content in fortified foods for compliance monitoring as well as to establish accurate data banks, an accurate and precise analytical method is a prerequisite. For many years microbiological assays have been used for analysis of B vitamins. However, they are no longer considered to be the gold standard in vitamin analysis, as many studies have revealed their deficiencies. This review describes the current status of analytical methods, including microbiological assays and spectrophotometric, biosensor and chromatographic techniques. In particular, it describes the current status of the official methods and highlights some new developments in chromatographic procedures and detection methods. An overview is made of multivitamin extractions and analyses for foods and supplements.

  11. Accurate expressions for solar cell fill factors including series and shunt resistances

    NASA Astrophysics Data System (ADS)

    Green, Martin A.

    2016-02-01

    Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties showing, for ideal cells, it could be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors usually have been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived in technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
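
    For context, the sketch below shows only the baseline case referenced above: the ideal (resistance-free) cell, whose fill factor can be written with the Lambert W function, together with Green's classic empirical approximation for comparison. The paper's new expressions for cells with series and shunt resistance are not reproduced here, and the Voc and thermal-voltage values are illustrative.

    ```python
    # Ideal-cell fill factor via the Lambert W function, plus the classic
    # empirical approximation FF0 = (voc - ln(voc + 0.72)) / (voc + 1).
    # Baseline case only; series/shunt-resistance corrections are not included.
    import numpy as np
    from scipy.special import lambertw

    def fill_factor_ideal(voc_norm):
        """voc_norm = Voc / (n*k*T/q), the normalized open-circuit voltage."""
        v_mp = np.real(lambertw(np.exp(voc_norm + 1.0))) - 1.0   # max-power voltage
        i_mp = 1.0 - (np.exp(v_mp) - 1.0) / (np.exp(voc_norm) - 1.0)
        return v_mp * i_mp / voc_norm

    def fill_factor_empirical(voc_norm):
        return (voc_norm - np.log(voc_norm + 0.72)) / (voc_norm + 1.0)

    voc = 0.70 / 0.02585     # e.g. Voc = 0.70 V with n*k*T/q ~ 25.85 mV at 300 K
    print(fill_factor_ideal(voc), fill_factor_empirical(voc))   # both ~0.84-0.85
    ```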

  12. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding of physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended for the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear with the numerical consequence that scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and pathological subject. We compare our results with experimental simulations and discuss the sensitivity to parameters of our model.

  13. Unified description of astrophysical properties of neutron stars independent of the equation of state

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2015-12-01

    In recent years, a lot of work was done that has revealed some very interesting properties of neutron stars. One can relate the first few multipole moments of a neutron star, or quantities that can be derived from them, with relations that are independent of the equation of state (EoS). This is a very significant result that has great implications for the description of neutron stars and in particular for the description of the spacetime around them. Additionally, it was recently shown that there is a four-parameter analytic spacetime, known as the two-soliton spacetime, which can accurately capture the properties of the geometry around neutron stars. This allows for the possibility of describing in a unified formalism the astrophysically relevant properties of the spacetime around a neutron star independently of the particulars of the EoS for the matter of the star. More precisely, the description of these astrophysical properties is done using an EoS omniscient spacetime that can describe the exterior of any neutron star. In the present work, we investigate properties such as the location of the innermost stable circular orbit RISCO (or the surface of the star when the latter overcomes the former), the various frequencies of perturbed circular equatorial geodesics, the efficiency of an accretion disc, its temperature distribution, and other properties associated with the emitted radiation from the disc, in a way that holds for all possible choices of a realistic EoS for the neutron star. Furthermore, we provide proof of principle that if one were to measure the right combinations of pairs of these properties, with the additional knowledge of the mass of the neutron star, one could determine the EoS of the star.
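
    As a simple point of reference for the ISCO property mentioned above, the sketch below evaluates the classic closed-form ISCO radius for the Kerr metric (Bardeen, Press & Teukolsky 1972). This is not the two-soliton spacetime of the paper, which carries additional multipole moments that shift R_ISCO for realistic neutron stars; it only illustrates the kind of quantity being computed.

    ```python
    # Kerr ISCO radius in units of the mass M (Bardeen, Press & Teukolsky 1972).
    # The two-soliton spacetime discussed above generalizes this with higher
    # multipole moments, which are not included here.
    def kerr_isco_radius(a, prograde=True):
        """a = J/M^2 is the dimensionless spin, |a| <= 1."""
        z1 = 1.0 + (1.0 - a**2) ** (1.0 / 3.0) * ((1.0 + a) ** (1.0 / 3.0)
                                                  + (1.0 - a) ** (1.0 / 3.0))
        z2 = (3.0 * a**2 + z1**2) ** 0.5
        sign = -1.0 if prograde else 1.0
        return 3.0 + z2 + sign * ((3.0 - z1) * (3.0 + z1 + 2.0 * z2)) ** 0.5

    print(kerr_isco_radius(0.0))   # 6.0 (Schwarzschild limit)
    print(kerr_isco_radius(0.3))   # ~4.98 for a moderately spinning object
    ```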

  14. Pulsed plane wave analytic solutions for generic shapes and the validation of Maxwell's equations solvers

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Vastano, John A.; Lomax, Harvard

    1992-01-01

    Generic shapes are subjected to pulsed plane waves of arbitrary shape. The resulting scattered electromagnetic fields are determined analytically. These fields are then computed efficiently at field locations for which numerically determined EM fields are required. Of particular interest are the pulsed waveform shapes typically utilized by radar systems. The results can be used to validate the accuracy of finite difference time domain Maxwell's equations solvers. A two-dimensional solver which is second- and fourth-order accurate in space and fourth-order accurate in time is examined. Dielectric media properties are modeled by a ramping technique which simplifies the associated gridding of body shapes. The attributes of the ramping technique are evaluated by comparison with the analytic solutions.

  15. Self-descriptions on LinkedIn: Recruitment or friendship identity?

    PubMed

    Garcia, Danilo; Cloninger, Kevin M; Granjard, Alexandre; Molander-Söderholm, Kristian; Amato, Clara; Sikström, Sverker

    2018-04-26

    We used quantitative semantics to find clusters of words in LinkedIn users' self-descriptions to an employer or a friend. Some of these clusters discriminated between worker and friend conditions (e.g., flexible vs. caring) and between LinkedIn users with high and low education (e.g., analytical vs. messy). © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  16. The slow-scale linear noise approximation: an accurate, reduced stochastic description of biochemical networks under timescale separation conditions

    PubMed Central

    2012-01-01

    Background: It is well known that the deterministic dynamics of biochemical reaction networks can be more easily studied if timescale separation conditions are invoked (the quasi-steady-state assumption). In this case the deterministic dynamics of a large network of elementary reactions are well described by the dynamics of a smaller network of effective reactions. Each of the latter represents a group of elementary reactions in the large network and has associated with it an effective macroscopic rate law. A popular method to achieve model reduction in the presence of intrinsic noise consists of using the effective macroscopic rate laws to heuristically deduce effective probabilities for the effective reactions which then enables simulation via the stochastic simulation algorithm (SSA). The validity of this heuristic SSA method is a priori doubtful because the reaction probabilities for the SSA have only been rigorously derived from microscopic physics arguments for elementary reactions. Results: We here obtain, by rigorous means and in closed form, a reduced linear Langevin equation description of the stochastic dynamics of monostable biochemical networks in conditions characterized by small intrinsic noise and timescale separation. The slow-scale linear noise approximation (ssLNA), as the new method is called, is used to calculate the intrinsic noise statistics of enzyme and gene networks. The results agree very well with SSA simulations of the non-reduced network of elementary reactions. In contrast, the conventional heuristic SSA is shown to overestimate the size of noise for Michaelis-Menten kinetics, considerably underestimate the size of noise for Hill-type kinetics, and in some cases even miss the prediction of noise-induced oscillations. Conclusions: A new general method, the ssLNA, is derived and shown to correctly describe the statistics of intrinsic noise about the macroscopic concentrations under timescale separation conditions. The ssLNA provides a

  17. Measuring bio-oil upgrade intermediates and corrosive species with polarity-matched analytical approaches

    DOE PAGES

    Connatser, Raynella M.; Lewis, Sr., Samuel Arthur; Keiser, James R.; ...

    2014-10-03

    Integrating biofuels with conventional petroleum products requires improvements in processing to increase blendability with existing fuels. This work demonstrates analysis techniques for more hydrophilic bio-oil liquids that give improved quantitative and qualitative description of the total acid content and organic acid profiles. To protect infrastructure from damage and reduce the cost associated with upgrading, accurate determination of acid content and representative chemical compound analysis are central imperatives to assessing both the corrosivity and the progress toward removing oxygen and acidity in processed biomass liquids. Established techniques form an ample basis for bio-liquids evaluation. However, early in the upgrading process, the unique physical phases and varied hydrophilicity of many pyrolysis liquids can render analytical methods originally designed for use in petroleum-derived oils inadequate. In this work, the water solubility of the organic acids present in bio-oils is exploited in a novel extraction and titration technique followed by analysis on the water-based capillary electrophoresis (CE) platform. The modification of ASTM D664, the standard for Total Acid Number (TAN), to include aqueous carrier solvents improves the utility of that approach for quantifying acid content in hydrophilic bio-oils. Termed AMTAN (modified Total Acid Number), this technique offers 1.2% relative standard deviation and dynamic range comparable to the conventional ASTM method. Furthermore, the results of corrosion product evaluations using several different sources of real bio-oil are discussed in the context of the unique AMTAN and CE analytical approaches developed to facilitate those measurements.

  18. Description of a Generalized Analytical Model for the Micro-dosimeter Response

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Stewart-Sloan, Charlotte R.; Xapsos, Michael A.; Shinn, Judy L.; Wilson, John W.; Hunter, Abigail

    2007-01-01

    An analytical prediction capability for space radiation in Low Earth Orbit (LEO), correlated with the Space Transportation System (STS) Shuttle Tissue Equivalent Proportional Counter (TEPC) measurements, is presented. The model takes into consideration the energy loss straggling and chord length distribution of the TEPC detector, and is capable of predicting energy deposition fluctuations in a micro-volume by incoming ions through both direct and indirect ionic events. The charged particle transport calculations correlated with STS 56, 51, 110 and 114 flights are accomplished by utilizing the most recent version (2005) of the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which has been extensively validated with laboratory beam measurements and available space flight data. The agreement between the TEPC model prediction (response function) and the TEPC measured differential and integral spectra in the lineal energy (y) domain is promising.

  19. The Use and Abuse of Limits of Detection in Environmental Analytical Chemistry

    PubMed Central

    Brown, Richard J. C.

    2008-01-01

    The limit of detection (LoD) serves as an important method performance measure that is useful for the comparison of measurement techniques and the assessment of likely signal to noise performance, especially in environmental analytical chemistry. However, the LoD is only truly related to the precision characteristics of the analytical instrument employed for the analysis and the content of analyte in the blank sample. This article discusses how other criteria, such as sampling volume, can serve to distort the quoted LoD artificially and make comparison between various analytical methods inequitable. In order to compare LoDs between methods properly, it is necessary to state clearly all of the input parameters relating to the measurements that have been used in the calculation of the LoD. Additionally, the article discusses that the use of LoDs in contexts other than the comparison of the attributes of analytical methods, in particular when reporting analytical results, may be confusing, less informative than quoting the actual result with an accompanying statement of uncertainty, and may act to bias descriptive statistics. PMID:18690384
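
    To make the article's point concrete, the sketch below applies a common instrument-based LoD estimate (blank mean plus three blank standard deviations, converted to concentration) and then shows how the chosen sampling volume rescales the quoted figure, which is exactly the kind of input parameter the article says must be stated. The 3-sigma convention and all numbers are illustrative, not taken from the article.

    ```python
    # Illustrative LoD calculation and the effect of sampling volume on the
    # quoted concentration LoD. All values are placeholders.
    import statistics

    blank_signals = [0.41, 0.38, 0.44, 0.40, 0.39, 0.43, 0.42]   # blank responses
    sensitivity = 2.5        # signal units per ng of analyte (calibration slope)

    signal_lod = statistics.mean(blank_signals) + 3 * statistics.stdev(blank_signals)
    mass_lod_ng = (signal_lod - statistics.mean(blank_signals)) / sensitivity

    for volume_m3 in (1.0, 10.0):            # two hypothetical air sampling volumes
        conc_lod = mass_lod_ng / volume_m3
        print(f"sampled {volume_m3:4.1f} m3 -> quoted LoD {conc_lod:.4f} ng/m3")
    # A larger sampled volume makes the quoted LoD look lower even though the
    # instrument's precision is unchanged -- the distortion the article warns about.
    ```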

  20. The use of analytical models in human-computer interface design

    NASA Technical Reports Server (NTRS)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  1. On the Helix Propensity in Generalized Born Solvent Descriptions of Modeling the Dark Proteome

    DTIC Science & Technology

    2017-01-10

    benchmarks of conformational sampling methods and their all-atom force fields plus solvent descriptions to accurately model structural transitions on a ... atom simulations of proteins is the replacement of explicit water interactions with a continuum description of treating implicitly the bulk physical ... structure was reported by Amarasinghe and coworkers (Leung et al., 2015) of the Ebola nucleoprotein NP in complex with a 28-residue peptide extracted

  2. A singularity free analytical solution of artificial satellite motion with drag

    NASA Technical Reports Server (NTRS)

    Mueller, A.

    1978-01-01

    An analytical satellite theory based on the regular, canonical Poincare-Similar (PS phi) elements is described along with an accurate density model which can be implemented into the drag theory. A computationally efficient manner in which to expand the equations of motion into a Fourier series is discussed.

  3. Exact analytic solution for non-linear density fluctuation in a ΛCDM universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jaiyul; Gong, Jinn-Ouk, E-mail: jyoo@physik.uzh.ch, E-mail: jinn-ouk.gong@apctp.org

    We derive the exact third-order analytic solution of the matter density fluctuation in the proper-time hypersurface in a ΛCDM universe, accounting for the explicit time-dependence and clarifying the relation to the initial condition. Furthermore, we compare our analytic solution to the previous calculation in the comoving gauge, and to the standard Newtonian perturbation theory by providing Fourier kernels for the relativistic effects. Our results provide an essential ingredient for a complete description of galaxy bias in the relativistic context.

  4. Recommended procedures and techniques for the petrographic description of bituminous coals

    USGS Publications Warehouse

    Chao, E.C.T.; Minkin, J.A.; Thompson, C.L.

    1982-01-01

    Modern coal petrology requires rapid and precise description of great numbers of coal core or bench samples in order to acquire the information required to understand and predict vertical and lateral variation of coal quality for correlation with coal-bed thickness, depositional environment, suitability for technological uses, etc. Procedures for coal description vary in accordance with the objectives of the description. To achieve our aim of acquiring the maximum amount of quantitative information within the shortest period of time, we have adopted a combined megascopic-microscopic procedure. Megascopic analysis is used to identify the distinctive lithologies present, and microscopic analysis is required only to describe representative examples of the mixed lithologies observed. This procedure greatly decreases the number of microscopic analyses needed for adequate description of a sample. For quantitative megascopic description of coal microlithotypes, microlithotype assemblages, and lithotypes, we use (V) for vitrite or vitrain, (E) for liptite, (I) for inertite or fusain, (M) for mineral layers or lenses other than iron sulfide, (S) for iron sulfide, and (X1), (X2), etc. for mixed lithologies. Microscopic description is expressed in terms of V representing the vitrinite maceral group, E the exinite group, I the inertinite group, and M mineral components. Volume percentages are expressed as subscripts. Thus (V)20(V80E10I5M5)80 indicates a lithotype or assemblage of microlithotypes consisting of 20 vol. % vitrite and 80% of a mixed lithology having a modal maceral composition V80E10I5M5. This bulk composition can alternatively be recalculated and described as V84E8I4M4. To generate these quantitative data rapidly and accurately, we utilize an automated image analysis system (AIAS). Plots of VEIM data on easily constructed ternary diagrams provide readily comprehended illustrations of the range of modal composition of the lithologic units making up a given coal
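
    The recalculation from a combined megascopic-microscopic description to a bulk maceral composition is a weighted average, as the worked example in the abstract shows (20 vol % vitrite plus 80 vol % of a mixed lithology V80E10I5M5 gives V84E8I4M4). The short sketch below reproduces that arithmetic; the dictionary representation is only an illustrative way of organizing the data.

    ```python
    # Bulk maceral composition from a megascopic-microscopic coal description.
    # Example from the abstract: (V)20 (V80 E10 I5 M5)80 -> V84 E8 I4 M4.

    lithologies = [
        # (volume fraction of the lithology, modal maceral composition in vol %)
        (0.20, {"V": 100, "E": 0, "I": 0, "M": 0}),   # pure vitrite layer
        (0.80, {"V": 80, "E": 10, "I": 5, "M": 5}),   # mixed lithology X1
    ]

    bulk = {"V": 0.0, "E": 0.0, "I": 0.0, "M": 0.0}
    for fraction, composition in lithologies:
        for maceral, percent in composition.items():
            bulk[maceral] += fraction * percent

    print(bulk)  # {'V': 84.0, 'E': 8.0, 'I': 4.0, 'M': 4.0}
    ```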

  5. Description of deformed nuclei in the sdg boson model

    NASA Astrophysics Data System (ADS)

    Li, S. C.; Kuyucak, S.

    1996-02-01

    We present a study of deformed nuclei in the framework of the sdg interacting boson model utilizing both numerical diagonalization and analytical 1/N expansion techniques. The focus is on the description of high-spin states which have recently become computationally accessible through the use of computer algebra in the 1/N expansion formalism. A systematic study is made of high-spin states in rare-earth and actinide nuclei.

  6. Analytic theory for the selection of 2-D needle crystal at arbitrary Peclet number

    NASA Technical Reports Server (NTRS)

    Tanveer, Saleh

    1989-01-01

    An accurate analytic theory is presented for the velocity selection of a two-dimensional needle crystal for arbitrary Peclet number for small values of the surface tension parameter. The velocity selection is caused by the effect of transcendentally small terms which are determined by analytic continuation to the complex plane and analysis of nonlinear equations. The work supports the general conclusion of previous small Peclet number analytical results of other investigators, though there are some discrepancies in details. It also addresses questions raised on the validity of selection theory owing to assumptions made on shape corrections at large distances from the tip.

  7. Analytical formulation of impulsive collision avoidance dynamics

    NASA Astrophysics Data System (ADS)

    Bombardelli, Claudio

    2014-02-01

    The paper deals with the problem of impulsive collision avoidance between two colliding objects in three dimensions and assuming elliptical Keplerian orbits. Closed-form analytical expressions are provided that accurately predict the relative dynamics of the two bodies in the encounter b-plane following an impulsive delta-V manoeuvre performed by one object at a given orbit location prior to the impact and with a generic three-dimensional orientation. After verifying the accuracy of the analytical expressions for different orbital eccentricities and encounter geometries the manoeuvre direction that maximises the miss distance is obtained numerically as a function of the arc length separation between the manoeuvre point and the predicted collision point. The provided formulas can be used for high-accuracy instantaneous estimation of the outcome of a generic impulsive collision avoidance manoeuvre and its optimisation.

  8. Creating Body Shapes From Verbal Descriptions by Linking Similarity Spaces.

    PubMed

    Hill, Matthew Q; Streuber, Stephan; Hahn, Carina A; Black, Michael J; O'Toole, Alice J

    2016-11-01

    Brief verbal descriptions of people's bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features. © The Author(s) 2016.

  9. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

    The deconvolution in the perfusion weighted imaging (PWI) plays an important role in quantifying the MR perfusion parameters. The PWI application to stroke and brain tumor studies has become a standard clinical practice. The standard approach for this deconvolution is oscillatory-limited singular value decomposition (oSVD) and frequency domain deconvolution (FDD). The FDD is widely recognized as the fastest approach currently available for deconvolution of MR perfusion data. In this work, two fast deconvolution methods (namely analytical Fourier filtering and analytical Showalter spectral filtering) are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.
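
    For readers unfamiliar with frequency-domain deconvolution of perfusion data, the sketch below shows the generic structure of such an approach: divide the Fourier transform of the tissue curve by that of the arterial input function and suppress noise-dominated frequencies with a filter. The truncation filter, the synthetic curves, and all parameter values are illustrative assumptions; this is not the analytical Fourier or Showalter filter proposed by the authors.

    ```python
    import numpy as np

    def fourier_deconvolve(tissue_curve, aif, dt, threshold=0.15):
        """Generic frequency-domain deconvolution of a tissue concentration
        curve by an arterial input function (AIF), with a simple truncation
        filter to limit noise amplification. Illustrative only; not the
        exact analytical filters proposed in the paper."""
        n = len(tissue_curve)
        aif_f = np.fft.fft(aif)
        tissue_f = np.fft.fft(tissue_curve)

        # Keep only frequencies where the AIF spectrum is not negligible
        mask = np.abs(aif_f) > threshold * np.abs(aif_f).max()
        residue_f = np.zeros(n, dtype=complex)
        residue_f[mask] = tissue_f[mask] / aif_f[mask]

        # The recovered residue function R(t); flow scales with its maximum
        return np.real(np.fft.ifft(residue_f)) / dt

    # Minimal synthetic example
    dt = 1.0
    t = np.arange(60) * dt
    aif = np.exp(-((t - 15) ** 2) / 20.0)                 # toy arterial input
    true_residue = np.exp(-t / 8.0)                       # toy residue function
    tissue = dt * np.convolve(aif, true_residue)[:60]     # forward model
    recovered = fourier_deconvolve(tissue, aif, dt)
    print(f"Estimated residue peak: {recovered.max():.2f}")
    ```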

  10. Analytical Model For Fluid Dynamics In A Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Naumann, Robert J.

    1995-01-01

    Report presents analytical approximation methodology for providing coupled fluid-flow, heat, and mass-transfer equations in microgravity environment. Experimental engineering estimates accurate to within factor of 2 made quickly and easily, eliminating need for time-consuming and costly numerical modeling. Any proposed experiment reviewed to see how it would perform in microgravity environment. Model applied in commercial setting for preliminary design of low-Grashof/Rayleigh-number experiments.

  11. Force 2025 and Beyond Strategic Force Design Analytic Model

    DTIC Science & Technology

    2017-01-12

    depiction of the core ideas of our force design model (Figure 1: Description of Force Design Model; Figure 2: overview of our methodology) ... the F2025B Force Design Analytic Model research conducted by TRAC-MTRY and the Naval Postgraduate School. Our research develops a methodology for ... designs. We describe a data development methodology that characterizes the data required to construct a force design model using our approach.

  12. Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective

    PubMed Central

    Mench-Bressan, Nadja; McGregor, Carolyn; Pugh, James Edward

    2015-01-01

    The effective use of data within intensive care units (ICUs) has great potential to create new cloud-based health analytics solutions for disease prevention or earlier condition onset detection. The Artemis project aims to achieve the above goals in the area of neonatal ICUs (NICU). In this paper, we proposed an analytical model for the Artemis cloud project which will be deployed at McMaster Children’s Hospital in Hamilton. We collect not only physiological data but also the infusion pumps data that are attached to NICU beds. Using the proposed analytical model, we predict the amount of storage, memory, and computation power required for the system. Capacity planning and tradeoff analysis would be more accurate and systematic by applying the proposed analytical model in this paper. Numerical results are obtained using real inputs acquired from McMaster Children’s Hospital and a pilot deployment of the system at The Hospital for Sick Children (SickKids) in Toronto. PMID:27170907

  13. Continuous Metabolic Monitoring Based on Multi-Analyte Biomarkers to Predict Exhaustion

    PubMed Central

    Kastellorizios, Michail; Burgess, Diane J.

    2015-01-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject’s perception. PMID:26028477

  14. Continuous metabolic monitoring based on multi-analyte biomarkers to predict exhaustion.

    PubMed

    Kastellorizios, Michail; Burgess, Diane J

    2015-06-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject's perception.

  15. Assessment of the analytical capabilities of inductively coupled plasma-mass spectrometry

    USGS Publications Warehouse

    Taylor, Howard E.; Garbarino, John R.

    1988-01-01

    A thorough assessment of the analytical capabilities of inductively coupled plasma-mass spectrometry was conducted for selected analytes of importance in water quality applications and hydrologic research. A multielement calibration curve technique was designed to produce accurate and precise results in analysis times of approximately one minute. The suite of elements included Al, As, B, Ba, Be, Cd, Co, Cr, Cu, Hg, Li, Mn, Mo, Ni, Pb, Se, Sr, V, and Zn. The effects of sample matrix composition on the accuracy of the determinations showed that matrix elements (such as Na, Ca, Mg, and K) that may be present in natural water samples at concentration levels greater than 50 mg/L resulted in as much as a 10% suppression in ion current for analyte elements. Operational detection limits are presented.

  16. Two-dimensional analytic weighting functions for limb scattering

    NASA Astrophysics Data System (ADS)

    Zawada, D. J.; Bourassa, A. E.; Degenstein, D. A.

    2017-10-01

    Through the inversion of limb scatter measurements it is possible to obtain vertical profiles of trace species in the atmosphere. Many of these inversion methods require what is often referred to as weighting functions, or derivatives of the radiance with respect to concentrations of trace species in the atmosphere. Several radiative transfer models have implemented analytic methods to calculate weighting functions, alleviating the computational burden of traditional numerical perturbation methods. Here we describe the implementation of analytic two-dimensional weighting functions, where derivatives are calculated relative to atmospheric constituents in a two-dimensional grid of altitude and angle along the line of sight direction, in the SASKTRAN-HR radiative transfer model. Two-dimensional weighting functions are required for two-dimensional inversions of limb scatter measurements. Examples are presented where the analytic two-dimensional weighting functions are calculated with an underlying one-dimensional atmosphere. It is shown that the analytic weighting functions are more accurate than ones calculated with a single scatter approximation, and are orders of magnitude faster than a typical perturbation method. Evidence is presented that weighting functions for stratospheric aerosols calculated under a single scatter approximation may not be suitable for use in retrieval algorithms under solar backscatter conditions.

  17. Modeling Biodegradation and Reactive Transport: Analytical and Numerical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Y; Glascoe, L

    The computational modeling of the biodegradation of contaminated groundwater systems accounting for biochemical reactions coupled to contaminant transport is a valuable tool for both the field engineer/planner with limited computational resources and the expert computational researcher less constrained by time and computer power. There exist several analytical and numerical computer models that have been and are being developed to cover the practical needs put forth by users to fulfill this spectrum of computational demands. Generally, analytical models provide rapid and convenient screening tools running on very limited computational power, while numerical models can provide more detailed information with consequent requirements of greater computational time and effort. While these analytical and numerical computer models can provide accurate and adequate information to produce defensible remediation strategies, decisions based on inadequate modeling output or on over-analysis can have costly and risky consequences. In this chapter we consider both analytical and numerical modeling approaches to biodegradation and reactive transport. Both approaches are discussed and analyzed in terms of achieving bioremediation goals, recognizing that there is always a tradeoff between computational cost and the resolution of simulated systems.

  18. Semi-Analytic Reconstruction of Flux in Finite Volume Formulations

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2006-01-01

    Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws including effects of thermochemical nonequilibrium in the Navier-Stokes equations is developed.
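
    As background for the comparison, the sketch below implements the conventional baseline mentioned in the abstract: a first-order upwind finite-volume scheme with a Roe-type flux applied to the inviscid Burgers' equation. It is a generic reference implementation, not the semi-analytic reconstruction or EPWS scheme; the grid, CFL number, and initial condition are illustrative.

    ```python
    import numpy as np

    def roe_flux_burgers(u_left, u_right):
        """First-order Roe numerical flux for the inviscid Burgers equation,
        f(u) = u^2 / 2, with Roe-averaged wave speed a = (uL + uR) / 2."""
        physical = 0.5 * (0.5 * u_left**2 + 0.5 * u_right**2)
        a = 0.5 * (u_left + u_right)
        return physical - 0.5 * np.abs(a) * (u_right - u_left)

    # Finite-volume update of a step profile (illustrative baseline only)
    nx, cfl, t_end = 200, 0.4, 0.3
    x = np.linspace(0.0, 1.0, nx)
    u = np.where(x < 0.5, 1.0, 0.0)   # right-moving shock initial condition

    t = 0.0
    while t < t_end:
        dt = cfl * (x[1] - x[0]) / max(np.abs(u).max(), 1e-12)
        flux = roe_flux_burgers(u[:-1], u[1:])          # fluxes at interior faces
        u[1:-1] -= dt / (x[1] - x[0]) * (flux[1:] - flux[:-1])
        t += dt

    print(f"Shock position ~ {x[np.argmin(np.abs(u - 0.5))]:.3f} (exact ~ 0.65)")
    ```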

  19. Relationship Between Job Characteristics and Organizational Commitment: A Descriptive Analytical Study.

    PubMed

    Faraji, Obeidollah; Ramazani, Abbas Ali; Hedaiati, Pouria; Aliabadi, Ali; Elhamirad, Samira; Valiee, Sina

    2015-11-01

    Many factors influence the organizational commitment of employees. One of these factors is job designing since it affects the attitude, beliefs, and feelings of the organization employees. We aimed to determine the relationship between job characteristics and organizational commitment among the employees of hospitals. In this descriptive and correlational study, 152 Iranian employees of the hospitals (physicians, nurses, and administrative staff) were selected through stratified random sampling. Data gathered using 3-part questionnaire of "demographic information", "job characteristics model," and "organizational commitment," in 2011. Study data were analyzed using SPSS v. 16. There was significant statistical correlation between organizational commitment and variables of educational level (P = 0.001) and job category (P = 0.001). Also, a direct and significant correlation existed between motivating potential score and job feedback on one hand and organizational commitment on the other hand (P = 0.014). According to the results, managers of the hospitals should increase staff's commitment through paying attention to proper job designing.

  20. Analytical model for the density distribution in the Io plasma torus

    NASA Technical Reports Server (NTRS)

    Mei, YI; Thorne, Richard M.; Bagenal, Fran

    1995-01-01

    An analytical model is developed for the diffusive equilibrium plasma density distribution in the Io plasma torus. The model has been employed successfully to follow the ray path of plasma waves in the multi-ion Jovian magnetosphere; it would also be valuable for other studies of the Io torus that require a smooth and continuous description of the plasma density and its gradients. Validity of the analytical treatment requires that the temperature of thermal electrons be much lower than the ion temperature and that superthermal electrons be much less abundant than the thermal electrons; these two conditions are satisfied in the warm outer region of the Io torus from L = 6 to L = 10. The analytical solutions agree well with exact numerical calculations for the most dense portion of the Io torus within 30 deg of the equator.

  1. Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.

    PubMed

    Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María

    2017-01-01

    This study aims at providing recommendations concerning the validation of analytical protocols by using routine samples. It is intended to provide a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work has performed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols by the analysis of more than 30 samples of water and sediments collected over nine months. The present work also assesses the uncertainty associated with both analytical protocols. In detail, the uncertainty for the water samples was estimated through a conventional approach, whereas for the sediment matrices the estimation of proportional/constant bias is also included because of their inhomogeneity. Results for the sediment matrix are reliable, showing a range of 25-35% of analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. The analysis of routine samples is rarely used to assess the trueness of novel analytical methods, and until now this approach had not been applied to organochlorine compounds in environmental matrices.

  2. Accurate interatomic force fields via machine learning with covariant kernels

    NASA Astrophysics Data System (ADS)

    Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro

    2017-06-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
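
    The construction described in the last step, summing over a finite point group instead of integrating over SO(3), can be sketched as follows. The base kernel (a sum of Gaussians over neighbor pairs), the choice of the C4 group about the z axis, and all parameters are illustrative assumptions; this is not the authors' implementation.

    ```python
    import numpy as np

    def base_kernel(env_a, env_b, sigma=0.5):
        """Scalar base kernel between two local environments, each an (N, 3)
        array of neighbor displacement vectors: a sum of Gaussians over all
        neighbor pairs (an illustrative choice, not the paper's exact kernel)."""
        diff = env_a[:, None, :] - env_b[None, :, :]
        return np.exp(-np.sum(diff**2, axis=-1) / (2.0 * sigma**2)).sum()

    def rotation_z(angle):
        c, s = np.cos(angle), np.sin(angle)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    # Finite point group used for the symmetrisation: the C4 rotations about z
    point_group = [rotation_z(k * np.pi / 2) for k in range(4)]

    def covariant_kernel(env_a, env_b):
        """Matrix-valued (3x3) kernel obtained by summing the rotated base
        kernel over the finite point group: K = sum_R R * k_b(rho_a, R rho_b)."""
        K = np.zeros((3, 3))
        for R in point_group:
            K += R * base_kernel(env_a, env_b @ R.T)   # rotate environment b by R
        return K

    # Two toy environments (neighbor vectors around a central atom)
    rng = np.random.default_rng(0)
    env1, env2 = rng.normal(size=(4, 3)), rng.normal(size=(5, 3))
    print(covariant_kernel(env1, env2))
    ```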

  3. Mapping the Diversity among Runaways: A Descriptive Multivariate Analysis of Selected Social Psychological Background Conditions.

    ERIC Educational Resources Information Center

    Brennan, Tim

    1980-01-01

    A review of prior classification systems of runaways is followed by a descriptive taxonomy of runaways developed using cluster-analytic methods. The empirical types illustrate patterns of weakness in bonds between runaways and families, schools, or peer relationships. (Author)

  4. Analytical and Clinical Performance of Blood Glucose Monitors

    PubMed Central

    Boren, Suzanne Austin; Clarke, William L.

    2010-01-01

    Background: The objective of this study was to understand the level of performance of blood glucose monitors as assessed in the published literature. Methods: Medline from January 2000 to October 2009 and reference lists of included articles were searched to identify eligible studies. Key information was abstracted from eligible studies: blood glucose meters tested, blood sample, meter operators, setting, sample of people (number, diabetes type, age, sex, and race), duration of diabetes, years using a glucose meter, insulin use, recommendations followed, performance evaluation measures, and specific factors affecting the accuracy evaluation of blood glucose monitors. Results: Thirty-one articles were included in this review. Articles were categorized as review articles of blood glucose accuracy (6 articles), original studies that reported the performance of blood glucose meters in laboratory settings (14 articles) or clinical settings (9 articles), and simulation studies (2 articles). A variety of performance evaluation measures were used in the studies. The authors did not identify any studies that demonstrated a difference in clinical outcomes. Examples of analytical tools used in the description of accuracy (e.g., correlation coefficient, linear regression equations, and International Organization for Standardization standards) and how these traditional measures can complicate the achievement of target blood glucose levels for the patient were presented. The benefits of using error grid analysis to quantify the clinical accuracy of patient-determined blood glucose values were discussed. Conclusions: When examining blood glucose monitor performance in the real world, it is important to consider if an improvement in analytical accuracy would lead to improved clinical outcomes for patients. There are several examples of how analytical tools used in the description of self-monitoring of blood glucose accuracy could be irrelevant to treatment decisions. PMID:20167171

  5. Applications of Optical Microcavity Resonators in Analytical Chemistry

    PubMed Central

    Wade, James H.; Bailey, Ryan C.

    2018-01-01

    Optical resonator sensors are an emerging class of analytical technologies that use recirculating light confined within a microcavity to sensitively measure the surrounding environment. Bolstered by advances in microfabrication, these devices can be configured for a wide variety of chemical or biomolecular sensing applications. The review begins with a brief description of optical resonator sensor operation followed by discussions regarding sensor design, including different geometries, choices of material systems, methods of sensor interrogation, and new approaches to sensor operation. Throughout, key recent developments are highlighted, including advancements in biosensing and other applications of optical sensors. Alternative sensing mechanisms and hybrid sensing devices are then discussed in terms of their potential for more sensitive and rapid analyses. Brief concluding statements offer our perspective on the future of optical microcavity sensors and their promise as versatile detection elements within analytical chemistry. PMID:27049629

  6. Descriptive Drinking Norms: For Whom Does Reference Group Matter?*

    PubMed Central

    Larimer, Mary E.; Neighbors, Clayton; LaBrie, Joseph W.; Atkins, David C.; Lewis, Melissa A.; Lee, Christine M.; Kilmer, Jason R.; Kaysen, Debra L.; Pedersen, Eric R.; Montoya, Heidi; Hodge, Kimberley; Desai, Sruti; Hummer, Justin F.; Walter, Theresa

    2011-01-01

    Objective: Perceived descriptive drinking norms often differ from actual norms and are positively related to personal consumption. However, it is not clear how normative perceptions vary with specificity of the reference group. Are drinking norms more accurate and more closely related to drinking behavior as reference group specificity increases? Do these relationships vary as a function of participant demographics? The present study examined the relationship between perceived descriptive norms and drinking behavior by ethnicity (Asian or White), sex, and fraternity/sorority status. Method: Participants were 2,699 (58% female) White (75%) or Asian (25%) undergraduates from two universities who reported their own alcohol use and perceived descriptive norms for eight reference groups: "typical student"; same sex, ethnicity, or fraternity/sorority status; and all combinations of these three factors. Results: Participants generally reported the highest perceived norms for the most distal reference group (typical student), with perceptions becoming more accurate as individuals' similarity to the reference group increased. Despite increased accuracy, participants perceived that all reference groups drank more than was actually the case. Across specific subgroups (fraternity/sorority members and men) different patterns emerged. Fraternity/sorority members reliably reported higher estimates of drinking for reference groups that included fraternity/sorority status, and, to a lesser extent, men reported higher estimates for reference groups that included men. Conclusions: The results suggest that interventions targeting normative misperceptions may need to provide feedback based on participant demography or group membership. Although reference group-specific feedback may be important for some subgroups, typical student feedback provides the largest normative discrepancy for the majority of students. PMID:21906510

  7. Recent trends in analytical procedures in forensic toxicology.

    PubMed

    Van Bocxlaer, Jan F

    2005-12-01

    Forensic toxicology is a very demanding discipline, heavily dependent on good analytical techniques. That is why new trends appear continuously. In the past years, LC-MS has revolutionized target compound analysis and has become the trend, also in toxicology. In LC-MS screening analysis, things are less straightforward and several approaches exist. One promising approach based on accurate LC-MSTOF mass measurements and elemental formula based library searches is discussed. This way of screening has already proven its applicability but at the same time it became obvious that a single accurate mass measurement lacks some specificity when using large compound libraries. CE too is a reemerging approach. The increasingly polar and ionic molecules encountered make it a worthwhile addition to e.g. LC, as illustrated for the analysis of GHB. A third recent trend is the use of MALDI mass spectrometry for small molecules. It is promising for its ease-of-use and high throughput. Unfortunately, reports of disappointment but also accomplishment, e.g. the quantitative analysis of LSD as discussed here, alternate, and it remains to be seen whether MALDI really will establish itself. Indeed, not all new trends will prove themselves but the mere fact that many appear in the world of analytical toxicology nowadays is, in itself, encouraging for the future of (forensic) toxicology.
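
    The elemental-formula-based library search mentioned for accurate-mass screening can be illustrated with a brute-force formula generator: enumerate candidate compositions and keep those whose computed m/z falls within a ppm tolerance of the measurement. The element set, search ranges, and tolerance below are illustrative assumptions; the monoisotopic masses are standard values.

    ```python
    from itertools import product

    # Monoisotopic masses (u) of common elements and of the electron
    MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}
    ELECTRON = 0.00054858

    def candidate_formulas(mz, charge=1, tol_ppm=5.0,
                           max_c=30, max_h=60, max_n=5, max_o=10):
        """Enumerate CcHhNnOo formulas whose protonated monoisotopic m/z lies
        within tol_ppm of the measured value. Purely illustrative search ranges."""
        neutral_target = mz * charge - charge * (MASS["H"] - ELECTRON)  # [M+H]+
        hits = []
        for c, h, n, o in product(range(1, max_c + 1), range(0, max_h + 1),
                                  range(0, max_n + 1), range(0, max_o + 1)):
            m = c * MASS["C"] + h * MASS["H"] + n * MASS["N"] + o * MASS["O"]
            if abs(m - neutral_target) / neutral_target * 1e6 <= tol_ppm:
                hits.append((f"C{c}H{h}N{n}O{o}", m))
        return hits

    # Example: measured [M+H]+ of caffeine (C8H10N4O2), nominal m/z 195.0877
    for formula, mass in candidate_formulas(195.0877):
        print(formula, round(mass, 4))
    ```

    As the abstract notes, a single accurate mass can still return several plausible formulas once the candidate library grows large, which is why complementary information (retention time, isotope pattern, fragmentation) is usually needed to confirm an identification.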

  8. An Analytical State Transition Matrix for Orbits Perturbed by an Oblate Spheroid

    NASA Technical Reports Server (NTRS)

    Mueller, A. C.

    1977-01-01

    An analytical state transition matrix and its inverse, which include the short period and secular effects of the second zonal harmonic, were developed from the nonsingular PS satellite theory. The fact that the independent variable in the PS theory is not time is in no respect disadvantageous, since any explicit analytical solution must be expressed in the true or eccentric anomaly. This is shown to be the case for the simple conic matrix. The PS theory allows for a concise, accurate, and algorithmically simple state transition matrix. The improvement over the conic matrix ranges from 2 to 4 digits accuracy.

  9. Accurate description of charged excitations in molecular solids from embedded many-body perturbation theory

    NASA Astrophysics Data System (ADS)

    Li, Jing; D'Avino, Gabriele; Duchemin, Ivan; Beljonne, David; Blase, Xavier

    2018-01-01

    We present a novel hybrid quantum/classical approach to the calculation of charged excitations in molecular solids based on the many-body Green's function GW formalism. Molecules described at the GW level are embedded into the crystalline environment modeled with an accurate classical polarizable scheme. This allows the calculation of electron addition and removal energies in the bulk and at crystal surfaces where charged excitations are probed in photoelectron experiments. By considering the paradigmatic case of pentacene and perfluoropentacene crystals, we discuss the different contributions from intermolecular interactions to electronic energy levels, distinguishing between polarization, which is accounted for combining quantum and classical polarizabilities, and crystal field effects, that can impact energy levels by up to ±0.6 eV. After introducing band dispersion, we achieve quantitative agreement (within 0.2 eV) on the ionization potential and electron affinity measured at pentacene and perfluoropentacene crystal surfaces characterized by standing molecules.

  10. Analytical transition-matrix treatment of electric multipole polarizabilities of hydrogen-like atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kharchenko, V.F., E-mail: vkharchenko@bitp.kiev.ua

    2015-04-15

    The direct transition-matrix approach to the description of the electric polarization of the quantum bound system of particles is used to determine the electric multipole polarizabilities of the hydrogen-like atoms. It is shown that in the case of the bound system formed by the Coulomb interaction the corresponding inhomogeneous integral equation determining an off-shell scattering function, which consistently describes virtual multiple scattering, can be solved exactly analytically for all electric multipole polarizabilities. Our method allows us to reproduce the known Dalgarno–Lewis formula for electric multipole polarizabilities of the hydrogen atom in the ground state and can also be applied to determine the polarizability of the atom in excited bound states. Highlights: • A new description for electric polarization of hydrogen-like atoms. • Expression for multipole polarizabilities in terms of off-shell scattering functions. • Derivation of the integral equation determining the off-shell scattering function. • Rigorous analytic solution of the integral equations both for ground and excited states. • Study of contributions of virtual multiple scattering to electric polarizabilities.

  11. Accurate Identification of MCI Patients via Enriched White-Matter Connectivity Network

    NASA Astrophysics Data System (ADS)

    Wee, Chong-Yaw; Yap, Pew-Thian; Brownyke, Jeffery N.; Potter, Guy G.; Steffens, David C.; Welsh-Bohmer, Kathleen; Wang, Lihong; Shen, Dinggang

    Mild cognitive impairment (MCI), often a prodromal phase of Alzheimer's disease (AD), is frequently considered to be a good target for early diagnosis and therapeutic interventions of AD. The recent emergence of reliable network characterization techniques has made understanding neurological disorders at a whole brain connectivity level possible. Accordingly, we propose a network-based multivariate classification algorithm, using a collection of measures derived from white-matter (WM) connectivity networks, to accurately identify MCI patients from normal controls. An enriched description of WM connections, utilizing six physiological parameters, i.e., fiber penetration count, fractional anisotropy (FA), mean diffusivity (MD), and principal diffusivities (λ1, λ2, λ3), results in six connectivity networks for each subject to account for the connection topology and the biophysical properties of the connections. Upon parcellating the brain into 90 regions-of-interest (ROIs), the average statistics of each ROI in relation to the remaining ROIs are extracted as features for classification. These features are then sieved to select the most discriminant subset of features for building an MCI classifier via support vector machines (SVMs). Cross-validation results indicate better diagnostic power of the proposed enriched WM connection description than simple description with any single physiological parameter.
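
    The final classification step (regional features sieved by a selector and fed to an SVM under cross-validation) follows a standard pattern that can be sketched with scikit-learn on synthetic data, as below. The kernel, the univariate selector, the value of k, and the random features are illustrative assumptions, not the exact protocol of the paper.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for per-ROI connectivity statistics:
    # 60 subjects x (90 ROIs x 6 networks) features, with binary MCI/control labels
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 90 * 6))
    y = rng.integers(0, 2, size=60)

    # Feature sieving followed by an SVM classifier, evaluated by cross-validation.
    # Putting the selector inside the pipeline keeps selection within each CV fold.
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=50),
                        SVC(kernel="linear", C=1.0))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
    ```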

  12. Relationship Between Job Characteristics and Organizational Commitment: A Descriptive Analytical Study

    PubMed Central

    Faraji, Obeidollah; Ramazani, Abbas Ali; Hedaiati, Pouria; Aliabadi, Ali; Elhamirad, Samira; Valiee, Sina

    2015-01-01

    Background: Many factors influence the organizational commitment of employees. One of these factors is job designing since it affects the attitude, beliefs, and feelings of the organization employees. Objectives: We aimed to determine the relationship between job characteristics and organizational commitment among the employees of hospitals. Patients and Methods: In this descriptive and correlational study, 152 Iranian employees of the hospitals (physicians, nurses, and administrative staff) were selected through stratified random sampling. Data gathered using 3-part questionnaire of “demographic information”, “job characteristics model,” and “organizational commitment,” in 2011. Study data were analyzed using SPSS v. 16. Results: There was significant statistical correlation between organizational commitment and variables of educational level (P = 0.001) and job category (P = 0.001). Also, a direct and significant correlation existed between motivating potential score and job feedback on one hand and organizational commitment on the other hand (P = 0.014). Conclusions: According to the results, managers of the hospitals should increase staff’s commitment through paying attention to proper job designing. PMID:26734472

  13. An analytical model for regular respiratory signals derived from the probability density function of Rayleigh distribution.

    PubMed

    Li, Xin; Li, Ye

    2015-01-01

    Regular respiratory signals (RRSs) acquired with physiological sensing systems (e.g., the life-detection radar system) can be used to locate survivors trapped in debris in disaster rescue, or predict the breathing motion to allow beam delivery under free breathing conditions in external beam radiotherapy. Among the existing analytical models for RRSs, the harmonic-based random model (HRM) is shown to be the most accurate, which, however, is found to be subject to considerable error if the RRS has a slowly descending end-of-exhale (EOE) phase. The defect of the HRM motivates us to construct a more accurate analytical model for the RRS. In this paper, we derive a new analytical RRS model from the probability density function of Rayleigh distribution. We evaluate the derived RRS model by using it to fit a real-life RRS in the sense of least squares, and the evaluation result shows that, our presented model exhibits lower error and fits the slowly descending EOE phases of the real-life RRS better than the HRM.
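
    For reference, the Rayleigh probability density the model builds on is f(x; σ) = (x/σ²)·exp(−x²/(2σ²)). The sketch below fits a shifted, amplitude-scaled version of this shape to a synthetic single breath by least squares, mirroring the evaluation strategy described; the parameterization and the synthetic data are illustrative assumptions rather than the paper's exact model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rayleigh_template(t, amplitude, sigma, t0):
        """One respiratory cycle shaped like the Rayleigh probability density
        f(x; sigma) = (x / sigma^2) exp(-x^2 / (2 sigma^2)), shifted to start
        at t0 and scaled in amplitude. Parameterization is illustrative only."""
        x = np.clip(t - t0, 0.0, None)
        return amplitude * (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

    # Synthetic "measured" breath with a slowly descending end-of-exhale phase
    t = np.linspace(0.0, 5.0, 200)
    noise = 0.02 * np.random.default_rng(2).normal(size=t.size)
    measured = rayleigh_template(t, 1.0, 1.2, 0.3) + noise

    # Least-squares fit, as in the paper's evaluation strategy
    params, _ = curve_fit(rayleigh_template, t, measured, p0=[1.0, 1.0, 0.0])
    print("amplitude, sigma, t0 =", np.round(params, 3))
    ```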

  14. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.
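
    The analysis the abstract refers to is commonly based on the generalized Planck (Würfel) relation, in which the absorptivity A(E) enters multiplicatively; the standard form is quoted below for orientation and is not necessarily the authors' exact expression.

    ```latex
    % Generalized Planck (Wuerfel) relation commonly used to analyse hot-carrier
    % photoluminescence: A(E) is the absorptivity, T the carrier temperature and
    % \Delta\mu the quasi-Fermi-level splitting (chemical potential).
    I_{\mathrm{PL}}(E) = \frac{2\pi E^{2}}{h^{3}c^{2}}
      \, \frac{A(E)}{\exp\!\left(\frac{E-\Delta\mu}{k_{\mathrm{B}}T}\right) - 1}
      \;\approx\; \frac{2\pi E^{2}}{h^{3}c^{2}} \, A(E)
      \, e^{-(E-\Delta\mu)/k_{\mathrm{B}}T}
      \quad (E-\Delta\mu \gg k_{\mathrm{B}}T)
    ```

    In the Boltzmann limit, a plot of ln[I_PL(E)/(A(E)·E²)] against E is a straight line with slope −1/(k_B·T) and an intercept set by Δμ, which is why neglecting the spectral variation of A(E) directly biases the extracted temperature and chemical potential.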

  15. Comparison of Cluster, Slab, and Analytic Potential Models for the Dimethyl Methylphosphonate (DMMP)/TiO2 (110) Intermolecular Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Li; Tunega, Daniel; Xu, Lai

    2013-08-29

    In a previous study (J. Phys. Chem. C 2011, 115, 12403) cluster models for the TiO2 rutile (110) surface and MP2 calculations were used to develop an analytic potential energy function for dimethyl methylphosphonate (DMMP) interacting with this surface. In the work presented here, this analytic potential and MP2 cluster models are compared with DFT "slab" calculations for DMMP interacting with the TiO2 (110) surface and with DFT cluster models for the TiO2 (110) surface. The DFT slab calculations were performed with the PW91 and PBE functionals. The analytic potential gives DMMP/TiO2 (110) potential energy curves in excellent agreement with those obtained from the slab calculations. The cluster models for the TiO2 (110) surface, used for the MP2 calculations, were extended to DFT calculations with the B3LYP, PW91, and PBE functionals. These DFT calculations do not give DMMP/TiO2 (110) interaction energies which agree with those from the DFT slab calculations. Analyses of the wave functions for these cluster models show that they do not accurately represent the HOMO and LUMO for the surface, which should be 2p and 3d orbitals, respectively, and the models also do not give an accurate band gap. The MP2 cluster models do not accurately represent the LUMO, and the fact that they give accurate DMMP/TiO2 (110) interaction energies is apparently fortuitous, arising from their highly inaccurate band gaps. Accurate cluster models, consisting of 7, 10, and 15 Ti-atoms and which have the correct HOMO and LUMO properties, are proposed. The work presented here illustrates the care that must be taken in "constructing" cluster models which accurately model surfaces.

  16. Analytical, Numerical, and Experimental Results on Turbulent Boundary Layers

    DTIC Science & Technology

    1976-07-01

    a pitot pressure rake where the spacing between probe centers was 0.5 in. near the wall and 1.0 in. away from the wall. Recently, measurements have ... Pressure Gradient, Part II. Analysis of the Experimental Data." BRL R 1543, June 1971. 51. Allen, J. M. "Pitot-Probe Displacement in a Supersonic Turbulent ... numbers; (4) a description of the data reduction of pitot pressure measurements utilizing these analytical results in order to obtain velocity

  17. A non-grey analytical model for irradiated atmospheres. II. Analytical vs. numerical solutions

    NASA Astrophysics Data System (ADS)

    Parmentier, Vivien; Guillot, Tristan; Fortney, Jonathan J.; Marley, Mark S.

    2015-02-01

    Context. The recent discovery and characterization of the diversity of the atmospheres of exoplanets and brown dwarfs calls for the development of fast and accurate analytical models. Aims: We wish to assess the goodness of the different approximations used to solve the radiative transfer problem in irradiated atmospheres analytically, and we aim to provide a useful tool for a fast computation of analytical temperature profiles that remains correct over a wide range of atmospheric characteristics. Methods: We quantify the accuracy of the analytical solution derived in paper I for an irradiated, non-grey atmosphere by comparing it to a state-of-the-art radiative transfer model. Then, using a grid of numerical models, we calibrate the different coefficients of our analytical model for irradiated solar-composition atmospheres of giant exoplanets and brown dwarfs. Results: We show that the so-called Eddington approximation used to solve the angular dependency of the radiation field leads to relative errors of up to ~5% on the temperature profile. For grey or semi-grey atmospheres (i.e., when the visible and thermal opacities, respectively, can be considered independent of wavelength), we show that the presence of a convective zone has a limited effect on the radiative atmosphere above it and leads to modifications of the radiative temperature profile of approximately ~2%. However, for realistic non-grey planetary atmospheres, the presence of a convective zone that extends to optical depths smaller than unity can lead to changes in the radiative temperature profile on the order of 20% or more. When the convective zone is located at deeper levels (such as for strongly irradiated hot Jupiters), its effect on the radiative atmosphere is again on the same order (~2%) as in the semi-grey case. We show that the temperature inversion induced by a strong absorber in the optical, such as TiO or VO is mainly due to non-grey thermal effects reducing the ability of the upper

  18. Analytical and finite element simulation of a three-bar torsion spring

    NASA Astrophysics Data System (ADS)

    Rădoi, M.; Cicone, T.

    2016-08-01

    The present study is dedicated to the innovative 3-bar torsion spring used as the suspension solution for the first time at Lunokhod-1, the first autonomous vehicle sent to explore the Moon in the early 1970s by the former USSR. The paper describes a simple analytical model for the calculation of the spring's static characteristics, taking into account both torsion and bending effects. Closed-form solutions of this model allow quick and elegant parametric analysis. A comparison with a single torsion bar of the same stiffness reveals an increase of the maximum stress of more than 50%. A 3D finite element (FE) simulation is proposed to evaluate the accuracy of the analytical model. The model was meshed in an automated pattern (sweep for hubs and tetrahedrons for bars) with mesh morphing. Very close agreement between the analytical and numerical solutions was found, confirming that the analytical model is accurate. The 3D finite element simulation was also used to evaluate the effects of design details such as the fillet radius of the bars or contact stresses in the hex hub.
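
    For orientation, the closed-form quantities for the single-bar reference case used in the comparison are elementary: torsional stiffness k = GJ/L with J = πd⁴/32, and peak shear stress τ_max = Tr/J. The sketch below evaluates them for a hypothetical bar; it is not the paper's three-bar model, and all numbers are assumed.

    ```python
    import math

    def torsion_bar(diameter, length, shear_modulus):
        """Closed-form stiffness and peak-shear-stress factor for a single
        circular torsion bar (the reference case in the comparison above);
        not the paper's three-bar model."""
        J = math.pi * diameter**4 / 32.0          # polar second moment of area
        stiffness = shear_modulus * J / length    # torque per radian of twist
        stress_per_torque = diameter / (2.0 * J)  # tau_max = T * r / J
        return stiffness, stress_per_torque

    # Hypothetical steel bar: d = 10 mm, L = 300 mm, G = 79 GPa
    k, s = torsion_bar(0.010, 0.300, 79e9)
    torque = 20.0  # N*m, illustrative load
    print(f"Stiffness: {k:.1f} N*m/rad, peak shear stress: {s * torque / 1e6:.1f} MPa")
    ```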

  19. Incorporating photon recycling into the analytical drift-diffusion model of high efficiency solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumb, Matthew P.; Naval Research Laboratory, Washington, DC 20375; Steiner, Myles A.

    The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by the photon recycling, with consequences for the solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.

  20. Field-driven chiral bubble dynamics analysed by a semi-analytical approach

    NASA Astrophysics Data System (ADS)

    Vandermeulen, J.; Leliaert, J.; Dupré, L.; Van Waeyenberge, B.

    2017-12-01

    Nowadays, field-driven chiral bubble dynamics in the presence of the Dzyaloshinskii-Moriya interaction are a topic of thorough investigation. In this paper, a semi-analytical approach is used to derive equations of motion that express the bubble wall (BW) velocity and the change in in-plane magnetization angle as function of the micromagnetic parameters of the involved interactions, thereby taking into account the two-dimensional nature of the bubble wall. It is demonstrated that the equations of motion enable an accurate description of the expanding and shrinking convex bubble dynamics and an expression for the transition field between shrinkage and expansion is derived. In addition, these equations of motion show that the BW velocity is not only dependent on the driving force, but also on the BW curvature. The absolute BW velocity increases for both a shrinking and an expanding bubble, but for different reasons: for expanding bubbles, it is due to the increasing importance of the driving force, while for shrinking bubbles, it is due to the increasing importance of contributions related to the BW curvature. Finally, using this approach we show how the recently proposed magnetic bubblecade memory can operate in the flow regime in the presence of a tilted sinusoidal magnetic field and at greatly reduced bubble sizes compared to the original device prototype.

  1. Nonlinear analyte concentration gradients for one-step kinetic analysis employing optical microring resonators.

    PubMed

    Marty, Michael T; Sloan, Courtney D Kuhnline; Bailey, Ryan C; Sligar, Stephen G

    2012-07-03

    Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes, and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics.
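
    The forward model underlying such gradient experiments is the standard 1:1 interaction, dR/dt = k_on·C(t)·(R_max − R) − k_off·R, with C(t) now a nonlinear ramp rather than a constant. The sketch below integrates this equation for an assumed quadratic gradient; the rate constants, gradient shape, and sampling are illustrative and not taken from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative 1:1 binding parameters (not taken from the paper)
    k_on, k_off, r_max = 1.0e5, 1.0e-3, 100.0   # 1/(M s), 1/s, response units

    def analyte_concentration(t):
        """Nonlinear (quadratic) analyte gradient ramping from 0 to 50 nM."""
        return 50e-9 * (t / 600.0) ** 2

    def binding_ode(t, r):
        return k_on * analyte_concentration(t) * (r_max - r) - k_off * r

    solution = solve_ivp(binding_ode, t_span=(0.0, 600.0), y0=[0.0],
                         dense_output=True, max_step=1.0)

    # The sensor response sampled every 10 s during the gradient
    t_sample = np.arange(0.0, 601.0, 10.0)
    response = solution.sol(t_sample)[0]
    print(f"Response at end of gradient: {response[-1]:.1f} RU")
    ```

    Fitting this model to a single gradient trace, rather than to a set of constant-concentration injections, is what allows the kinetic rates and the equilibrium affinity to be extracted from one experiment.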

  2. Nonlinear Analyte Concentration Gradients for One-Step Kinetic Analysis Employing Optical Microring Resonators

    PubMed Central

    Marty, Michael T.; Kuhnline Sloan, Courtney D.; Bailey, Ryan C.; Sligar, Stephen G.

    2012-01-01

    Conventional methods to probe the binding kinetics of macromolecules at biosensor surfaces employ a stepwise titration of analyte concentrations and measure the association and dissociation to the immobilized ligand at each concentration level. It has previously been shown that kinetic rates can be measured in a single step by monitoring binding as the analyte concentration increases over time in a linear gradient. We report here the application of nonlinear analyte concentration gradients for determining kinetic rates and equilibrium binding affinities in a single experiment. A versatile nonlinear gradient maker is presented, which is easily applied to microfluidic systems. Simulations validate that accurate kinetic rates can be extracted for a wide range of association and dissociation rates, gradient slopes and curvatures, and with models for mass transport. The nonlinear analyte gradient method is demonstrated with a silicon photonic microring resonator platform to measure prostate specific antigen-antibody binding kinetics. PMID:22686186

  3. Application of the Rangeland Hydrology and Erosion Model to Ecological Site Descriptions and Management

    USDA-ARS?s Scientific Manuscript database

    The utility of Ecological Site Descriptions (ESDs) and State-and-Transition Models (STMs) concepts in guiding rangeland management hinges on their ability to accurately describe and predict community dynamics and the associated consequences. For many rangeland ecosystems, plant community dynamics ar...

  4. Joining X-Ray to Lensing: An Accurate Combined Analysis of MACS J0416.1-2403

    NASA Astrophysics Data System (ADS)

    Bonamigo, M.; Grillo, C.; Ettori, S.; Caminha, G. B.; Rosati, P.; Mercurio, A.; Annunziatella, M.; Balestra, I.; Lombardi, M.

    2017-06-01

    We present a novel approach for a combined analysis of X-ray and gravitational lensing data and apply this technique to the merging galaxy cluster MACS J0416.1-2403. The method exploits the information on the intracluster gas distribution that comes from a fit of the X-ray surface brightness and then includes the hot gas as a fixed mass component in the strong-lensing analysis. With our new technique, we can separate the collisional from the collision-less diffuse mass components, thus obtaining a more accurate reconstruction of the dark matter distribution in the core of a cluster. We introduce an analytical description of the X-ray emission coming from a set of dual pseudo-isothermal elliptical mass distributions, which can be directly used in most lensing softwares. By combining Chandra observations with Hubble Frontier Fields imaging and Multi Unit Spectroscopic Explorer spectroscopy in MACS J0416.1-2403, we measure a projected gas-to-total mass fraction of approximately 10% at 350 kpc from the cluster center. Compared to the results of a more traditional cluster mass model (diffuse halos plus member galaxies), we find a significant difference in the cumulative projected mass profile of the dark matter component and that the dark matter over total mass fraction is almost constant, out to more than 350 kpc. In the coming era of large surveys, these results show the need of multiprobe analyses for detailed dark matter studies in galaxy clusters.

  5. Ram Pressure Stripping Made Easy: An Analytical Approach

    NASA Astrophysics Data System (ADS)

    Köppen, J.; Jáchym, P.; Taylor, R.; Palouš, J.

    2018-06-01

    The removal of gas by ram pressure stripping of galaxies is treated by a purely kinematic description. The solution has two asymptotic limits: if the duration of the ram pressure pulse exceeds the period of vertical oscillations perpendicular to the galactic plane, the commonly used quasi-static criterion of Gunn & Gott is obtained which uses the maximum ram pressure that the galaxy has experienced along its orbit. For shorter pulses the outcome depends on the time-integrated ram pressure. This parameter pair fully describes the gas mass fraction that is stripped from a given galaxy. This approach closely reproduces results from SPH simulations. We show that typical galaxies follow a very tight relation in this parameter space corresponding to a pressure pulse length of about 300 Myr. Thus, the Gunn & Gott criterion provides a good description for galaxies in larger clusters. Applying the analytic description to a sample of 232 Virgo galaxies from the GoldMine database, we show that the ICM provides indeed the ram pressures needed to explain the deficiencies. We also can distinguish current and past strippers, including objects whose stripping state was unknown.
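
    For reference, the quasi-static criterion recovered in the long-pulse limit is the classical Gunn & Gott (1972) condition, quoted below in its commonly used form (Σ⋆ and Σ_gas are the stellar and gas surface densities at radius R); the exact notation of the paper may differ.

    ```latex
    % Quasi-static stripping criterion of Gunn & Gott (1972), the limit recovered
    % for long ram-pressure pulses: gas at galactocentric radius R is removed when
    % the peak ram pressure exceeds the gravitational restoring force per unit area.
    \rho_{\mathrm{ICM}}\, v_{\mathrm{gal}}^{2}
      \;>\; 2\pi G\, \Sigma_{\star}(R)\, \Sigma_{\mathrm{gas}}(R)
    ```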

  6. Accurate analytic solution of chemical master equations for gene regulation networks in a single cell

    NASA Astrophysics Data System (ADS)

    Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun

    2018-01-01

    Studying gene regulation networks in a single cell is an important, interesting, and active research topic in molecular biology. Such processes can be described by chemical master equations (CMEs). We propose a Hamilton-Jacobi equation method with finite-size corrections to solve such CMEs accurately in the intermediate switching region, where the switching rate is comparable to the fast protein production rate. We applied this approach to a model of self-regulating proteins [H. Ge et al., Phys. Rev. Lett. 114, 078101 (2015), 10.1103/PhysRevLett.114.078101] and found that, as a parameter related to the inducer concentration increases, the protein probability distribution changes from unimodal to bimodal and then back to unimodal, consistent with the phenotype switching observed in a single cell.
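
    The sketch below is not the Hamilton-Jacobi method of the paper; it is only a direct Gillespie simulation of a toy on/off gene with protein production and degradation, the kind of system whose exact description is a chemical master equation. The rate constants are arbitrary.

        # Sketch: direct Gillespie simulation of a toy gene-switching model whose exact
        # description is a chemical master equation. This is NOT the Hamilton-Jacobi
        # method of the paper; it only illustrates the kind of system being solved.
        import numpy as np

        rng = np.random.default_rng(0)

        def gillespie(t_end, k_on=0.02, k_off=0.02, k_prod=5.0, k_deg=0.05):
            """Two-state gene: OFF<->ON switching; protein made only when ON, degraded always."""
            t, gene_on, n = 0.0, 0, 0
            while t < t_end:
                rates = np.array([
                    k_on if not gene_on else 0.0,   # gene turns on
                    k_off if gene_on else 0.0,      # gene turns off
                    k_prod if gene_on else 0.0,     # protein production
                    k_deg * n,                      # protein degradation
                ])
                total = rates.sum()
                t += rng.exponential(1.0 / total)
                r = rng.choice(4, p=rates / total)
                if r == 0:
                    gene_on = 1
                elif r == 1:
                    gene_on = 0
                elif r == 2:
                    n += 1
                else:
                    n -= 1
            return n

        samples = [gillespie(200.0) for _ in range(100)]
        print("mean protein copy number:", np.mean(samples))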

  7. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology are examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity, and the continuous strain gradients are obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load, which has an exact analytic solution that serves as a measure of the goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as those found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  8. Taxonomy of Macromotettixoides with the description of a new species (Tetrigidae, Metrodorinae)

    PubMed Central

    Zha, Ling-Sheng; Yu, Feng-Ming; Boonmee, Saranyaphat; Eungwanichayapant, Prapassorn D.; Wen, Ting-Chi

    2017-01-01

    Descriptions of the flying organs and generic characteristics of the genus Macromotettixoides Zheng, Wei & Jiang are currently imprecise. Macromotettixoides is reviewed and compared with allied genera; a re-description and a determination key to Macromotettixoides are provided. Macromotettixoides parvula Zha & Wen, sp. n. from the Guizhou Karst Region, China, is described and illustrated with photographs, and observations on the ecology and habits of the new species are recorded. Four species currently placed in Hyboella Hancock are transferred to Macromotettixoides. Variations of the flying organs and tegminal sinus in the Tetrigidae are discussed, which will help to describe them accurately. PMID:28228664

  9. Analytical prediction of digital signal crosstalk of FCC

    NASA Technical Reports Server (NTRS)

    Belleisle, A. P.

    1972-01-01

    The results are presented of a study effort whose aim was the development of accurate means of analyzing and predicting signal crosstalk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, the minimum set of parameter measurements required for application of the model is presented. Comparisons between the crosstalk predicted by this model and actual measured crosstalk are shown for a six-conductor ribbon cable.

  10. Integrable perturbed magnetic fields in toroidal geometry: An exact analytical flux surface label for large aspect ratio

    NASA Astrophysics Data System (ADS)

    Kallinikos, N.; Isliker, H.; Vlahos, L.; Meletlidou, E.

    2014-06-01

    An analytical description of magnetic islands is presented for the typical case of a single perturbation mode introduced to tokamak plasma equilibrium in the large aspect ratio approximation. Following the Hamiltonian structure directly in terms of toroidal coordinates, the well known integrability of this system is exploited, laying out a precise and practical way for determining the island topology features, as required in various applications, through an analytical and exact flux surface label.

  11. Integrable perturbed magnetic fields in toroidal geometry: An exact analytical flux surface label for large aspect ratio

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallinikos, N.; Isliker, H.; Vlahos, L.

    2014-06-15

    An analytical description of magnetic islands is presented for the typical case of a single perturbation mode introduced to tokamak plasma equilibrium in the large aspect ratio approximation. Following the Hamiltonian structure directly in terms of toroidal coordinates, the well known integrability of this system is exploited, laying out a precise and practical way for determining the island topology features, as required in various applications, through an analytical and exact flux surface label.

  12. Accurate inspiral-merger-ringdown gravitational waveforms for nonspinning black-hole binaries including the effect of subdominant modes

    NASA Astrophysics Data System (ADS)

    Mehta, Ajit Kumar; Mishra, Chandra Kant; Varma, Vijay; Ajith, Parameswaran

    2017-12-01

    We present an analytical waveform family describing gravitational waves (GWs) from the inspiral, merger, and ringdown of nonspinning black-hole binaries including the effect of several nonquadrupole modes [(ℓ=2, m=±1), (ℓ=3, m=±3), (ℓ=4, m=±4), apart from (ℓ=2, m=±2)]. We first construct spin-weighted spherical harmonic modes of hybrid waveforms by matching numerical-relativity simulations (with mass ratios 1-10) describing the late inspiral, merger, and ringdown of the binary with post-Newtonian/effective-one-body waveforms describing the early inspiral. An analytical waveform family is constructed in the frequency domain by modeling the Fourier transform of the hybrid waveforms, making use of analytical functions inspired by perturbative calculations. The resulting ready-to-use waveforms are highly faithful (unfaithfulness ≃ 10⁻⁴ to 10⁻²) for observation of GWs from nonspinning black-hole binaries and are extremely inexpensive to generate.

  13. Analytical methods for quantitation of prenylated flavonoids from hops.

    PubMed

    Nikolić, Dejan; van Breemen, Richard B

    2013-01-01

    The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in the possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can be chemically classified as prenylated chalcones and prenylated flavanones. Among the chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among the flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in the medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and determination of the pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. It covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as part of a larger group of analytes, and also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach.

  14. Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry

    PubMed Central

    Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui

    2014-01-01

    Matrix-assisted laser desorption/ionization (MALDI) has proven an effective tool for fast and accurate determination of many molecules. However, detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules in biological and clinical samples. To address this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where the improved sensitivity can be exploited to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample at a concentration close to the optimum. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. We also showed that TAD could achieve LOD improvements averaging 3-fold in a simple sample and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355
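
    As background to the LOD figures quoted above, the following sketch shows a generic (ICH-style) LOD estimate from a calibration line, LOD ~ 3.3*sigma/slope. The data are invented and this is not the TAD procedure itself.

        # Generic LOD estimate from a calibration line (ICH-style LOD ~ 3.3*sigma/slope).
        # This is not the TAD procedure; it only shows how an LOD figure of merit is
        # typically obtained from signal-vs-concentration data.
        import numpy as np

        conc   = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])      # spiked concentration (a.u.)
        signal = np.array([0.8, 2.1, 3.2, 5.9, 11.8, 23.5])    # made-up MALDI peak intensities

        slope, intercept = np.polyfit(conc, signal, 1)
        residuals = signal - (slope * conc + intercept)
        sigma = residuals.std(ddof=2)                           # residual std (2 fitted params)

        lod = 3.3 * sigma / slope
        print(f"slope={slope:.2f}, sigma={sigma:.2f}, estimated LOD ~ {lod:.2f} (a.u.)")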

  15. Fluid dynamics of coarctation of the aorta: analytical solution, in vitro validation and in vivo evaluation

    NASA Astrophysics Data System (ADS)

    Keshavarz-Motamed, Zahra

    2015-11-01

    Coarctation of the aorta (COA) is a congenital heart disease corresponding to a narrowing of the aorta. Cardiac catheterization is considered the reference standard for definitive evaluation of COA severity, based on the peak-to-peak trans-coarctation pressure gradient (PtoP TCPG) and the instantaneous systolic value of the trans-COA pressure gradient (TCPG). However, invasive cardiac catheterization may carry high risk, given that patients with COA commonly undergo multiple follow-up catheterizations. The objective of this study is to present an analytical description of the COA that estimates PtoP TCPG and TCPG without the need for high-risk invasive data collection. Coupled Navier-Stokes and elastic deformation equations were solved analytically to estimate TCPG and PtoP TCPG. The results were validated against data measured in vitro (e.g., 90% COA: TCPG root mean squared error (RMSE) = 3.93 mmHg; PtoP TCPG RMSE = 7.9 mmHg). Moreover, the PtoP TCPG estimated from the suggested analytical description was validated using clinical data in twenty patients with COA (maximum RMSE: 8.3 mmHg). Very good correlation and concordance were found between the TCPG and PtoP TCPG obtained from the analytical formulation and the in vitro and in vivo data. The suggested methodology can be considered an alternative to cardiac catheterization and can help prevent its risks.

  16. Surrogate matrix and surrogate analyte approaches for definitive quantitation of endogenous biomolecules.

    PubMed

    Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L

    2012-10-01

    Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
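
    The classical standard-addition calculation that underlies such parallelism checks can be sketched as follows, with invented numbers: fit the signal against the added concentration and read the endogenous concentration from the magnitude of the x-intercept.

        # Sketch of classical standard addition: spike increasing amounts of analyte into
        # aliquots of the sample, fit signal vs added concentration, and read the endogenous
        # concentration from the x-intercept. Numbers are invented for illustration.
        import numpy as np

        added  = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # spiked concentration (uM)
        signal = np.array([12.1, 17.9, 24.2, 36.0, 59.8]) # instrument response (a.u.)

        slope, intercept = np.polyfit(added, signal, 1)
        c_endogenous = intercept / slope                   # |x-intercept| of the fitted line
        print(f"estimated endogenous concentration ~ {c_endogenous:.1f} uM")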

  17. pyJac: Analytical Jacobian generator for chemical kinetics

    NASA Astrophysics Data System (ADS)

    Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen

    2017-06-01

    Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as to predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, reactive-flow simulations generally rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, these are approximated numerically by finite differences, but for larger chemical kinetic models this poses significant computational demands, since the number of chemical source term evaluations scales with the square of the species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (with species counts ranging from 13 to 360) by showing agreement within 0.001% of matrices obtained via automatic differentiation. We then demonstrate the performance achievable on CPUs and GPUs using py
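
    The verification strategy quoted above (comparing generated Jacobians against an independent reference) can be illustrated at toy scale. The sketch below hand-codes the analytical Jacobian of a two-step first-order system A -> B -> C and checks it against finite differences; it is not pyJac output and uses no pyJac API.

        # Toy contrast between an analytical Jacobian and finite differences for a small
        # kinetic system A -> B -> C (first order). This is not pyJac output; it only
        # illustrates why derived Jacobians are cross-checked against finite differences.
        import numpy as np

        k1, k2 = 2.0, 0.5

        def rhs(y):
            a, b, c = y
            return np.array([-k1 * a, k1 * a - k2 * b, k2 * b])

        def jac_analytic(y):
            return np.array([[-k1, 0.0, 0.0],
                             [ k1, -k2, 0.0],
                             [0.0,  k2, 0.0]])

        def jac_fd(y, eps=1e-7):
            n = y.size
            J = np.zeros((n, n))
            f0 = rhs(y)
            for j in range(n):
                yp = y.copy()
                yp[j] += eps
                J[:, j] = (rhs(yp) - f0) / eps
            return J

        y = np.array([1.0, 0.3, 0.1])
        err = np.abs(jac_analytic(y) - jac_fd(y)).max()
        print(f"max |analytic - finite difference| = {err:.2e}")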

  18. Mars Analytical Microimager

    NASA Astrophysics Data System (ADS)

    Batory, Krzysztof J.; Govindjee; Andersen, Dale; Presley, John; Lucas, John M.; Sears, S. Kelly; Vali, Hojatollah

    Unambiguous detection of extraterrestrial nitrogenous hydrocarbon microbiology requires an instrument that can both recognize potential biogenic specimens and successfully discriminate them from their geochemical settings. Such detection should ideally be in situ and should not jeopardize other experiments by altering samples. Taken individually, most biomarkers are inconclusive. For example, since amino acids can be synthesized abiotically they are not always considered reliable biomarkers, and an enantiomeric imbalance, which is characteristic of all terrestrial life, may be questioned because chirality can also be altered abiotically. However, current scientific understanding holds that aggregates of identical proteins or proteinaceous complexes, with their well-defined amino acid residue sequences, are indisputable biomarkers. Our paper describes the Mars Analytical Microimager, an instrument for the simultaneous imaging of generic autofluorescent biomarkers and overall morphology. Autofluorescence from the ultraviolet to the near-infrared is emitted by all known terrestrial biology, often as consistent complex bands uncharacteristic of abiotic mineral luminescence. The MAM acquires morphology, and even sub-micron morphogenesis, at a 3-centimeter working distance with resolution approaching that of a laser scanning microscope. Luminescence is simultaneously collected via a 2.5-micron aperture, thereby permitting accurate correlation of multi-dimensional optical behavior with specimen morphology. A variable-wavelength excitation source and photospectrometer serve to obtain steady-state and excitation spectra of biotic and luminescent abiotic sources. We believe this is the first time instrumentation for detecting hydrated or desiccated microbiology non-destructively in situ has been demonstrated. We have obtained excellent preliminary detection of biota and inorganic matrix discrimination from terrestrial polar analogues, and perimetric morphology of individual magnetotactic bacteria. Proposed

  19. Analytic integrable systems: Analytic normalization and embedding flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang

    In this paper we mainly study the existence of analytic normalizations and the normal forms of finite-dimensional complete analytic integrable dynamical systems. More precisely, we prove that any complete analytic integrable diffeomorphism F(x)=Bx+f(x) in (C^n,0), with B having no eigenvalue of modulus 1 and f(x)=O(|x|²), is locally analytically conjugate to its normal form. We also prove that any complete analytic integrable differential system x˙=Ax+f(x) in (C^n,0), with A having nonzero eigenvalues and f(x)=O(|x|²), is locally analytically conjugate to its normal form. Furthermore, we prove that any complete analytic integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytic integrable flow. We note that parts of our results improve those of Moser in J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692, and of Poincaré in H. Poincaré, Sur l'intégration des équations différentielles du premier order et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239. They also improve the results in Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092, in the sense that the linear part of the systems can be nonhyperbolic, and the result in N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228, in that our paper presents the concrete expression of the normal form in a restricted case.

  20. Molecular Structures and Momentum Transfer Cross Sections: The Influence of the Analyte Charge Distribution.

    PubMed

    Young, Meggie N; Bleiholder, Christian

    2017-04-01

    Structure elucidation by ion mobility spectrometry-mass spectrometry (IMS-MS) methods is based on the comparison of an experimentally measured momentum transfer cross-section to cross-sections calculated for model structures. Thus, it is imperative that the calculated cross-section be accurate. However, it is not fully understood how important it is to accurately model the charge distribution of an analyte ion when calculating momentum transfer cross-sections. Here, we calculate and compare momentum transfer cross-sections for carbon clusters that differ in mass, charge state, and mode of charge distribution, and we vary the temperature and polarizability of the buffer gas. Our data indicate that the detailed distribution of the ion charge density is intimately linked to the contribution of glancing collisions to the momentum transfer cross-section. The data suggest that analyte ions with molecular mass ~3 kDa or momentum transfer cross-section of 400-500 Å² would be significantly influenced by the charge distribution in nitrogen buffer gas. Our data further suggest that accurate structure elucidation on the basis of IMS-MS data measured in nitrogen buffer gas must account for the molecular charge distribution even for systems as large as C960 (~12 kDa) when localized charges are present and/or measurements are conducted at cryogenic temperatures. Finally, our data underscore that accurate structure elucidation is unlikely if ion mobility data recorded in one buffer gas are converted to other buffer gases when the electronic properties of the buffer gases differ.

  1. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values; both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory-prepared or determined reference values. Instead, an analyte pure-component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can come from different sources, including pure-component interference samples, blanks, and constant-analyte samples. The approach is also applicable to calibration maintenance when the analyte pure-component spectrum is measured under one set of conditions and the nonanalyte spectra are measured under new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows reference samples to be included when such samples are available.
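
    For orientation, the reference-based ridge regression (RR) calibration that PCTR is compared against can be sketched on synthetic spectra as below; the band shapes, concentrations and regularization value are invented, and this is not the PCTR algorithm itself.

        # Generic ridge-regression (RR) calibration on synthetic spectra, the kind of
        # reference-based model PCTR is compared against in the paper (this is not PCTR).
        import numpy as np

        rng = np.random.default_rng(1)
        wavelengths = np.linspace(0, 1, 200)
        pure_analyte     = np.exp(-((wavelengths - 0.4) / 0.05) ** 2)   # made-up band
        pure_interferent = np.exp(-((wavelengths - 0.6) / 0.08) ** 2)

        # Calibration set: known analyte reference values plus interferent and noise.
        c_analyte = rng.uniform(0.1, 1.0, 30)
        c_interf  = rng.uniform(0.1, 1.0, 30)
        X = np.outer(c_analyte, pure_analyte) + np.outer(c_interf, pure_interferent)
        X += 0.01 * rng.standard_normal(X.shape)

        lam = 1e-3
        b = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ c_analyte)

        # Predict a new sample.
        x_new = 0.55 * pure_analyte + 0.3 * pure_interferent
        print(f"predicted analyte level: {x_new @ b:.3f} (true 0.550)")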

  2. Analytical fitting model for rough-surface BRDF.

    PubMed

    Renhorn, Ingmar G E; Boreman, Glenn D

    2008-08-18

    A physics-based model is developed for rough surface BRDF, taking into account angles of incidence and scattering, effective index, surface autocovariance, and correlation length. Shadowing is introduced on surface correlation length and reflectance. Separate terms are included for surface scatter, bulk scatter and retroreflection. Using the FindFit function in Mathematica, the functional form is fitted to BRDF measurements over a wide range of incident angles. The model has fourteen fitting parameters; once these are fixed, the model accurately describes scattering data over two orders of magnitude in BRDF without further adjustment. The resulting analytical model is convenient for numerical computations.
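
    The fitting workflow can be illustrated with a far simpler stand-in model (a diffuse constant plus one Gaussian specular lobe) fitted with scipy.optimize.curve_fit on synthetic data; the paper's actual fourteen-parameter physics-based model and its Mathematica FindFit implementation are not reproduced here.

        # Much simplified stand-in for the paper's 14-parameter model: fit a diffuse term
        # plus a Gaussian specular lobe to synthetic BRDF-vs-scatter-angle data.
        import numpy as np
        from scipy.optimize import curve_fit

        def brdf_model(theta_s, kd, ks, theta0, width):
            """Diffuse constant plus a specular lobe centred near the specular direction."""
            return kd + ks * np.exp(-((theta_s - theta0) / width) ** 2)

        theta = np.linspace(-80, 80, 161)                       # scatter angle (deg)
        true = brdf_model(theta, 0.05, 2.0, 30.0, 8.0)
        data = true * (1 + 0.05 * np.random.default_rng(2).standard_normal(theta.size))

        p0 = [0.1, 1.0, 25.0, 10.0]                             # initial guess
        popt, pcov = curve_fit(brdf_model, theta, data, p0=p0)
        print("fitted [kd, ks, theta0, width]:", np.round(popt, 3))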

  3. Analytical gradients for MP2, double hybrid functionals, and TD‐DFT with polarizable embedding described by fluctuating charges

    PubMed Central

    Carnimeo, Ivan; Cappelli, Chiara

    2015-01-01

    A polarizable quantum mechanics (QM)/ molecular mechanics (MM) approach recently developed for Hartree–Fock (HF) and Kohn–Sham (KS) methods has been extended to energies and analytical gradients for MP2, double hybrid functionals, and TD‐DFT models, thus allowing the computation of equilibrium structures for excited electronic states together with more accurate results for ground electronic states. After a detailed presentation of the theoretical background and of some implementation details, a number of test cases are analyzed to show that the polarizable embedding model based on fluctuating charges (FQ) is remarkably more accurate than the corresponding electronic embedding based on a fixed charge (FX) description. In particular, a set of electronegativities and hardnesses has been optimized for interactions between QM and FQ regions together with new repulsion–dispersion parameters. After validation of both the numerical implementation and of the new parameters, absorption electronic spectra have been computed for representative model systems including vibronic effects. The results show remarkable agreement with full QM computations and significant improvement with respect to the corresponding FX results. The last part of the article provides some hints about computation of solvatochromic effects on absorption spectra in aqueous solution as a function of the number of FQ water molecules and on the use of FX external shells to improve the convergence of the results. © 2015 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc. PMID:26399473

  4. School Behind Bars--A Descriptive Overview of Correctional Education in the American Prison System.

    ERIC Educational Resources Information Center

    Syracuse Univ. Research Corp., NY. Policy Inst.

    This report, intended to be a descriptive yet analytical overview of correctional education programs, is organized into six chapters. Chapter one discusses the philosophical aspects (pro and con) of prisoner education. Chapter two traces the history of prisoner education from the roots of its beginning to the present. Chapter three presents the…

  5. Introducing GAMER: A Fast and Accurate Method for Ray-tracing Galaxies Using Procedural Noise

    NASA Astrophysics Data System (ADS)

    Groeneboom, N. E.; Dahle, H.

    2014-03-01

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized at different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  6. HGVS Recommendations for the Description of Sequence Variants: 2016 Update.

    PubMed

    den Dunnen, Johan T; Dalgleish, Raymond; Maglott, Donna R; Hart, Reece K; Greenblatt, Marc S; McGowan-Jordan, Jean; Roux, Anne-Francoise; Smith, Timothy; Antonarakis, Stylianos E; Taschner, Peter E M

    2016-06-01

    The consistent and unambiguous description of sequence variants is essential to report and exchange information on the analysis of a genome. In particular, DNA diagnostics critically depends on accurate and standardized description and sharing of the variants detected. The sequence variant nomenclature system proposed in 2000 by the Human Genome Variation Society has been widely adopted and has developed into an internationally accepted standard. The recommendations are currently commissioned through a Sequence Variant Description Working Group (SVD-WG) operating under the auspices of three international organizations: the Human Genome Variation Society (HGVS), the Human Variome Project (HVP), and the Human Genome Organization (HUGO). Requests for modifications and extensions go through the SVD-WG following a standard procedure including a community consultation step. Version numbers are assigned to the nomenclature system to allow users to specify the version used in their variant descriptions. Here, we present the current recommendations, HGVS version 15.11, and briefly summarize the changes that were made since the 2000 publication. Most focus has been on removing inconsistencies and tightening definitions allowing automatic data processing. An extensive version of the recommendations is available online, at http://www.HGVS.org/varnomen. © 2016 WILEY PERIODICALS, INC.

  7. Concurrence of big data analytics and healthcare: A systematic review.

    PubMed

    Mehta, Nishita; Pandit, Anil

    2018-06-01

    The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges to its adoption, and to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English-language literature from January 2013 to January 2018 were considered, and descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected as per the inclusion criteria and analyzed. The analysis of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review study unveils that there is a paucity of information on evidence of real-world use of

  8. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
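
    A minimal sketch of fitting a pairwise maximum entropy (Ising-like) model is given below for a handful of binarized regions, using exact enumeration of all 2^N states and simple gradient ascent on the moment-matching conditions. The surrogate data and learning settings are invented; this is not the authors' fMRI pipeline.

        # Minimal pairwise maximum entropy (Ising-like) fit for a few binarized ROIs,
        # matching first and second moments by gradient ascent with exact enumeration
        # of all 2^N states (feasible only for small N). Surrogate data, not fMRI.
        import numpy as np
        from itertools import product

        rng = np.random.default_rng(3)
        N = 5
        data = (rng.random((2000, N)) < 0.4).astype(float)      # surrogate binarized activity

        states = np.array(list(product([0.0, 1.0], repeat=N)))  # all 2^N binary states

        def model_stats(h, J):
            """Return model <s_i> and <s_i s_j> under P(s) ~ exp(h.s + 0.5 s.J.s)."""
            E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
            p = np.exp(E - E.max())
            p /= p.sum()
            mean = p @ states
            corr = states.T @ (states * p[:, None])
            return mean, corr

        emp_mean = data.mean(axis=0)
        emp_corr = data.T @ data / data.shape[0]

        h = np.zeros(N)
        J = np.zeros((N, N))
        for _ in range(2000):                                   # simple gradient ascent
            m, c = model_stats(h, J)
            h += 0.1 * (emp_mean - m)
            dJ = 0.1 * (emp_corr - c)
            np.fill_diagonal(dJ, 0.0)                           # self-terms absorbed into h
            J += dJ                                             # dJ is already symmetric

        m, c = model_stats(h, J)
        print("max moment mismatch:", np.abs(emp_mean - m).max(), np.abs(emp_corr - c).max())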

  9. Guided-inquiry laboratory experiments to improve students' analytical thinking skills

    NASA Astrophysics Data System (ADS)

    Wahyuni, Tutik S.; Analita, Rizki N.

    2017-12-01

    This study aims to improve the quality of experiment implementation and the analytical thinking skills of undergraduate students through guided-inquiry laboratory experiments. The study was a classroom action research project conducted in three cycles, carried out with 38 undergraduate students in the second semester of the Biology Education Department of the State Islamic Institute (SII) of Tulungagung, as part of a Chemistry for Biology course. The research instruments were lesson plans, learning observation sheets and the undergraduate students' experimental procedures. Research data were analyzed using a quantitative-descriptive method, and the increase in analytical thinking skills was measured using the normalized gain score and a paired t-test. The results showed that the guided-inquiry laboratory experiment model was able to improve both the quality of experiment implementation and the analytical thinking skills. The N-gain score for analytical thinking skills increased, although only by 0.03 (a low-gain category), as indicated by the experimental reports. Some undergraduate students had difficulty detecting the relation of one part to another and to the overall structure. The findings suggest that giving feedback on procedural knowledge and experimental reports is important, and that the experimental procedure should be revised and supplemented with scaffolding questions.

  10. MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow

    NASA Astrophysics Data System (ADS)

    Samani, N.; Kompani-Zare, M.; Barry, D. A.

    2004-01-01

    Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry, such as MODFLOW (in contrast to analytical models), generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion, and the proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
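
    The analytical solutions used for such verification can be illustrated by the Theis solution for unsteady flow to a fully penetrating well in a confined aquifer, sketched below with invented parameter values; this is the comparison target, not the coordinate-scaling method itself.

        # Theis solution for transient drawdown around a fully penetrating pumping well
        # in a confined aquifer, the kind of analytical benchmark used for verification.
        import numpy as np
        from scipy.special import exp1

        def theis_drawdown(r, t, Q, T, S):
            """Drawdown s(r,t) = Q/(4*pi*T) * W(u), with u = r^2*S/(4*T*t) and W = exp1."""
            u = r ** 2 * S / (4.0 * T * t)
            return Q / (4.0 * np.pi * T) * exp1(u)

        # Illustrative values: pumping rate (m^3/d), transmissivity (m^2/d), storativity (-).
        r = np.array([1.0, 10.0, 100.0])        # distance from the well (m)
        s = theis_drawdown(r, t=1.0, Q=500.0, T=200.0, S=1e-4)
        for ri, si in zip(r, s):
            print(f"r = {ri:6.1f} m  drawdown = {si:.3f} m")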

  11. Equations for description of nonlinear standing waves in constant-cross-sectioned resonators.

    PubMed

    Bednarik, Michal; Cervenka, Milan

    2014-03-01

    This work is focused on investigation of applicability of two widely used model equations for description of nonlinear standing waves in constant-cross-sectioned resonators. The investigation is based on the comparison of numerical solutions of these model equations with solutions of more accurate model equations whose validity has been verified experimentally in a number of published papers.

  12. Improved Detection System Description and New Method for Accurate Calibration of Micro-Channel Plate Based Instruments and Its Use in the Fast Plasma Investigation on NASA's Magnetospheric MultiScale Mission

    NASA Technical Reports Server (NTRS)

    Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.; hide

    2015-01-01

    system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. More precise calibration is highly desirable as the instruments will produce higher quality raw data that will require less post-acquisition data correction using results from in-flight pitch angle distribution measurements and ground calibration measurements. The detection system description and the fundamental concepts of this new calibration method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters and how to choose the optimum detection system operating point. This new method has been successfully applied to achieve a highly accurate calibration of the DESs and DISs of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown that, with further detailed modeling, this method can be extended for use in flight to achieve and maintain a highly accurate detection system calibration across a large number of instruments during the mission.

  13. Effects of a Training Package to Improve the Accuracy of Descriptive Analysis Data Recording

    ERIC Educational Resources Information Center

    Mayer, Kimberly L.; DiGennaro Reed, Florence D.

    2013-01-01

    Functional behavior assessment is an important precursor to developing interventions to address a problem behavior. Descriptive analysis, a type of functional behavior assessment, is effective in informing intervention design only if the gathered data accurately capture relevant events and behaviors. We investigated a training procedure to improve…

  14. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate the 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  15. Analytic model for the long-term evolution of circular Earth satellite orbits including lunar node regression

    NASA Astrophysics Data System (ADS)

    Zhu, Ting-Lei; Zhao, Chang-Yin; Zhang, Ming-Jiang

    2017-04-01

    This paper aims to obtain an analytic approximation to the evolution of circular orbits governed by the Earth's J2 and the luni-solar gravitational perturbations. Assuming that the lunar orbital plane coincides with the ecliptic plane, Allan and Cook (Proc. R. Soc. A, Math. Phys. Eng. Sci. 280(1380):97, 1964) derived an analytic solution for the orbital plane evolution of circular orbits. Using their result as an intermediate solution, we establish an approximate analytic model that takes the lunar orbital inclination and its node regression into account. Finally, an approximate analytic expression is derived that agrees well with numerical results, except for resonant cases in which the period of the reference orbit approximately equals an integer multiple (especially 1 or 2 times) of the lunar node regression period.

  16. Wideband analytical equivalent circuit for one-dimensional periodic stacked arrays.

    PubMed

    Molero, Carlos; Rodríguez-Berral, Raúl; Mesa, Francisco; Medina, Francisco; Yakovlev, Alexander B

    2016-01-01

    A wideband equivalent circuit is proposed for the accurate analysis of scattering from a set of stacked slit gratings illuminated by a plane wave with transverse magnetic or electric polarization that impinges normally or obliquely along one of the principal planes of the structure. The slit gratings are printed on dielectric slabs of arbitrary thickness, including the case of closely spaced gratings that interact by higher-order modes. A Π-circuit topology is obtained for a pair of coupled arrays, with fully analytical expressions for all the circuit elements. This equivalent Π circuit is employed as the basis to derive the equivalent circuit of finite stacks with any given number of gratings. Analytical expressions for the Brillouin diagram and the Bloch impedance are also obtained for infinite periodic stacks.
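
    The circuit bookkeeping behind such models can be sketched with generic ABCD matrices: cascade two shunt admittances and a series impedance to form a Pi network and convert the result to S21. The element values below are placeholders, not the analytical expressions derived in the paper.

        # Minimal circuit sketch: cascade the ABCD matrices of a generic Pi network (two
        # shunt admittances Y1, Y2 and a series impedance Z) and convert to S21.
        # Element values are placeholders, not the paper's analytical expressions.
        import numpy as np

        Z0 = 376.73  # free-space wave impedance used as the reference (ohms)

        def abcd_shunt(Y):
            return np.array([[1.0, 0.0], [Y, 1.0]], dtype=complex)

        def abcd_series(Z):
            return np.array([[1.0, Z], [0.0, 1.0]], dtype=complex)

        def s21_from_abcd(M, Z0):
            A, B, C, D = M[0, 0], M[0, 1], M[1, 0], M[1, 1]
            return 2.0 / (A + B / Z0 + C * Z0 + D)

        for f in np.linspace(1e9, 20e9, 5):          # frequency sweep (Hz)
            w = 2 * np.pi * f
            Y1 = Y2 = 1j * w * 5e-15                 # placeholder shunt capacitances (5 fF)
            Z  = 1j * w * 1e-9                       # placeholder series inductance (1 nH)
            M = abcd_shunt(Y1) @ abcd_series(Z) @ abcd_shunt(Y2)
            print(f"f = {f/1e9:5.1f} GHz  |S21| = {abs(s21_from_abcd(M, Z0)):.3f}")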

  17. Analytic double product integrals for all-frequency relighting.

    PubMed

    Wang, Rui; Pan, Minghao; Chen, Weifeng; Ren, Zhong; Zhou, Kun; Hua, Wei; Bao, Hujun

    2013-07-01

    This paper presents a new technique for real-time relighting of static scenes with all-frequency shadows from complex lighting and highly specular reflections from spatially varying BRDFs. The key idea is to depict the boundaries of visible regions using piecewise linear functions, and convert the shading computation into double product integrals—the integral of the product of lighting and BRDF on visible regions. By representing lighting and BRDF with spherical Gaussians and approximating their product using Legendre polynomials locally in visible regions, we show that such double product integrals can be evaluated in an analytic form. Given the precomputed visibility, our technique computes the visibility boundaries on the fly at each shading point, and performs the analytic integral to evaluate the shading color. The result is a real-time all-frequency relighting technique for static scenes with dynamic, spatially varying BRDFs, which can generate more accurate shadows than the state-of-the-art real-time PRT methods.
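
    One identity that makes spherical Gaussian representations convenient is that the product of two spherical Gaussians is again a spherical Gaussian; the sketch below states and numerically checks this identity for G(v; p, lambda) = exp(lambda*(v.p - 1)). It is only a building block, not the paper's double product integral evaluation.

        # Check that the product of two spherical Gaussians G(v;p,lam) = exp(lam*(v.p - 1))
        # is again a spherical Gaussian (axis, sharpness, amplitude given in closed form).
        import numpy as np

        def sg(v, p, lam):
            return np.exp(lam * (v @ p - 1.0))

        def sg_product(p1, l1, p2, l2):
            """Return (axis, sharpness, amplitude) of the product spherical Gaussian."""
            pm = l1 * p1 + l2 * p2
            lm = np.linalg.norm(pm)
            return pm / lm, lm, np.exp(lm - l1 - l2)

        p1 = np.array([0.0, 0.0, 1.0])
        p2 = np.array([0.0, np.sin(0.5), np.cos(0.5)])
        l1, l2 = 20.0, 35.0

        axis, lam, amp = sg_product(p1, l1, p2, l2)

        rng = np.random.default_rng(4)
        v = rng.standard_normal((5, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)          # random unit directions
        direct  = np.array([sg(vi, p1, l1) * sg(vi, p2, l2) for vi in v])
        product = np.array([amp * sg(vi, axis, lam) for vi in v])
        print("max relative error:", np.abs(direct - product).max() / direct.max())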

  18. How directions of route descriptions influence orientation specificity: the contribution of spatial abilities.

    PubMed

    Meneghetti, Chiara; Muffato, Veronica; Varotto, Diego; De Beni, Rossana

    2017-03-01

    Previous studies found that mental representations of route descriptions are north-up oriented when the egocentric experience (given by the protagonist's initial view) is congruent with the global reference system. This study examines: (a) the development and maintenance of representations derived from descriptions when the egocentric and global reference systems are congruent or incongruent; and (b) how spatial abilities modulate these representations. Sixty participants (in two groups of 30) heard route descriptions of a protagonist's moves, starting from the bottom of a layout and heading mainly northwards (SN description) in one group, and starting from the top and heading south (NS description, with the egocentric view facing in the opposite direction to canonical north) in the other. Description recall was tested with map drawing (after hearing the description a first and a second time, i.e. at Time 1 and Time 2) and with South-North (SN) or North-South (NS) pointing tasks; objective spatial tasks were also administered. The results showed that: (a) the drawings were more rotated for NS than for SN descriptions, and recall was better at Time 2 than at Time 1 for both types of description; SN pointing was more accurate than NS pointing for the SN description, while SN and NS pointing accuracy did not differ for the NS description; and (b) spatial (rotation) abilities were related to recall accuracy for both types of description, but more so for the NS ones. Overall, our results showed that the way in which spatial information is conveyed (with or without congruence between the egocentric and global reference systems) and spatial abilities both influence the development and maintenance of mental representations.

  19. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control.

    PubMed

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-02-08

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant's intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves the purpose to combine the best of the two modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms.
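
    The hybrid idea can be illustrated with a scalar toy problem: an analytical model captures the dominant physics and a small learned error model (here just a cubic polynomial fit to the residual) absorbs the unmodeled part. The plant, model and noise below are invented and stand in for neither of the paper's robot platforms.

        # Toy illustration of hybrid modeling: analytical model + learned error model.
        import numpy as np

        rng = np.random.default_rng(5)
        q = np.linspace(-np.pi, np.pi, 200)                    # joint angle

        def plant_torque(q):
            """'True' plant: gravity-like term plus unmodeled friction/elasticity."""
            return 3.0 * np.sin(q) + 0.4 * np.tanh(5.0 * q) + 0.05 * q ** 3

        def analytical_model(q):
            """Known rigid-body part only."""
            return 3.0 * np.sin(q)

        tau_measured = plant_torque(q) + 0.02 * rng.standard_normal(q.size)
        residual = tau_measured - analytical_model(q)
        coeffs = np.polyfit(q, residual, 3)                    # learned error model

        hybrid = analytical_model(q) + np.polyval(coeffs, q)
        print("RMS error, analytical only:", np.sqrt(np.mean((tau_measured - analytical_model(q)) ** 2)))
        print("RMS error, hybrid         :", np.sqrt(np.mean((tau_measured - hybrid) ** 2)))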

  20. Lessons learned from the study of masturbation and its comorbidity with psychiatric disorders in children: The first analytic study.

    PubMed

    Tashakori, Ashraf; Safavi, Atefeh; Neamatpour, Sorour

    2017-04-01

    The main source of information about children's masturbation has been case reports, and consistent, accurate information is lacking. This study aimed to determine the prevalence and underlying factors of masturbation and its comorbidity with psychiatric disorders in children. In this descriptive-analytical study, 98 children referred to the Pediatrics Clinic of the Psychiatric Ward, Golestan Hospital, Ahvaz, Southwest Iran, were selected by convenience sampling in 2014. Disorders were diagnosed by clinical interview based on the fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) and the Child Symptom Inventory-4 (CSI-4). We also used a questionnaire containing demographic information about the patients and their families as well as other data. Data were analyzed using descriptive statistics and the chi-square test with SPSS software version 16. Of the children who participated in this study (most of whom were boys), 31.6% exhibited masturbation. Phobias (p=0.002), separation anxiety disorder (p=0.044), generalized anxiety disorder (p=0.037), motor tics (p=0.033), stress disorder (p=0.005), oppositional defiant disorder (p=0.044), thumb sucking (p<0.001) and conduct disorder (p=0.001) were associated with masturbation. Masturbation was common in children referred to the psychiatric clinic and may be associated with oppositional defiant disorder, conduct disorder, some anxiety disorders, motor tics and other stereotypical behavior. The authors recommend more probing for psychiatric disorders in children with unusual sexual behavior.

  1. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
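
    For readers unfamiliar with the baseline technique, a compact numerical LQR example is sketched below for a placeholder two-state system using scipy; the dissertation's contribution is the analytical approximation of such closed-loop results, which is not reproduced here.

        # Compact numerical LQR example. The state-space matrices are placeholders,
        # not an actual aeroelastic aircraft model.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        A = np.array([[0.0, 1.0],
                      [-4.0, -0.2]])        # lightly damped oscillatory mode
        B = np.array([[0.0],
                      [1.0]])
        Q = np.diag([10.0, 1.0])            # state weights
        R = np.array([[1.0]])               # control weight

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)     # optimal gain, u = -K x

        print("LQR gain K:", K)
        print("open-loop eigenvalues  :", np.linalg.eigvals(A))
        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))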

  2. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.

  3. Water immersion facility general description, spacecraft design division, crew station branch

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The Water Immersion Facility provides an accurate, safe, neutral-buoyancy simulation of zero-gravity conditions for the development of equipment and procedures and for the training of crews. A detailed description is given of the following systems: (1) the water tank and support equipment; (2) the communications system; (3) the environmental control and liquid-cooled garment system (ECS/LCG); (4) the closed-circuit television system; and (5) the medical support system.

  4. Black holes by analytic continuation

    NASA Astrophysics Data System (ADS)

    Amati, D.; Russo, J. G.

    1997-07-01

    In the context of a two-dimensional exactly solvable model, the dynamics of quantum black holes is obtained by analytically continuing the description of the regime where no black hole is formed. The resulting spectrum of outgoing radiation departs from the one predicted by the Hawking model in the region where the outgoing modes arise from the horizon with Planck-order frequencies. This occurs early in the evaporation process, and the resulting physical picture is unconventional. The theory predicts that black holes will only radiate out an energy of Planck mass order, stabilizing after a transitory period. The continuation from a regime without black hole formation-accessible in the 1+1 gravity theory considered-is implicit in an S-matrix approach and suggests in this way a possible solution to the problem of information loss.

  5. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the leading shock of a shock-train in the constant area isolator section in a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For some given pressure rise across the isolator, quasi-1D equations relating to isentropic or normal shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments and both the actual and predicted locations can be calculated. Three methods of finding the shock-train location are examined, one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytic model is less accurate than the pressure threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement. This makes this method potentially useful for unstart control applications.
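
    One ingredient of such quasi-1D models is the normal-shock static pressure ratio, p2/p1 = 1 + 2*gamma/(gamma+1)*(M1^2 - 1); the sketch below inverts it to obtain the pre-shock Mach number implied by a hypothetical measured pressure rise. It is a fragment of the underlying gas dynamics, not the study's full isolator model.

        # Normal-shock static pressure ratio and its inversion for the pre-shock Mach number.
        # The measured pressure ratio below is hypothetical.
        import numpy as np

        gamma = 1.4

        def pressure_ratio(M1):
            return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 ** 2 - 1.0)

        def mach_from_pressure_ratio(pr):
            """Invert the normal-shock relation for M1 (valid for pr >= 1)."""
            return np.sqrt((pr - 1.0) * (gamma + 1.0) / (2.0 * gamma) + 1.0)

        measured_pr = 3.2                     # hypothetical isolator pressure rise p2/p1
        M1 = mach_from_pressure_ratio(measured_pr)
        print(f"p2/p1 = {measured_pr:.2f}  ->  pre-shock Mach = {M1:.2f}")
        print("check:", round(pressure_ratio(M1), 3))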

  6. Carbene footprinting accurately maps binding sites in protein-ligand and protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Manzi, Lucio; Barrow, Andrew S.; Scott, Daniel; Layfield, Robert; Wright, Timothy G.; Moses, John E.; Oldham, Neil J.

    2016-11-01

    Specific interactions between proteins and their binding partners are fundamental to life processes. The ability to detect protein complexes, and map their sites of binding, is crucial to understanding basic biology at the molecular level. Methods that employ sensitive analytical techniques such as mass spectrometry have the potential to provide valuable insights with very little material and on short time scales. Here we present a differential protein footprinting technique employing an efficient photo-activated probe for use with mass spectrometry. Using this methodology the location of a carbohydrate substrate was accurately mapped to the binding cleft of lysozyme, and in a more complex example, the interactions between a 100 kDa, multi-domain deubiquitinating enzyme, USP5 and a diubiquitin substrate were located to different functional domains. The much improved properties of this probe make carbene footprinting a viable method for rapid and accurate identification of protein binding sites utilizing benign, near-UV photoactivation.

  7. Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma

    PubMed Central

    Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.

    2013-01-01

    Background: Sputum eosinophils (Eos) are a strong predictor of airway inflammation and exacerbations, and they aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype that is potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO) or FEV1% predicted may predict airway Eos, while age, FEV1% predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but the accuracy of these surrogates in predicting airway Eos and Neu, individually or combined, is not established. Objectives: To determine whether blood Eos, FeNO, and IgE accurately predict sputum Eos, and whether age, FEV1% predicted, and blood Neu accurately predict sputum Neu. Methods: Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE, and multiple analytical techniques were utilized. Results: Despite significant associations with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1% predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found that FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos, but accurately assigned only 41% of samples. Conclusion: Despite statistically significant associations, FeNO, IgE, blood Eos and Neu, FEV1% predicted, and age are poor surrogates, separately and combined, for accurately predicting sputum eosinophils and neutrophils. PMID:23706399

  8. Analytical description of changes in the magnetic states of chromium-nickel steel under uniaxial elastic deformation

    NASA Astrophysics Data System (ADS)

    Gorkunov, E. S.; Yakushenko, E. I.; Zadvorkin, S. M.; Mushnikov, A. N.

    2017-12-01

    Dependences of magnetization and magnetic permeability of the 15KhN4D structural steel on the value of uniaxial stresses and magnetic field strength are obtained. A polynomial approximation fairly accurately describing the observed changes is proposed on the basis of experimental data.

  9. Coupling geostatistics to detailed reservoir description allows better visualization and more accurate characterization/simulation of turbidite reservoirs: Elk Hills oil field, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, M.E.; Wilson, M.L.; Wightman, J.

    1996-12-31

    The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity & permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic & petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end-product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.

  10. Coupling geostatistics to detailed reservoir description allows better visualization and more accurate characterization/simulation of turbidite reservoirs: Elk Hills oil field, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allan, M.E.; Wilson, M.L.; Wightman, J.

    1996-01-01

    The Elk Hills giant oilfield, located in the southern San Joaquin Valley of California, has produced 1.1 billion barrels of oil from Miocene and shallow Pliocene reservoirs. 65% of the current 64,000 BOPD production is from the pressure-supported, deeper Miocene turbidite sands. In the turbidite sands of the 31 S structure, large porosity and permeability variations in the Main Body B and Western 31 S sands cause problems with the efficiency of the waterflooding. These variations have now been quantified and visualized using geostatistics. The end result is a more detailed reservoir characterization for simulation. Traditional reservoir descriptions based on marker correlations, cross-sections and mapping do not provide enough detail to capture the short-scale stratigraphic heterogeneity needed for adequate reservoir simulation. These deterministic descriptions are inadequate to tie with production data as the thinly bedded sand/shale sequences blur into a falsely homogeneous picture. By studying the variability of the geologic and petrophysical data vertically within each wellbore and spatially from well to well, a geostatistical reservoir description has been developed. It captures the natural variability of the sands and shales that was lacking from earlier work. These geostatistical studies allow the geologic and petrophysical characteristics to be considered in a probabilistic model. The end-product is a reservoir description that captures the variability of the reservoir sequences and can be used as a more realistic starting point for history matching and reservoir simulation.

  11. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
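
    A minimal sketch of the fixed-length spectrum idea described above (not the authors' implementation): each barcode is mapped to a vector of k-mer counts and compared by cosine similarity. The sequences and the value of k are toy assumptions.

```python
from collections import Counter
from itertools import product
import math

def kmer_spectrum(seq, k=3, alphabet="ACGT"):
    """Fixed-length, alignment-free representation: counts of every possible k-mer."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get("".join(kmer), 0) for kmer in product(alphabet, repeat=k)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# toy barcodes (illustrative strings, not real COI sequences)
references = {
    "species_A": "ATGGCATTACGGCTATTAGGACTTATCGGA",
    "species_B": "ATGCGTCGTAGGCCTTAACGGTGACCTAGC",
}
query = "ATGGCATTACGGCTTTTAGGACTTATCGGA"   # species_A with two substitutions

spectra = {name: kmer_spectrum(s) for name, s in references.items()}
q_spec = kmer_spectrum(query)
best = max(spectra, key=lambda name: cosine(q_spec, spectra[name]))
print("query assigned to:", best)
```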

  12. Efficient alignment-free DNA barcode analytics

    PubMed Central

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-01-01

    Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305

  13. Analytical Validation of a Highly Quantitative, Sensitive, Accurate, and Reproducible Assay (HERmark) for the Measurement of HER2 Total Protein and HER2 Homodimers in FFPE Breast Cancer Tumor Specimens.

    PubMed

    Larson, Jeffrey S; Goodman, Laurie J; Tan, Yuping; Defazio-Eli, Lisa; Paquet, Agnes C; Cook, Jennifer W; Rivera, Amber; Frankson, Kristi; Bose, Jolly; Chen, Lili; Cheung, Judy; Shi, Yining; Irwin, Sarah; Kiss, Linda D B; Huang, Weidong; Utter, Shannon; Sherwood, Thomas; Bates, Michael; Weidler, Jodi; Parry, Gordon; Winslow, John; Petropoulos, Christos J; Whitcomb, Jeannette M

    2010-06-28

    We report here the results of the analytical validation of assays that measure HER2 total protein (H2T) and HER2 homodimer (H2D) expression in Formalin Fixed Paraffin Embedded (FFPE) breast cancer tumors as well as cell line controls. The assays are based on the VeraTag technology platform and are commercially available through a central CAP-accredited clinical reference laboratory. The accuracy of H2T measurements spans a broad dynamic range (2-3 logs) as evaluated by comparison with cross-validating technologies. The measurement of H2T expression demonstrates a sensitivity that is approximately 7-10 times greater than conventional immunohistochemistry (IHC) (HercepTest). The HERmark assay is a quantitative assay that sensitively and reproducibly measures continuous H2T and H2D protein expression levels and therefore may have the potential to stratify patients more accurately with respect to response to HER2-targeted therapies than current methods which rely on semiquantitative protein measurements (IHC) or on indirect assessments of gene amplification (FISH).

  14. Analytical formulation of directly modulated OOFDM signals transmitted over an IM/DD dispersive link.

    PubMed

    Sánchez, C; Ortega, B; Wei, J L; Tang, J; Capmany, J

    2013-03-25

    We provide an analytical study on the propagation effects of a directly modulated OOFDM signal through a dispersive fiber and subsequent photo-detection. The analysis includes the effects of the laser operation point and the interplay between chromatic dispersion and laser chirp. The final expression allows one to understand the physics behind the transmission of a multi-carrier signal in the presence of residual frequency modulation, and the description of the induced intermodulation distortion gives a detailed insight into the different intermodulation products which impair the recovered signal at the receiver end. Numerical comparisons between transmission simulation results and those provided by evaluating the obtained expression are carried out for different laser operation points. Results obtained by changing the fiber length, laser parameters and using single mode fiber with negative and positive dispersion are calculated in order to demonstrate the validity and versatility of the theory provided in this paper. Therefore, a novel analytical formulation is presented as a versatile tool for the description and study of IM/DD OOFDM systems with variable design parameters.

  15. Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no

    2014-03-10

    We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.

  16. Analytical and Clinical Validation of a Digital Sequencing Panel for Quantitative, Highly Accurate Evaluation of Cell-Free Circulating Tumor DNA

    PubMed Central

    Zill, Oliver A.; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A.; Divers, Stephen G.; Hoon, Dave S. B.; Kopetz, E. Scott; Lee, Jeeyun; Nikolinakos, Petros G.; Baca, Arthur M.; Kermani, Bahram G.; Eltoukhy, Helmy; Talasaz, AmirAli

    2015-01-01

    Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient’s cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing. PMID:26474073
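
    As a back-of-the-envelope illustration of what a 0.1% analytic sensitivity implies (not part of the reported validation), simple binomial sampling of cell-free DNA molecules gives the chance of capturing at least a few mutant fragments; the genome-equivalent input and read threshold below are assumptions.

```python
from math import comb

def detection_probability(n_genomes=10000, allele_fraction=0.001, min_mutant_molecules=3):
    """P(at least `min_mutant_molecules` mutant fragments are sampled from the plasma draw)."""
    f = allele_fraction
    p_too_few = sum(comb(n_genomes, k) * f**k * (1 - f)**(n_genomes - k)
                    for k in range(min_mutant_molecules))
    return 1.0 - p_too_few

for n in (2000, 5000, 10000):
    print(f"{n:5d} genome equivalents -> P(detect 0.1% variant) = {detection_probability(n):.3f}")
```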

  17. A Multiscale Red Blood Cell Model with Accurate Mechanics, Rheology, and Dynamics

    PubMed Central

    Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2010-01-01

    Abstract Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330

  18. Descriptions and identifications of strangers by youth and adult eyewitnesses.

    PubMed

    Pozzulo, Joanna D; Warren, Kelly L

    2003-04-01

    Two studies varying target gender and mode of target exposure were conducted to compare the quantity, nature, and accuracy of free recall person descriptions provided by youths and adults. In addition, the relation among age, identification accuracy, and number of descriptors reported was considered. Youths (10-14 years) reported fewer descriptors than adults. Exterior facial descriptors (e.g., hair items) were predominant and accurately reported by youths and adults. Accuracy was consistently problematic for youths when reporting body descriptors (e.g., height, weight) and interior facial features. Youths reported a similar number of descriptors when making accurate versus inaccurate identification decisions. This pattern also was consistent for adults. With target-absent lineups, the difference in the number of descriptors reported between adults and youths was greater when making a false positive versus correct rejection.

  19. Microemulsification: an approach for analytical determinations.

    PubMed

    Lima, Renato S; Shiroma, Leandro Y; Teixeira, Alvaro V N C; de Toledo, José R; do Couto, Bruno C; de Carvalho, Rogério M; Carrilho, Emanuel; Kubota, Lauro T; Gobbi, Angelo L

    2014-09-16

    water, in turn, the linear range was observed throughout the volume fraction of analyte. The best limits of detection were 0.32% v/v water to ethanol and 0.30% v/v monoethylene glycol to water. Furthermore, the accuracy was highly satisfactory. The natural gas samples provided by Petrobras exhibited color, particulate material, high ionic strength, and diverse compounds such as metals, carboxylic acids, and anions. These samples had a conductivity of up to 2630 μS cm⁻¹; the conductivity of pure monoethylene glycol was only 0.30 μS cm⁻¹. Despite such downsides, the method allowed accurate measurements while bypassing steps such as extraction, preconcentration, and dilution of the sample. In addition, the levels of robustness were promising. This parameter was evaluated by investigating the effect of (i) deviations in volumetric preparation of the dispersions and (ii) changes in temperature over the analyte contents recorded by the method.

  20. Using Learning Analytics to Enhance Student Learning in Online Courses Based on Quality Matters Standards

    ERIC Educational Resources Information Center

    Martin, Florence; Ndoye, Abdou; Wilkins, Patricia

    2016-01-01

    Quality Matters is recognized as a rigorous set of standards that guide the designer or instructor to design quality online courses. We explore how Quality Matters standards guide the identification and analysis of learning analytics data to monitor and improve online learning. Descriptive data were collected for frequency of use, time spent, and…

  1. An accurate metric for the spacetime around rotating neutron stars

    NASA Astrophysics Data System (ADS)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parametrized metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  2. Aretaeus of Cappadocia and the first description of diabetes.

    PubMed

    Laios, Konstantinos; Karamanou, Marianna; Saridaki, Zenia; Androutsos, George

    2012-01-01

    The name Aretaeus of Cappadocia has been linked with diabetes more than that of any other physician of antiquity, his texts forming a sophisticated synthesis of the previous knowledge on this disease copiously supplemented by his own observations. Gifted with a unique faculty for observing pathologic phenomena, he was able to elaborate upon earlier texts enriching them with his own original findings and numerous thoughtful reflections. Among the many diseases he dealt with, Aretaeus has bequeathed to us an outstandingly vivid and accurate description of diabetes.

  3. CAMELOT: Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox

    NASA Astrophysics Data System (ADS)

    Di Carlo, Marilena; Romero Martin, Juan Manuel; Vasile, Massimiliano

    2018-03-01

    Computational-Analytical Multi-fidElity Low-thrust Optimisation Toolbox (CAMELOT) is a toolbox for the fast preliminary design and optimisation of low-thrust trajectories. It solves highly complex combinatorial problems to plan multi-target missions characterised by long spirals including different perturbations. To do so, CAMELOT implements a novel multi-fidelity approach combining analytical surrogate modelling and accurate computational estimations of the mission cost. Decisions are then made using two optimisation engines included in the toolbox, a single-objective global optimiser, and a combinatorial optimisation algorithm. CAMELOT has been applied to a variety of case studies: from the design of interplanetary trajectories to the optimal de-orbiting of space debris and from the deployment of constellations to on-orbit servicing. In this paper, the main elements of CAMELOT are described and two examples, solved using the toolbox, are presented.

  4. Process-Hardened, Multi-Analyte Sensor for Characterizing Rocket Plume Constituents

    NASA Technical Reports Server (NTRS)

    Goswami, Kisholoy

    2011-01-01

    A multi-analyte sensor was developed that enables simultaneous detection of rocket engine combustion-product molecules in a launch-vehicle ground test stand. The sensor was developed using a pin-printing method by incorporating multiple sensor elements on a single chip. It demonstrated accurate and sensitive detection of analytes such as carbon dioxide, carbon monoxide, kerosene, isopropanol, and ethylene from a single measurement. The use of pin-printing technology enables high-volume fabrication of the sensor chip, which will ultimately eliminate the need for individual sensor calibration since many identical sensors are made in one batch. Tests were performed using a single-sensor chip attached to a fiber-optic bundle. The use of a fiber bundle allows placement of the opto-electronic readout device at a place remote from the test stand. The sensors are rugged for operation in harsh environments.

  5. Resumenes Analiticos en Education del 0001 al 0230 (Analytic Resumes in Education, from 0001 to 0230).

    ERIC Educational Resources Information Center

    Scott, Patrick B., Ed.

    1991-01-01

    REDUC is a cooperative network of some 23 associated centers in 17 Latin American and Caribbean countries. The REDUC coordinating center is located in Santiago, Chile. REDUC produces a bibliographic database containing analytical summaries (approximately 800 items annually) of the most important research studies and project descriptions in the…

  6. Analytical model of ground-state lasing phenomenon in broadband semiconductor quantum dot lasers

    NASA Astrophysics Data System (ADS)

    Korenev, Vladimir V.; Savelyev, Artem V.; Zhukov, Alexey E.; Omelchenko, Alexander V.; Maximov, Mikhail V.

    2013-05-01

    We introduce an analytical approach to the description of broadband lasing spectra of semiconductor quantum dot lasers emitting via ground-state optical transitions of quantum dots. The explicit analytical expressions describing the shape and the width of lasing spectra as well as their temperature and injection current dependences are obtained in the case of low homogeneous broadening. It is shown that in this case these dependences are determined by only two dimensionless parameters, which are the dispersion of the QD energy distribution normalized to the temperature, and the loss-to-maximum-gain ratio. The possibility of optimizing the size and structure of the laser's active region by using intentionally introduced disorder is also carefully considered.

  7. Towards First Principles-Based Prediction of Highly Accurate Electrochemical Pourbaix Diagrams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Zhenhua; Chan, Maria K. Y.; Zhao, Zhi-Jian

    2015-08-13

    Electrochemical potential/pH (Pourbaix) diagrams underpin many aqueous electrochemical processes and are central to the identification of stable phases of metals for processes ranging from electrocatalysis to corrosion. Even though standard DFT calculations are potentially powerful tools for the prediction of such diagrams, inherent errors in the description of transition metal (hydroxy)oxides, together with neglect of van der Waals interactions, have limited the reliability of such predictions for even the simplest pure metal bulk compounds, and corresponding predictions for more complex alloy or surface structures are even more challenging. In the present work, through synergistic use of a Hubbard U correction, a state-of-the-art dispersion correction, and a water-based bulk reference state for the calculations, these errors are systematically corrected. The approach describes the weak binding that occurs between hydroxyl-containing functional groups in certain compounds in Pourbaix diagrams, corrects for self-interaction errors in transition metal compounds, and reduces residual errors on oxygen atoms by preserving a consistent oxidation state between the reference state, water, and the relevant bulk phases. The strong performance is illustrated on a series of bulk transition metal (Mn, Fe, Co and Ni) hydroxides, oxyhydroxides, binary, and ternary oxides, where the corresponding thermodynamics of redox and (de)hydration are described with standard errors of 0.04 eV per (reaction) formula unit. The approach further preserves accurate descriptions of the overall thermodynamics of electrochemically-relevant bulk reactions, such as water formation, which is an essential condition for facilitating accurate analysis of reaction energies for electrochemical processes on surfaces. The overall generality and transferability of the scheme suggests that it may find useful application in the construction of a broad array of electrochemical phase diagrams, including
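
    As a generic illustration of how a single Pourbaix boundary follows from a computed reaction free energy (not the authors' corrected-DFT workflow): for a step transferring n electrons and m protons, the equilibrium potential shifts with pH by the Nernst slope. The reaction free energy and stoichiometry below are placeholders.

```python
R, T, F = 8.314, 298.15, 96485.0   # J/(mol K), K, C/mol
LN10 = 2.302585

def pourbaix_boundary(delta_g_eV, n_electrons, m_protons, pH):
    """Equilibrium potential (V vs. SHE) at a given pH for a reaction with free
    energy delta_g_eV (per formula unit, at pH 0) releasing n e- and m H+."""
    E0 = delta_g_eV / n_electrons                              # eV per electron -> volts
    slope = (R * T * LN10 / F) * (m_protons / n_electrons)     # ~0.0592 * m/n V per pH unit
    return E0 - slope * pH

# hypothetical 2 e- / 2 H+ oxidation step with deltaG = 1.5 eV at pH 0
for pH in (0, 7, 14):
    print(f"pH {pH:2d}: boundary at {pourbaix_boundary(1.5, 2, 2, pH):+.3f} V vs. SHE")
```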

  8. Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Manual revisions are presented for the modified and expanded STEAP series. The STEAP 2 is composed of three independent but related programs: NOMAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.

  9. A general analytical platform and strategy in search for illegal drugs.

    PubMed

    Johansson, Monika; Fransson, Dick; Rundlöf, Torgny; Huynh, Ngoc-Hang; Arvidsson, Torbjörn

    2014-11-01

    An effective screening procedure to identify and quantify active pharmaceutical substances in suspected illegal medicinal products is described. The analytical platform, consisting of accurate mass determination with liquid chromatography time-of-flight mass spectrometry (LC-QTOF-MS) in combination with nuclear magnetic resonance (NMR) spectroscopy, provides an excellent analytical tool to screen for unknowns in medicinal products, food supplements and herbal formulations. This analytical approach has been successfully applied to analyze thousands of samples. The general screening method usually starts with a methanol extraction of tablets/capsules followed by liquid chromatographic separation on a Halo Phenyl-Hexyl column (2.7μm; 100mm×2.1mm) using an acetonitrile/0.1% formic acid gradient as eluent. The accurate mass of peaks of interest was recorded and a search made against an in-house database containing approximately 4200 substances, mostly pharmaceutical compounds. The search could be general or tailored against different classes of compounds. Hits were confirmed by analyzing a reference substance and/or by NMR. Quantification was normally performed with quantitative NMR (qNMR) spectroscopy. Applications for weight-loss substances like sibutramine and orlistat, sexual potency enhancement (PDE-5 inhibitors), and analgesic drugs are presented in this study. We have also identified prostaglandin analogues in eyelash growth serum, exemplified by isopropyl cloprostenate and bimatoprost. For creams and ointments, matrix solid-phase dispersion (MSPD) was found to give clean extracts with high recovery prior to LC-MS analyses. The structural elucidation of cetilistat, a new weight-loss substance recently found in illegal medicines purchased over the Internet, is also presented. Copyright © 2014 Elsevier B.V. All rights reserved.
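
    A minimal sketch of the accurate-mass database search step described above; the mini-database, the [M+H]+ adduct, and the 5 ppm tolerance are illustrative choices, not the laboratory's actual settings.

```python
PROTON_MASS = 1.007276   # Da

# illustrative subset of an in-house database of neutral monoisotopic masses (Da)
DATABASE = {
    "sibutramine": 279.1754,
    "sildenafil": 474.2049,
    "tadalafil": 389.1376,
}

def match_accurate_mass(mz_observed, adduct_mass=PROTON_MASS, tol_ppm=5.0):
    """Return database compounds whose neutral mass matches the deprotonated
    observation within tol_ppm (assuming an [M+H]+ ion)."""
    neutral = mz_observed - adduct_mass
    hits = []
    for name, mono in DATABASE.items():
        ppm_error = (neutral - mono) / mono * 1e6
        if abs(ppm_error) <= tol_ppm:
            hits.append((name, round(ppm_error, 2)))
    return hits

print(match_accurate_mass(280.1829))   # close to the [M+H]+ of sibutramine
```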

  10. Technical description of endoscopic ultrasonography with fine-needle aspiration for the staging of lung cancer.

    PubMed

    Kramer, Henk; van Putten, John W G; Douma, W Rob; Smidt, Alie A; van Dullemen, Hendrik M; Groen, Harry J M

    2005-02-01

    Endoscopic ultrasonography (EUS) is a novel method for staging of the mediastinum in lung cancer patients. The recent development of linear scanners enables safe and accurate fine-needle aspiration (FNA) of mediastinal and upper abdominal structures under real-time ultrasound guidance. However, various methods and equipment for mediastinal EUS-FNA are being used throughout the world, and a detailed description of the procedures is lacking. A thorough description of linear EUS-FNA is needed. A step-by-step description of the linear EUS-FNA procedure as performed in our hospital will be provided. Ultrasonographic landmarks will be shown on images. The procedure will be related to published literature, with a systematic literature search. EUS-FNA is an outpatient procedure under conscious sedation. The typical linear EUS-FNA procedure starts with examination of the retroperitoneal area. After this, systematic scanning of the mediastinum is performed at intervals of 1-2 cm. Abnormalities are noted, and FNA of the abnormalities can be performed. Specimens are assessed for cellularity on-site. The entire procedure takes 45-60 min. EUS-FNA is minimally invasive, accurate, and fast. Anatomical areas can be reached that are inaccessible for cervical mediastinoscopy. EUS-FNA is useful for the staging of lung cancer or the assessment and diagnosis of abnormalities in the posterior mediastinum.

  11. Web Analytics

    EPA Pesticide Factsheets

    EPA’s Web Analytics Program collects, analyzes, and provides reports on traffic, quality assurance, and customer satisfaction metrics for EPA’s website. The program uses a variety of analytics tools, including Google Analytics and CrazyEgg.

  12. Hybrid Analytical and Data-Driven Modeling for Feed-Forward Robot Control †

    PubMed Central

    Reinhart, René Felix; Shareef, Zeeshan; Steil, Jochen Jakob

    2017-01-01

    Feed-forward model-based control relies on models of the controlled plant, e.g., in robotics on accurate knowledge of manipulator kinematics or dynamics. However, mechanical and analytical models do not capture all aspects of a plant’s intrinsic properties and there remain unmodeled dynamics due to varying parameters, unmodeled friction or soft materials. In this context, machine learning is an alternative suitable technique to extract non-linear plant models from data. However, fully data-based models suffer from inaccuracies as well and are inefficient if they include learning of well-known analytical models. This paper thus argues that feed-forward control based on hybrid models comprising an analytical model and a learned error model can significantly improve modeling accuracy. Hybrid modeling here serves to combine the best of both modeling worlds. The hybrid modeling methodology is described and the approach is demonstrated for two typical problems in robotics, i.e., inverse kinematics control and computed torque control. The former is performed for a redundant soft robot and the latter for a rigid industrial robot with redundant degrees of freedom, where a complete analytical model is not available for any of the platforms. PMID:28208697
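
    A minimal sketch of the hybrid idea on a one-degree-of-freedom toy plant (not the authors' robot models): an analytical gravity model supplies most of the feed-forward torque, and a small ridge-regression error model learned from data corrects the unmodeled friction. Plant parameters and features are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_torque(q, qdot):
    """'True' plant: gravity plus viscous and Coulomb friction (friction unknown to the analytic model)."""
    return 9.81 * 0.5 * np.sin(q) + 0.7 * qdot + 0.05 * np.sign(qdot)

def analytic_torque(q, qdot):
    """Analytical feed-forward model: gravity term only."""
    return 9.81 * 0.5 * np.sin(q)

# collect training data and fit a ridge-regression error model on simple features
q = rng.uniform(-np.pi, np.pi, 500)
qdot = rng.uniform(-2.0, 2.0, 500)
residual = true_torque(q, qdot) - analytic_torque(q, qdot)
X = np.column_stack([qdot, np.sign(qdot), np.ones_like(q)])      # hand-picked features
w = np.linalg.solve(X.T @ X + 1e-6 * np.eye(3), X.T @ residual)  # ridge solution

def hybrid_torque(q, qdot):
    """Analytical model plus learned residual correction."""
    features = np.array([qdot, np.sign(qdot), 1.0])
    return analytic_torque(q, qdot) + features @ w

q_test, qdot_test = 0.8, 1.3
print("analytic-only error:", abs(true_torque(q_test, qdot_test) - analytic_torque(q_test, qdot_test)))
print("hybrid model error :", abs(true_torque(q_test, qdot_test) - hybrid_torque(q_test, qdot_test)))
```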

  13. Water Lone Pair Delocalization in Classical and Quantum Descriptions of the Hydration of Model Ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Remsing, Richard C.; Duignan, Timothy T.; Baer, Marcel D.

    Understanding the nature of ionic hydration at a fundamental level has eluded scientists despite intense interest for nearly a century. In particular, the microscopic origins of the asymmetry of ion solvation thermodynamics with respect to the sign of the ionic charge remain a mystery. Here, we determine the response of accurate quantum mechanical water models to strong nanoscale solvation forces arising from excluded volumes and ionic electrostatic fields. This is compared to the predictions of two important limiting classes of classical models of water with fixed point charges, differing in their treatment of "lone-pair" electrons. Using the quantum water model as our standard of accuracy, we find that a single fixed classical treatment of lone pair electrons cannot accurately describe solvation of both apolar and cationic solutes, underlining the need for a more flexible description of local electronic effects in solvation processes. However, we explicitly show that all water models studied respond to weak long-ranged electrostatic perturbations in a manner that follows macroscopic dielectric continuum models, as would be expected. We emphasize the importance of these findings in the context of realistic ion models, using density functional theory and empirical models, and discuss the implications of our results for quantitatively accurate reduced descriptions of solvation in dielectric media.

  14. Integrated signal probe based aptasensor for dual-analyte detection.

    PubMed

    Xiang, Juan; Pi, Xiaomei; Chen, Xiaoqing; Xiang, Lei; Yang, Minghui; Ren, Hao; Shen, Xiaojuan; Qi, Ning; Deng, Chunyan

    2017-10-15

    For multi-analyte detection, although the sensitivity has commonly met practical requirements, the reliability, reproducibility, and stability need to be further improved. In this work, two different aptamer probes labeled with redox tags were used as signal probe 1 (sP1) and signal probe 2 (sP2), which were integrated into one unified DNA architecture to develop the integrated signal probe (ISP). Compared with conventional independent signal probes for simultaneous multi-analyte detection, the proposed ISP was more reproducible and accurate. This is because integrating sP1 and sP2 into one DNA structure ensures identical modification conditions and an equal stoichiometric ratio between sP1 and sP2; furthermore, cross-interference between sP1 and sP2 can be prevented by regulating their complementary positions. The ISP-based assay system therefore represents significant progress for dual-analyte detection. Combined with gold nanoparticles (AuNPs) for signal amplification, an ISP/AuNPs-based aptasensor for sensitive dual-analyte detection was explored. Based on DNA structural switching induced by target binding to the aptamers, simultaneous dual-analyte detection was achieved simply by monitoring the electrochemical responses of methylene blue (MB) and ferrocene (Fc). This detection system possesses advantages such as simplicity of design, easy operation, good reproducibility and accuracy, and high sensitivity and selectivity, which indicates the excellent potential of this aptasensor in the field of clinical diagnosis and other molecular sensing applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Managing knowledge business intelligence: A cognitive analytic approach

    NASA Astrophysics Data System (ADS)

    Surbakti, Herison; Ta'a, Azman

    2017-10-01

    The purpose of this paper is to identify and analyze the integration of Knowledge Management (KM) and Business Intelligence (BI) in order to achieve a competitive edge in the context of intellectual capital. The methodology includes a review of the literature and analysis of interview data from managers in the corporate sector, together with models established by different authors. BI technologies are strongly associated with the process of KM for attaining competitive advantage. KM is strongly influenced by human and social factors and turns them into the most valuable assets when an efficient system is run under BI tactics and technologies. Predictive analytics, however, is rooted in the field of BI. Extracting tacit knowledge so that it can serve as a new source for BI analysis remains a major challenge. Analytic methods that address the diversity of the data corpus - structured and unstructured - require a cognitive approach to provide estimative results and to yield actionable descriptive, predictive, and prescriptive outcomes. This remains a substantial challenge, and this paper aims to elaborate on it in this initial work.

  16. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    DTIC Science & Technology

    2017-09-19

    Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this... Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling.

  17. Analytical dose modeling for preclinical proton irradiation of millimetric targets.

    PubMed

    Vanstalle, Marie; Constanzo, Julie; Karakaya, Yusuf; Finck, Christian; Rousseau, Marc; Brasse, David

    2018-01-01

    Due to the considerable development of proton radiotherapy, several proton platforms have emerged to irradiate small animals in order to study the biological effectiveness of proton radiation. A dedicated analytical treatment planning tool was developed in this study to accurately calculate the delivered dose given the specific constraints imposed by the small dimensions of the irradiated areas. The treatment planning system (TPS) developed in this study is based on an analytical formulation of the Bragg peak and uses experimental range values of protons. The method was validated after comparison with experimental data from the literature and then compared to Monte Carlo simulations conducted using Geant4. Three examples of treatment planning, performed with phantoms made of water targets and bone-slab insert, were generated with the analytical formulation and Geant4. Each treatment planning was evaluated using dose-volume histograms and gamma index maps. We demonstrate the value of the analytical function for mouse irradiation, which requires a targeting accuracy of 0.1 mm. Using the appropriate database, the analytical modeling limits the errors caused by misestimating the stopping power. For example, 99% of a 1-mm tumor irradiated with a 24-MeV beam receives the prescribed dose. The analytical dose deviations from the prescribed dose remain within the dose tolerances stated by report 62 of the International Commission on Radiation Units and Measurements for all tested configurations. In addition, the gamma index maps show that the highly constrained targeting accuracy of 0.1 mm for mouse irradiation leads to a significant disagreement between Geant4 and the reference. This simulated treatment planning is nevertheless compatible with a targeting accuracy exceeding 0.2 mm, corresponding to rat and rabbit irradiations. Good dose accuracy for millimetric tumors is achieved with the analytical calculation used in this work. These volume sizes are typical in mouse
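
    One ingredient such analytical dose models typically rest on is the energy-range relation for protons in water; the sketch below uses the approximate Bragg-Kleeman power law R ≈ αE^p with commonly quoted coefficients, which are assumptions and not the experimental range database used in the paper.

```python
ALPHA = 0.0022   # cm / MeV**P, approximate Bragg-Kleeman coefficient for water
P = 1.77         # dimensionless exponent, approximate

def range_in_water_cm(energy_MeV):
    """Approximate CSDA range of protons in water via R = ALPHA * E**P."""
    return ALPHA * energy_MeV ** P

def energy_for_range(range_cm):
    """Invert the power law to pick the beam energy for a target depth."""
    return (range_cm / ALPHA) ** (1.0 / P)

for E in (24, 70, 160):
    print(f"{E:4d} MeV  ->  range ~ {range_in_water_cm(E):5.2f} cm in water")
print(f"a 1-mm target at ~0.6 cm depth needs ~ {energy_for_range(0.6):.1f} MeV")
```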

  18. Universal behaviour of interoccurrence times between losses in financial markets: An analytical description

    NASA Astrophysics Data System (ADS)

    Ludescher, J.; Tsallis, C.; Bunde, A.

    2011-09-01

    We consider 16 representative financial records (stocks, indices, commodities, and exchange rates) and study the distribution P_Q(r) of the interoccurrence times r between daily losses below negative thresholds -Q, for fixed mean interoccurrence time R_Q. We find that in all cases, P_Q(r) follows the form P_Q(r) ~ 1/[1 + (q-1)βr]^(1/(q-1)), where β and q are universal constants that depend only on R_Q, but not on a specific asset. While β depends only slightly on R_Q, the q-value increases logarithmically with R_Q, q = 1 + q_0 ln(R_Q/2), such that for R_Q → 2, P_Q(r) approaches a simple exponential, P_Q(r) ≅ 2^(-r). The fact that P_Q does not scale with R_Q is due to the multifractality of the financial markets. The analytic form of P_Q also allows one to estimate both the risk function and the Value-at-Risk, and thus to improve the estimation of financial risk.
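
    A short numerical check of the quoted functional form under stated assumptions (the β and q_0 values are placeholders, not the fitted constants of the paper): the q-exponential P_Q(r) is evaluated for several mean interoccurrence times, and in the limit q → 1 it collapses to the simple exponential 2^(-r) mentioned for R_Q → 2.

```python
import math

def p_q(r, q, beta):
    """q-exponential interoccurrence density P_Q(r) ~ 1/[1 + (q-1)*beta*r]^(1/(q-1))."""
    if abs(q - 1.0) < 1e-9:                 # q -> 1 limit: ordinary exponential
        return math.exp(-beta * r)
    return (1.0 + (q - 1.0) * beta * r) ** (-1.0 / (q - 1.0))

# placeholder constants: q grows logarithmically with the mean interoccurrence time R_Q
q0, beta = 0.15, math.log(2.0)
for R_Q in (2, 5, 30, 70):
    q = 1.0 + q0 * math.log(R_Q / 2.0)
    tail_ratio = p_q(3 * R_Q, q, beta) / p_q(R_Q, q, beta)
    print(f"R_Q = {R_Q:3d}   q = {q:.2f}   P_Q(3 R_Q)/P_Q(R_Q) = {tail_ratio:.3e}")
```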

  19. Accurate and general treatment of electrostatic interaction in Hamiltonian adaptive resolution simulations

    NASA Astrophysics Data System (ADS)

    Heidari, M.; Cortes-Huerto, R.; Donadio, D.; Potestio, R.

    2016-10-01

    In adaptive resolution simulations the same system is concurrently modeled with different resolution in different subdomains of the simulation box, thereby enabling an accurate description in a small but relevant region, while the rest is treated with a computationally parsimonious model. In this framework, electrostatic interaction, whose accurate treatment is a crucial aspect in the realistic modeling of soft matter and biological systems, represents a particularly acute problem due to the intrinsic long-range nature of Coulomb potential. In the present work we propose and validate the usage of a short-range modification of Coulomb potential, the Damped shifted force (DSF) model, in the context of the Hamiltonian adaptive resolution simulation (H-AdResS) scheme. This approach, which is here validated on bulk water, ensures a reliable reproduction of the structural and dynamical properties of the liquid, and enables a seamless embedding in the H-AdResS framework. The resulting dual-resolution setup is implemented in the LAMMPS simulation package, and its customized version employed in the present work is made publicly available.
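
    A minimal sketch of the damped shifted force (DSF) pair interaction referred to above, in the Fennell-Gezelter form in which both the energy and the force vanish smoothly at the cutoff; the damping parameter, cutoff, and unit conventions are illustrative, and this is not the H-AdResS/LAMMPS implementation itself.

```python
import math

def dsf_pair_energy(qi, qj, r, alpha=0.2, r_cut=12.0):
    """Damped shifted force (DSF) Coulomb pair energy between two point charges.

    Charges in elementary units, distances in Angstrom; the 1/(4*pi*eps0)
    prefactor is omitted for clarity.
    """
    if r >= r_cut:
        return 0.0
    erfc = math.erfc
    shift = erfc(alpha * r_cut) / r_cut
    force_shift = (erfc(alpha * r_cut) / r_cut**2
                   + 2.0 * alpha / math.sqrt(math.pi)
                   * math.exp(-(alpha * r_cut) ** 2) / r_cut)
    return qi * qj * (erfc(alpha * r) / r - shift + force_shift * (r - r_cut))

# the interaction decays smoothly to zero as r approaches the cutoff:
for r in (3.0, 6.0, 11.99, 12.0):
    print(f"r = {r:5.2f} A   U = {dsf_pair_energy(1.0, -1.0, r):+.6f}")
```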

  20. Analytical Validation of a Highly Quantitative, Sensitive, Accurate, and Reproducible Assay (HERmark®) for the Measurement of HER2 Total Protein and HER2 Homodimers in FFPE Breast Cancer Tumor Specimens

    PubMed Central

    Larson, Jeffrey S.; Goodman, Laurie J.; Tan, Yuping; Defazio-Eli, Lisa; Paquet, Agnes C.; Cook, Jennifer W.; Rivera, Amber; Frankson, Kristi; Bose, Jolly; Chen, Lili; Cheung, Judy; Shi, Yining; Irwin, Sarah; Kiss, Linda D. B.; Huang, Weidong; Utter, Shannon; Sherwood, Thomas; Bates, Michael; Weidler, Jodi; Parry, Gordon; Winslow, John; Petropoulos, Christos J.; Whitcomb, Jeannette M.

    2010-01-01

    We report here the results of the analytical validation of assays that measure HER2 total protein (H2T) and HER2 homodimer (H2D) expression in Formalin Fixed Paraffin Embedded (FFPE) breast cancer tumors as well as cell line controls. The assays are based on the VeraTag technology platform and are commercially available through a central CAP-accredited clinical reference laboratory. The accuracy of H2T measurements spans a broad dynamic range (2-3 logs) as evaluated by comparison with cross-validating technologies. The measurement of H2T expression demonstrates a sensitivity that is approximately 7–10 times greater than conventional immunohistochemistry (IHC) (HercepTest). The HERmark assay is a quantitative assay that sensitively and reproducibly measures continuous H2T and H2D protein expression levels and therefore may have the potential to stratify patients more accurately with respect to response to HER2-targeted therapies than current methods which rely on semiquantitative protein measurements (IHC) or on indirect assessments of gene amplification (FISH). PMID:21151530

  1. Generalized Stoner-Wohlfarth model accurately describing the switching processes in pseudo-single ferromagnetic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cimpoesu, Dorin, E-mail: cdorin@uaic.ro; Stoleriu, Laurentiu; Stancu, Alexandru

    2013-12-14

    We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of the SW model, the concept of the critical curve, and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
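
    For context, a short sketch of the classical (unmodified) Stoner-Wohlfarth ingredient the generalized model builds on: the angular dependence of the switching field of a uniaxial macrospin, h_sw(ψ) = (cos^(2/3)ψ + sin^(2/3)ψ)^(-3/2) in units of the anisotropy field. This is the textbook astroid result, not the generalized anisotropy energy proposed in the paper.

```python
import math

def sw_switching_field(psi_deg):
    """Reduced Stoner-Wohlfarth switching field h_sw = H_sw/H_K for a uniaxial
    macrospin, with psi the angle between the applied field and the easy axis."""
    psi = math.radians(psi_deg)
    c, s = abs(math.cos(psi)), abs(math.sin(psi))
    return (c ** (2.0 / 3.0) + s ** (2.0 / 3.0)) ** -1.5

for angle in (0, 15, 30, 45, 60, 75, 90):
    print(f"psi = {angle:2d} deg   h_sw = {sw_switching_field(angle):.3f}")
```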

  2. A multi-subject evaluation of uncertainty in anatomical landmark location on shoulder kinematic description.

    PubMed

    Langenderfer, Joseph E; Rullkoetter, Paul J; Mell, Amy G; Laz, Peter J

    2009-04-01

    An accurate assessment of shoulder kinematics is useful for understanding healthy normal and pathological mechanics. Small variability in identifying and locating anatomical landmarks (ALs) has potential to affect reported shoulder kinematics. The objectives of this study were to quantify the effect of landmark location variability on scapular and humeral kinematic descriptions for multiple subjects using probabilistic analysis methods, and to evaluate the consistency in results across multiple subjects. Data from 11 healthy subjects performing humeral elevation in the scapular plane were used to calculate Euler angles describing humeral and scapular kinematics. Probabilistic analyses were performed for each subject to simulate uncertainty in the locations of 13 upper-extremity ALs. For standard deviations of 4 mm in landmark location, the analysis predicted Euler angle envelopes between the 1 and 99 percentile bounds of up to 16.6 degrees. While absolute kinematics varied with the subject, the average 1-99% kinematic ranges for the motion were consistent across subjects and sensitivity factors showed no statistically significant differences between subjects. The description of humeral kinematics was most sensitive to the location of landmarks on the thorax, while landmarks on the scapula had the greatest effect on the description of scapular elevation. The findings of this study can provide a better understanding of kinematic variability, which can aid in making accurate clinical diagnoses and refining kinematic measurement techniques.
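
    A minimal sketch of the Monte Carlo landmark-perturbation idea on a toy two-landmark segment rather than the 13-landmark shoulder model; the 4 mm standard deviation follows the abstract, while the nominal geometry and sample count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def elevation_angle(origin, tip):
    """Elevation angle (deg) of the vector origin -> tip relative to the horizontal plane."""
    v = tip - origin
    return np.degrees(np.arctan2(v[2], np.hypot(v[0], v[1])))

# nominal landmark locations in mm (illustrative two-landmark segment)
origin = np.array([0.0, 0.0, 0.0])
tip = np.array([250.0, 0.0, 150.0])

sigma = 4.0      # landmark-location standard deviation (mm), as in the study
samples = []
for _ in range(20000):
    o = origin + rng.normal(0.0, sigma, 3)
    t = tip + rng.normal(0.0, sigma, 3)
    samples.append(elevation_angle(o, t))

p1, p99 = np.percentile(samples, [1, 99])
print(f"nominal angle {elevation_angle(origin, tip):.1f} deg, 1-99% envelope {p99 - p1:.1f} deg")
```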

  3. Fall Velocities of Hydrometeors in the Atmosphere: Refinements to a Continuous Analytical Power Law.

    NASA Astrophysics Data System (ADS)

    Khvorostyanov, Vitaly I.; Curry, Judith A.

    2005-12-01

    This paper extends the previous research of the authors on the unified representation of fall velocities for both liquid and crystalline particles as a power law over the entire size range of hydrometeors observed in the atmosphere. The power-law coefficients are determined as continuous analytical functions of the Best or Reynolds number or of the particle size. Here, analytical expressions are formulated for the turbulent corrections to the Reynolds number and to the power-law coefficients that describe the continuous transition from the laminar to the turbulent flow around a falling particle. A simple analytical expression is found for the correction of fall velocities for temperature and pressure. These expressions and the resulting fall velocities are compared with observations and other calculations for a range of ice crystal habits and sizes. This approach provides a continuous analytical power-law description of the terminal velocities of liquid and crystalline hydrometeors with sufficiently high accuracy and can be directly used in bin-resolving models or incorporated into parameterizations for cloud- and large-scale models and remote sensing techniques.
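
    A brief sketch of the Best-number route to terminal velocity that the continuous power-law scheme refines: the classical interpolation Re(X) with δ0 ≈ 9.06 and C0 ≈ 0.292 is applied to spherical drops. This omits the paper's turbulent and temperature-pressure corrections, and the drop parameters are illustrative.

```python
import math

DELTA0, C0 = 9.06, 0.292       # boundary-layer constants of the Re(X) interpolation

def reynolds_from_best(X):
    """Continuous interpolation Re(X) between the viscous and inertial regimes."""
    c1 = 4.0 / (DELTA0**2 * math.sqrt(C0))
    return 0.25 * DELTA0**2 * (math.sqrt(1.0 + c1 * math.sqrt(X)) - 1.0) ** 2

def terminal_velocity_sphere(D, rho_p=1000.0, rho_a=1.2, nu=1.5e-5, g=9.81):
    """Terminal fall speed (m/s) of a spherical drop of diameter D (m)."""
    X = 4.0 * rho_p * g * D**3 / (3.0 * rho_a * nu**2)   # Best (Davies) number
    Re = reynolds_from_best(X)
    return Re * nu / D

for D_mm in (0.1, 0.5, 1.0, 2.0):
    print(f"D = {D_mm:3.1f} mm   v_t ~ {terminal_velocity_sphere(D_mm * 1e-3):4.2f} m/s")
```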

  4. Accurate determination of Brillouin frequency based on cross recurrence plot analysis in Brillouin distributed fiber sensor

    NASA Astrophysics Data System (ADS)

    Haneef, Shahna M.; Srijith, K.; Venkitesh, D.; Srinivasan, B.

    2017-04-01

    We propose and demonstrate the use of cross recurrence plot analysis (CRPA) to accurately determine the Brillouin shift due to strain and temperature in a Brillouin distributed fiber sensor. This signal processing technique, which is implemented in Brillouin sensors for the first time, relies on a priori data, i.e., the lineshape of the Brillouin gain spectrum and its similarity with the spectral features measured at different locations along the fiber. An analytical and experimental investigation of the proposed scheme is presented in this paper.
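
    A minimal sketch of a cross-recurrence matrix between a noisy measured Brillouin gain spectrum and an a priori reference lineshape (not the authors' processing chain); the Lorentzian parameters, noise level, recurrence threshold, and the diagonal-offset readout are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def lorentzian(f, f0, width):
    return 1.0 / (1.0 + ((f - f0) / (width / 2.0)) ** 2)

f = np.linspace(10.60e9, 10.95e9, 200)                    # scanned frequency grid (Hz)
reference = lorentzian(f, 10.80e9, 30e6)                  # a priori lineshape
measured = lorentzian(f, 10.83e9, 30e6) + 0.03 * rng.normal(size=f.size)  # shifted + noisy

def cross_recurrence(x, y, eps=0.1):
    """CR(i, j) = 1 where |x_i - y_j| < eps, else 0."""
    return (np.abs(x[:, None] - y[None, :]) < eps).astype(int)

CR = cross_recurrence(measured, reference)
# the dominant diagonal structure of CR is displaced by the spectral shift;
# read it out from the diagonal offset that maximises the recurrence count
best = max(range(-50, 51), key=lambda k: np.trace(CR, offset=k))
df = f[1] - f[0]
print(f"estimated shift of the measured spectrum ~ {-best * df / 1e6:.1f} MHz")
```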

  5. Analytic materials

    PubMed Central

    2016-01-01

    The theory of inhomogeneous analytic materials is developed. These are materials where the coefficients entering the equations involve analytic functions. Three types of analytic materials are identified. The first two types involve an integer p. If p takes its maximum value, then we have a complete analytic material. Otherwise, it is an incomplete analytic material of rank p. For two-dimensional materials, further progress can be made in the identification of analytic materials by using the well-known fact that a 90° rotation applied to a divergence-free field in a simply connected domain yields a curl-free field, and this can then be expressed as the gradient of a potential. Other exact results for the fields in inhomogeneous media are reviewed. Also reviewed is the subject of metamaterials, as these materials provide a way of realizing desirable coefficients in the equations. PMID:27956882

  6. Analytical description of generation of the residual current density in the plasma produced by a few-cycle laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silaev, A. A., E-mail: silaev@appl.sci-nnov.ru; Vvedenskii, N. V., E-mail: vved@appl.sci-nnov.ru; University of Nizhny Novgorod, Nizhny Novgorod 603950

    2015-05-15

    When a gas is ionized by a few-cycle laser pulse, some residual current density (RCD) of free electrons remains in the produced plasma after the passage of the laser pulse. This quasi-dc RCD is an initial impetus to plasma polarization and excitation of the plasma oscillations which can radiate terahertz (THz) waves. In this work, an analytical model for the calculation of the RCD excited by a few-cycle laser pulse is developed for the first time. The dependences of the RCD on the carrier-envelope phase (CEP), wavelength, duration, and intensity of the laser pulse are derived. It is shown that the maximum RCD corresponding to the optimal CEP increases with the laser pulse wavelength, which indicates the prospects of using mid-infrared few-cycle laser pulses in schemes for the generation of high-power THz pulses. Analytical formulas for the optimal pulse intensity and the maximum efficiency of excitation of the RCD are obtained. Based on a numerical solution of the 3D time-dependent Schrödinger equation for hydrogen atoms, the RCD dependence on the CEP is calculated over a wide range of wavelengths. The high accuracy of the analytical formulas is demonstrated for laser pulse parameters that correspond to the tunneling regime of ionization.

  7. Experimental/analytical approach to understanding mistuning in a transonic wind tunnel compressor

    NASA Technical Reports Server (NTRS)

    Kaiser, Teri; Hansen, Reed S.; Nguyen, Nhan; Hampton, Roy W.; Muzzio, Doug; Chargin, Mladen K.; Guist, Roy; Hamm, Ken; Walker, Len

    1994-01-01

    This paper will briefly set forth some of the basic tenets of mistuned rotating bladed-disk assemblies. The experience with an existing three stage compressor in a transonic wind tunnel will be documented. The manner in which the theoretical properties manifest themselves in this non-ideal compressor will be described. A description of mistuning behaviors that can and cannot be accurately substantiated will be discussed.

  8. Cogeneration Technology Alternatives Study (CTAS). Volume 2: Analytical approach

    NASA Technical Reports Server (NTRS)

    Gerlaugh, H. E.; Hall, E. W.; Brown, D. H.; Priestley, R. R.; Knightly, W. F.

    1980-01-01

    Various advanced energy conversion systems were compared with each other and with current-technology systems in terms of their savings in fuel energy, costs, and emissions in individual plants and on a national level. The ground rules established by NASA and assumptions made by the General Electric Company in performing this cogeneration technology alternatives study are presented. The analytical methodology employed is described in detail and is illustrated with numerical examples, together with a description of the computer program used in calculating over 7000 energy conversion system-industrial process applications. For Vol. 1, see 80N24797.

  9. An Accurate Absorption-Based Net Primary Production Model for the Global Ocean

    NASA Astrophysics Data System (ADS)

    Silsbe, G.; Westberry, T. K.; Behrenfeld, M. J.; Halsey, K.; Milligan, A.

    2016-02-01

    Net primary production (NPP) is a vital living link in the global carbon cycle, and understanding how it varies through space, time, and across climatic oscillations (e.g. ENSO) is a key objective in oceanographic research. The continual improvement of ocean observing satellites and data analytics now presents greater opportunities for advanced understanding and characterization of the factors regulating NPP. In particular, the emergence of spectral inversion algorithms now permits accurate retrievals of the phytoplankton absorption coefficient (aΦ) from space. As NPP reflects the efficiency with which absorbed energy is converted into carbon biomass, aΦ measurements circumvent chlorophyll-based empirical approaches by permitting direct and accurate measurements of phytoplankton energy absorption. It has long been recognized, and perhaps underappreciated, that NPP and phytoplankton growth rates display muted variability when normalized to aΦ rather than chlorophyll. Here we present a novel absorption-based NPP model that parameterizes the underlying physiological mechanisms behind this muted variability, and apply this physiological model to the global ocean. Through a comparison against field data from the Hawaii and Bermuda Ocean Time Series, we demonstrate how this approach yields more accurate NPP measurements than other published NPP models. By normalizing NPP to satellite estimates of phytoplankton carbon biomass, this presentation also explores the seasonality of phytoplankton growth rates across several oceanic regions. Finally, we discuss how future advances in remote sensing (e.g. hyperspectral satellites, LIDAR, autonomous profilers) can be exploited to further improve absorption-based NPP models.

  10. Lessons learned from the study of masturbation and its comorbidity with psychiatric disorders in children: The first analytic study

    PubMed Central

    Tashakori, Ashraf; Safavi, Atefeh; Neamatpour, Sorour

    2017-01-01

    Background: The main source of information about children’s masturbation is case reports, and consistent, accurate information is lacking. Objective: This study aimed to determine the prevalence and underlying factors of masturbation and its comorbidity with psychiatric disorders in children. Methods: In this descriptive-analytical study, 98 children referred to the Pediatrics Clinic of the Psychiatric Ward, Golestan Hospital, Ahvaz, Southwest Iran, were selected by convenience sampling in 2014. Disorders were diagnosed by clinical interview based on the Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV) and the Child Symptom Inventory-4 (CSI-4). We also used a questionnaire containing demographic information about the patients and their families, along with other data. Data were analyzed using descriptive statistics and the chi-square test with SPSS software version 16. Results: Of the children who participated in this study (most of whom were boys), 31.6% engaged in masturbation. Phobias (p=0.002), separation anxiety disorder (p=0.044), generalized anxiety disorder (p=0.037), motor tics (p=0.033), stress disorder (p=0.005), oppositional defiant disorder (p=0.044), thumb sucking (p=0.000) and conduct disorder (p=0.001) were associated with masturbation. Conclusion: Masturbation was common in children referred to the psychiatric clinic, and may be more associated with oppositional defiant disorder or conduct disorder, some anxiety disorders, motor tics and other stereotypical behavior. The authors recommend further probing for psychiatric disorders in children with unusual sexual behavior. PMID:28607641

  11. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    PubMed

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1·x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1·x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than with a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
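
    As a rough illustration of the comparison described above, the sketch below fits invented early-storage recovery data with both an ordinary least-squares line and the radical model y = B1·x^0.5 + B0, then extrapolates each to a nominal shelf life. The data points, time span and shelf-life value are assumptions for demonstration only, not values from the study.

    ```python
    # Illustrative sketch (not the authors' code): fit early frozen-storage recovery
    # data with a linear model and with the radical model y = B1*sqrt(x) + B0,
    # then extrapolate both to the full claimed shelf life. All data are invented.
    import numpy as np

    months = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # ~20% of a 20-month shelf life
    recovery = np.array([100.0, 97.8, 96.9, 96.1, 95.6])   # percent of initial value

    # Ordinary least squares: y = m*x + b
    m, b = np.polyfit(months, recovery, 1)

    # Radical model: y = B1*sqrt(x) + B0 (linear in sqrt(x), so polyfit still applies)
    B1, B0 = np.polyfit(np.sqrt(months), recovery, 1)

    shelf_life = 20.0  # months
    print(f"linear extrapolation : {m * shelf_life + b:.1f}% recovery")
    print(f"radical extrapolation: {B1 * np.sqrt(shelf_life) + B0:.1f}% recovery")
    ```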

  12. Non-Schwarzschild black-hole metric in four dimensional higher derivative gravity: Analytical approximation

    NASA Astrophysics Data System (ADS)

    Kokkotas, K. D.; Konoplya, R. A.; Zhidenko, A.

    2017-09-01

    Higher derivative extensions of Einstein gravity are important within the string theory approach to gravity and as alternative and effective theories of gravity. H. Lü, A. Perkins, C. Pope, and K. Stelle [Phys. Rev. Lett. 114, 171601 (2015), 10.1103/PhysRevLett.114.171601] found a numerical solution describing a spherically symmetric non-Schwarzschild asymptotically flat black hole in Einstein gravity with added higher derivative terms. Using the general and quickly convergent parametrization in terms of the continued fractions, we represent this numerical solution in the analytical form, which is accurate not only near the event horizon or far from the black hole, but in the whole space. Thereby, the obtained analytical form of the metric allows one to study easily all the further properties of the black hole, such as thermodynamics, Hawking radiation, particle motion, accretion, perturbations, stability, quasinormal spectrum, etc. Thus, the found analytical approximate representation can serve in the same way as an exact solution.
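
    The continued-fraction construction mentioned above can be illustrated with a generic truncated continued fraction evaluated by backward recursion; the coefficients and expansion variable in the sketch below are invented and stand in for the authors' metric functions and compactified radial coordinate, which are not reproduced here.

    ```python
    # Generic backward-recursion evaluation of a truncated continued fraction of the
    # form a1*x / (1 + a2*x / (1 + a3*x / (1 + ...))). The specific coefficients and
    # radial variable of the black-hole parametrization are not reproduced here.
    def continued_fraction(x, coeffs):
        """Evaluate the truncated continued fraction for the given coefficients."""
        value = 0.0
        for a in reversed(coeffs[1:]):
            value = a * x / (1.0 + value)
        return coeffs[0] * x / (1.0 + value)

    # Quick convergence check as more terms are kept (coefficients are invented).
    coeffs = [0.4, -0.2, 0.1, -0.05, 0.02]
    for n in range(1, len(coeffs) + 1):
        print(n, continued_fraction(0.5, coeffs[:n]))
    ```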

  13. Generalized analytic solutions and response characteristics of magnetotelluric fields on anisotropic infinite faults

    NASA Astrophysics Data System (ADS)

    Bing, Xue; Yicai, Ji

    2018-06-01

    In order to understand directly and analyze accurately the detected magnetotelluric (MT) data on anisotropic infinite faults, two-dimensional partial differential equations of the MT fields are used to establish a model of anisotropic infinite faults using the Fourier transform method. A multi-fault model is developed to expand the one-fault model. The transverse electric mode and transverse magnetic mode analytic solutions are derived using two-infinite-fault models. The infinite integral terms of the quasi-analytic solutions are discussed. The dual-fault model is computed using the finite element method to verify the correctness of the solutions. The MT responses of isotropic and anisotropic media are calculated to analyze the response functions of different anisotropic conductivity structures. The influence of the thickness and conductivity of the media on the MT responses is discussed, and the underlying analytic principles are given. The results are significant for understanding MT responses and for interpreting data from complex anisotropic infinite faults.

  14. Systems Biology Graphical Notation: Process Description language Level 1 Version 1.3.

    PubMed

    Moodie, Stuart; Le Novère, Nicolas; Demir, Emek; Mi, Huaiyu; Villéger, Alice

    2015-09-04

    The Systems Biology Graphical Notation (SBGN) is an international community effort for standardized graphical representations of biological pathways and networks. The goal of SBGN is to provide unambiguous pathway and network maps for readers with different scientific backgrounds as well as to support efficient and accurate exchange of biological knowledge between different research communities, industry, and other players in systems biology. Three SBGN languages, Process Description (PD), Entity Relationship (ER) and Activity Flow (AF), allow for the representation of different aspects of biological and biochemical systems at different levels of detail. The SBGN Process Description language represents biological entities and processes between these entities within a network. SBGN PD focuses on the mechanistic description and temporal dependencies of biological interactions and transformations. The nodes (elements) are split into entity nodes describing, e.g., metabolites, proteins, genes and complexes, and process nodes describing, e.g., reactions and associations. The edges (connections) provide descriptions of relationships (or influences) between the nodes, such as consumption, production, stimulation and inhibition. Among all three languages of SBGN, PD is the closest to metabolic and regulatory pathways in biological literature and textbooks, but its well-defined semantics offer superior precision in expressing biological knowledge.

  15. Fast and accurate Voronoi density gridding from Lagrangian hydrodynamics data

    NASA Astrophysics Data System (ADS)

    Petkova, Maya A.; Laibe, Guillaume; Bonnell, Ian A.

    2018-01-01

    Voronoi grids have been successfully used to represent density structures of gas in astronomical hydrodynamics simulations. While some codes are explicitly built around using a Voronoi grid, others, such as Smoothed Particle Hydrodynamics (SPH), use particle-based representations and can benefit from constructing a Voronoi grid for post-processing their output. So far, calculating the density of each Voronoi cell from SPH data has been done numerically, which is both slow and potentially inaccurate. This paper proposes an alternative analytic method, which is fast and accurate. We derive an expression for the integral of a cubic spline kernel over the volume of a Voronoi cell and link it to the density of the cell. Mass conservation is ensured rigorously by the procedure. The method can be applied more broadly to integrate a spherically symmetric polynomial function over the volume of a random polyhedron.
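
    The analytic kernel-integral expression itself is given in the paper and is not reproduced here, but the bookkeeping it feeds can be sketched: given precomputed weights equal to the integral of each particle's kernel over each Voronoi cell (with the weights of each particle summing to one), the cell densities follow directly and total mass is conserved by construction. All numbers in the sketch are invented.

    ```python
    # Bookkeeping sketch only: w[i, j] stands for the integral of particle i's
    # (normalized) kernel over Voronoi cell j, so each row of w sums to one.
    # Cell density = (mass assigned to the cell) / (cell volume).
    import numpy as np

    def cell_densities(masses, weights, volumes):
        """masses: (Np,), weights: (Np, Ncell) kernel integrals, volumes: (Ncell,)."""
        cell_mass = weights.T @ masses     # mass assigned to each Voronoi cell
        return cell_mass / volumes

    # Tiny invented example: 3 particles, 2 cells.
    m = np.array([1.0, 2.0, 1.5])
    w = np.array([[0.7, 0.3],
                  [0.2, 0.8],
                  [0.5, 0.5]])             # rows sum to 1 -> exact mass conservation
    V = np.array([2.0, 3.0])
    rho = cell_densities(m, w, V)
    assert np.isclose((rho * V).sum(), m.sum())   # total mass check
    print(rho)
    ```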

  16. Recent developments and future trends in solid phase microextraction techniques towards green analytical chemistry.

    PubMed

    Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek

    2013-12-20

    Solid-phase microextraction techniques find increasing application in the sample preparation step before chromatographic determination of analytes in samples with a complex composition. These techniques allow for integrating several operations, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument and the isolation of analytes from the sample matrix. In this work, information is presented about novel methodological and instrumental solutions for different variants of solid-phase extraction techniques, namely solid-phase microextraction (SPME), stir bar sorptive extraction (SBSE) and magnetic solid-phase extraction (MSPE), including practical applications of these techniques and a critical discussion of their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention was paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia, polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, enable us to reduce the number of errors during sample preparation prior to chromatographic analysis as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Multimedia Analysis plus Visual Analytics = Multimedia Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinchor, Nancy; Thomas, James J.; Wong, Pak C.

    2010-10-01

    Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.

  18. Confined active Brownian particles: theoretical description of propulsion-induced accumulation

    NASA Astrophysics Data System (ADS)

    Das, Shibananda; Gompper, Gerhard; Winkler, Roland G.

    2018-01-01

    The stationary-state distribution function of confined active Brownian particles (ABPs) is analyzed by computer simulations and analytical calculations. We consider a radial harmonic as well as an anharmonic confinement potential. In the simulations, the ABP is propelled with a prescribed velocity along a body-fixed direction, which is changing in a diffusive manner. For the analytical approach, the Cartesian components of the propulsion velocity are assumed to change independently (the active Ornstein-Uhlenbeck particle, AOUP). This results in very different velocity distribution functions. The analytical solution of the Fokker-Planck equation for an AOUP in a harmonic potential is presented and a conditional distribution function is provided for the radial particle distribution at a given magnitude of the propulsion velocity. This conditional probability distribution facilitates the description of the coupling of the spatial coordinate and propulsion, which yields activity-induced accumulation of particles. For the anharmonic potential, a probability distribution function is derived within the unified colored noise approximation. The comparison of the simulation results with theoretical predictions yields good agreement for large rotational diffusion coefficients, e.g. due to tumbling, even for large propulsion velocities (Péclet numbers). However, we find significant deviations already for moderate Péclet numbers when the rotational diffusion coefficient is on the order of the thermal one.
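
    A minimal simulation of the ABP case described above might look like the sketch below, which integrates an overdamped active Brownian particle in a two-dimensional harmonic trap with an Euler-Maruyama scheme; the parameter values and the unit mobility are illustrative assumptions, not those used in the paper.

    ```python
    # Minimal 2D active Brownian particle in a harmonic trap (overdamped,
    # unit mobility), integrated with an Euler-Maruyama scheme.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, nsteps = 1e-3, 100_000
    k, v0 = 1.0, 5.0          # trap stiffness and propulsion speed
    Dt, Dr = 1.0, 1.0         # translational and rotational diffusion coefficients

    r = np.zeros(2)           # position
    phi = 0.0                 # orientation of the body-fixed propulsion direction
    radii = np.empty(nsteps)

    for i in range(nsteps):
        e = np.array([np.cos(phi), np.sin(phi)])
        r += (-k * r + v0 * e) * dt + np.sqrt(2 * Dt * dt) * rng.standard_normal(2)
        phi += np.sqrt(2 * Dr * dt) * rng.standard_normal()
        radii[i] = np.linalg.norm(r)

    # Propulsion-induced accumulation shows up as a radial distribution shifted
    # away from the trap centre relative to the passive (v0 = 0) case.
    print("mean radial distance:", radii.mean())
    ```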

  19. Voice Identification: Levels-of-Processing and the Relationship between Prior Description Accuracy and Recognition Accuracy.

    ERIC Educational Resources Information Center

    Walter, Todd J.

    A study examined whether a person's ability to accurately identify a voice is influenced by factors similar to those proposed by the Supreme Court for eyewitness identification accuracy. In particular, the Supreme Court has suggested that a person's prior description accuracy of a suspect, degree of attention to a suspect, and confidence in…

  20. A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.

    PubMed

    Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em

    2010-05-19

    Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  1. PB-AM: An open-source, fully analytical linear poisson-boltzmann solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felberg, Lisa E.; Brookes, David H.; Yap, Eng-Hui

    2016-11-02

    We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized Poisson-Boltzmann equation. The PB-AM software package includes the generation of output files appropriate for visualization using VMD, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators and students who are more familiar with the APBS framework.

  2. Digit-Length Ratios (2D:4D) as a Phenotypic Indicator of in Utero Androgen Exposure is Not Prognostic for Androgenic Alopecia: a Descriptive-Analytic Study of 1200 Iranian Men.

    PubMed

    Feily, Amir; Hosseinpoor, Masoomeh; Bakhti, Ali; Nekuyi, Mohamad; Sobhanian, Saeed; Fathinezhad, Zahra; Sahraei, Reza; Ramirez-Fort, Marigdalia K

    2016-06-15

    The etiology of androgenic alopecia (AGA) involves several factors, including genetics, androgens, age and nutrition. The digit-length ratio of the index and ring fingers (2D:4D) is an indicator of prenatal exposure to sex hormones. There is a paucity of studies that systematically review the possible positive predictive value of 2D:4D for the development of AGA. We performed a single-site, descriptive-analytical study in a racially homogeneous population. Our results revealed no significant association between right 2D:4D and AGA severity in the entire population (P=0.384, r=0.025); however, a positive correlation coefficient was identified in subjects above the age of 40. Based on the receiver operating characteristic curve analysis, 2D:4D does not predict the development of AGA. AGA is truly a multifactorial disease. Furthermore, our findings suggest that increased in utero androgen exposure does not predispose men to develop AGA.

  3. Approximate Analytical Solutions for Hypersonic Flow Over Slender Power Law Bodies

    NASA Technical Reports Server (NTRS)

    Mirels, Harold

    1959-01-01

    Approximate analytical solutions are presented for two-dimensional and axisymmetric hypersonic flow over slender power law bodies. Both zero-order (M → ∞) and first-order (small but nonvanishing values of 1/(MΔ)²) solutions are presented, where M is the free-stream Mach number and Δ is a characteristic slope. These solutions are compared with exact numerical integration of the equations of motion and appear to be accurate, particularly when the shock is relatively close to the body.

  4. Partially Coherent Scattering in Stellar Chromospheres. Part 4; Analytic Wing Approximations

    NASA Technical Reports Server (NTRS)

    Gayley, K. G.

    1993-01-01

    Simple analytic expressions are derived to understand resonance-line wings in stellar chromospheres and similar astrophysical plasmas. The results are approximate, but compare well with accurate numerical simulations. The redistribution is modeled using an extension of the partially coherent scattering approximation (PCS) which we term the comoving-frame partially coherent scattering approximation (CPCS). The distinction is made here because Doppler diffusion is included in the coherent/noncoherent decomposition, in a form slightly improved from the earlier papers in this series.

  5. Analytical model for describing ion guiding through capillaries in insulating polymers

    NASA Astrophysics Data System (ADS)

    Liu, Shi-Dong; Zhao, Yong-Tao; Wang, Yu-Yu; Stolterfoht, N.; Cheng, Rui; Zhou, Xian-Ming; Xu, Hu-Shan; Xiao, Guo-Qing

    2015-08-01

    An analytical description for guiding of ions through nanocapillaries is given on the basis of previous work. The current entering into the capillary is assumed to be divided into a current fraction transmitted through the capillary, a current fraction flowing away via the capillary conductivity and a current fraction remaining within the capillary, which is responsible for its charge-up. The discharging current is assumed to be governed by the Frenkel-Poole process. At higher conductivities the analytical model shows a blocking of the ion transmission, which is in agreement with recent simulations. Also, it is shown that ion blocking observed in experiments is well reproduced by the analytical formula. Furthermore, the asymptotic fraction of transmitted ions is determined. Apart from the key controlling parameter (charge-to-energy ratio), the ratio of the capillary conductivity to the incident current is included in the model. Differences resulting from the nonlinear and linear limits of the Frenkel-Poole discharge are pointed out. Project supported by the Major State Basic Research Development Program of China (Grant No. 2010CB832902) and the National Natural Science Foundation of China (Grant Nos. 11275241, 11275238, 11105192, and 11375034).
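
    The charge-balance picture described above can be caricatured by a simple rate equation in which the injected current splits into transmitted, conducted (discharging) and charging fractions. The functional forms and parameters in the sketch below are placeholders, not the authors' analytical expressions or the actual Frenkel-Poole conductivity; they are chosen only to show how a steady transmitted fraction, or blocking at high conductivity, can emerge.

    ```python
    # Schematic charge-balance sketch: deposited charge Q grows with the blocked
    # part of the injected current and decays through a field-enhanced discharge
    # term loosely mimicking Frenkel-Poole behaviour. All forms are placeholders.
    import math

    def simulate(I_in=1.0, Q_c=5.0, sigma0=0.05, beta=0.8, dt=0.01, nsteps=20000):
        Q = 0.0
        frac = 0.0
        for _ in range(nsteps):
            f_trans = min(Q / Q_c, 1.0)                          # transmission grows with charge-up
            I_trans = I_in * f_trans
            I_leak = sigma0 * Q * math.exp(beta * math.sqrt(Q))  # field-enhanced discharge
            Q += (I_in - I_trans - I_leak) * dt
            frac = I_trans / I_in
        return frac

    print("asymptotic transmitted fraction (low conductivity) :", round(simulate(sigma0=0.05), 2))
    print("asymptotic transmitted fraction (high conductivity):", round(simulate(sigma0=2.0), 2))
    ```

    Raising the conductivity-to-current ratio in this toy model suppresses the transmitted fraction, qualitatively mirroring the blocking behaviour noted in the abstract.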

  6. Analytical solution for wave propagation through a graded index interface between a right-handed and a left-handed material.

    PubMed

    Dalarsson, Mariana; Tassin, Philippe

    2009-04-13

    We have investigated the transmission and reflection properties of structures incorporating left-handed materials with graded index of refraction. We present an exact analytical solution to Helmholtz' equation for a graded index profile changing according to a hyperbolic tangent function along the propagation direction. We derive expressions for the field intensity along the graded index structure, and we show excellent agreement between the analytical solution and the corresponding results obtained by accurate numerical simulations. Our model straightforwardly allows for arbitrary spectral dispersion.

  7. Improved Analytical Potentials for the a ³Σu⁺ and X ¹Σg⁺ States of Cs₂

    NASA Astrophysics Data System (ADS)

    Baldwin, Jesse; Le Roy, Robert J.

    2012-06-01

    Recent studies of the collisional properties of ultracold Cs atoms have led to a renewed interest in the singlet and triplet ground-state potential energy functions of Cs₂. Coxon and Hajigeorgiou recently determined an analytic potential function for the X ¹Σg⁺ state that accurately reproduces a large body of spectroscopic data spanning 99.45% of the potential well. However, their potential explicitly incorporates only the three leading inverse-power terms in the long-range potential, and does not distinguish between the three asymptotes associated with the different Cs atom spin states. Similarly, Xie et al. have reported two versions of an analytic potential energy function for the a ³Σu⁺ state, determined from direct potential fits to emission data spanning 93% of its potential energy well. However, the tail of their potential function model was not constrained to have the inverse-power-sum form required by theory. Moreover, a physically correct description of cold-atom collision phenomena requires the long-range inverse-power tails of these two potentials to be identical, and they are not. Thus, these functions cannot be expected to describe cold-atom collision properties correctly. The present paper describes our efforts to determine improved analytic potential energy functions for these states that have identical long-range tails and fully represent all of the spectroscopic data used in the earlier work, as well as photoassociation data that was not considered there and experimental values of the collisional scattering lengths for the two states. References: J. A. Coxon and P. Hajigeorgiou, J. Chem. Phys. 132, 09105 (2010); F. Xie et al., J. Chem. Phys. 130, 051102 (2009); F. Xie et al., J. Chem. Phys. 135, 024303 (2011); J. G. Danzl et al., Science 321, 1062 (2008); C. Chin et al., Phys. Rev. Lett. 85, 2717 (2000); P. J. Leo, C. J. Williams, and P. S. Julienne, Phys. Rev. Lett. 85, 2721 (2000).

  8. Smartphone-based portable wireless optical system for the detection of target analytes.

    PubMed

    Gautam, Shreedhar; Batule, Bhagwan S; Kim, Hyo Yong; Park, Ki Soo; Park, Hyun Gyu

    2017-02-01

    Rapid and accurate on-site wireless measurement of hazardous molecules or biomarkers is one of the biggest challenges in nanobiotechnology. A novel smartphone-based Portable and Wireless Optical System (PAWS) for rapid, quantitative, and on-site analysis of target analytes is described. As a proof-of-concept, we employed gold nanoparticles (GNP) and an enzyme, horse radish peroxidase (HRP), to generate colorimetric signals in response to two model target molecules, melamine and hydrogen peroxide, respectively. The colorimetric signal produced by the presence of the target molecules is converted to an electrical signal by the inbuilt electronic circuit of the device. The converted electrical signal is then measured wirelessly via a multimeter in the smartphone, which processes the data and displays the results, including the concentration of the analytes and its significance. This handheld device has great potential as a programmable and miniaturized platform to achieve rapid and on-site detection of various analytes in a point-of-care testing (POCT) manner. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Sample Preparation of Corn Seed Tissue to Prevent Analyte Relocations for Mass Spectrometry Imaging

    NASA Astrophysics Data System (ADS)

    Kim, Shin Hye; Kim, Jeongkwon; Lee, Young Jin; Lee, Tae Geol; Yoon, Sohee

    2017-08-01

    Corn seed tissue sections were prepared by the tape support method using an adhesive tape, and mass spectrometry imaging (MSI) was performed. The effect of heat generated during sample preparation was investigated by time-of-flight secondary ion mass spectrometry (TOF-SIMS) imaging of corn seed tissue prepared by the tape support and the thaw-mounted methods. Unlike thaw-mounted sample preparation, the tape support method does not cause imaging distortion, because no heat is generated that could cause migration of the analytes on the sample. By applying the tape support method, the corn seed tissue was prepared without structural damage, and MSI with accurate spatial information of analytes was successfully performed.

  10. Calcium ions in aqueous solutions: Accurate force field description aided by ab initio molecular dynamics and neutron scattering

    NASA Astrophysics Data System (ADS)

    Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel

    2018-06-01

    We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.
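
    The charge-rescaling step can be illustrated with the standard electronic-continuum-correction arithmetic, in which ionic charges are scaled by 1/sqrt(εel), with εel ≈ 1.78 the electronic (high-frequency) dielectric constant of water; the exact scaled charge adopted for Ca²⁺ in the refined force field is not reproduced here, so the numbers below are illustrative.

    ```python
    # Electronic-continuum-correction style charge rescaling: scale formal ionic
    # charges by 1/sqrt(eps_el), eps_el being the electronic (high-frequency)
    # dielectric constant of water (~1.78). Values shown are illustrative only.
    import math

    eps_el = 1.78
    scale = 1.0 / math.sqrt(eps_el)        # ~0.75
    for name, q in [("Ca2+", 2.0), ("Cl-", -1.0)]:
        print(f"{name}: formal charge {q:+.1f} e -> scaled charge {q * scale:+.2f} e")
    ```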

  11. User's guide and description of the streamline divergence computer program. [turbulent convective heat transfer

    NASA Technical Reports Server (NTRS)

    Sulyma, P. R.; Mcanally, J. V.

    1975-01-01

    The streamline divergence program was developed to demonstrate the capability to trace inviscid surface streamlines and to calculate outflow-corrected laminar and turbulent convective heating rates on surfaces subjected to exhaust plume impingement. The analytical techniques used in formulating this program are discussed. A brief description of the streamline divergence program is given along with a user's guide. The program input and output for a sample case are also presented.

  12. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
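
    The two methods named above are not reproduced here, but the simpler building block they refine, a one-sided exponential (Zwanzig) reweighting from an MM to a QM/MM Hamiltonian, can be sketched as follows; the energy differences are invented stand-ins for single-point QM/MM-minus-MM energies evaluated on MM-sampled frames.

    ```python
    # One-sided exponential (Zwanzig) reweighting estimate of the MM -> QM/MM
    # free energy correction, A_QM/MM - A_MM = -kT ln < exp(-dU/kT) >_MM,
    # computed in log-sum-exp form for numerical stability. Energies are invented.
    import numpy as np

    kT = 0.593                      # kcal/mol at ~298 K
    rng = np.random.default_rng(1)

    # dU[i] = U_QM/MM(frame i) - U_MM(frame i) for frames sampled with the MM potential
    dU = rng.normal(loc=3.0, scale=0.8, size=500)

    dA = -kT * (np.logaddexp.reduce(-dU / kT) - np.log(len(dU)))
    print(f"MM -> QM/MM free energy correction: {dA:.2f} kcal/mol")
    ```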

  13. Let's Talk... Analytics

    ERIC Educational Resources Information Center

    Oblinger, Diana G.

    2012-01-01

    Talk about analytics seems to be everywhere. Everyone is talking about analytics. Yet even with all the talk, many in higher education have questions about--and objections to--using analytics in colleges and universities. In this article, the author explores the use of analytics in, and all around, higher education. (Contains 1 note.)

  14. Analytics for Education

    ERIC Educational Resources Information Center

    MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin

    2014-01-01

    This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…

  15. The "Journal of Learning Analytics": Supporting and Promoting Learning Analytics Research

    ERIC Educational Resources Information Center

    Siemens, George

    2014-01-01

    The paper gives a brief overview of the main activities for the development of the emerging field of learning analytics led by the Society for Learning Analytics Research (SoLAR). The place of the "Journal of Learning Analytics" is identified. Analytics is the most significant new initiative of SoLAR.

  16. Numerical Test of Analytical Theories for Perpendicular Diffusion in Small Kubo Number Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heusen, M.; Shalchi, A., E-mail: husseinm@myumanitoba.ca, E-mail: andreasm4@yahoo.com

    In the literature, one can find various analytical theories for perpendicular diffusion of energetic particles interacting with magnetic turbulence. Besides quasi-linear theory, there are different versions of the nonlinear guiding center (NLGC) theory and the unified nonlinear transport (UNLT) theory. For turbulence with high Kubo numbers, such as two-dimensional turbulence or noisy reduced magnetohydrodynamic turbulence, the aforementioned nonlinear theories provide similar results. For slab and small Kubo number turbulence, however, this is not the case. In the current paper, we compare different linear and nonlinear theories with each other and with test-particle simulations for a noisy slab model corresponding to small Kubo number turbulence. We show that UNLT theory agrees very well with all performed test-particle simulations. In the limit of long parallel mean free paths, the perpendicular mean free path approaches asymptotically the quasi-linear limit, as predicted by the UNLT theory. For short parallel mean free paths we find a Rechester and Rosenbluth type of scaling, as predicted by UNLT theory as well. The original NLGC theory disagrees with all performed simulations regardless of the parallel mean free path. The random ballistic interpretation of the NLGC theory agrees much better with the simulations, but compared to UNLT theory the agreement is inferior. We conclude that for this type of small Kubo number turbulence, only the latter theory allows for an accurate description of perpendicular diffusion.

  17. An Economical Semi-Analytical Orbit Theory for Retarded Satellite Motion About an Oblate Planet

    NASA Technical Reports Server (NTRS)

    Gordon, R. A.

    1980-01-01

    Brouwer's and Brouwer-Lyddane's use of the Von Zeipel-Delaunay method is employed to develop an efficient analytical orbit theory suitable for microcomputers. A succinctly simple, pseudo-phenomenologically conceptualized algorithm is introduced which accurately and economically synthesizes the modeling of drag effects. The method epitomizes and manifests effortless, efficient computer mechanization. Simulated trajectory data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects for microcomputer-based ground or onboard predicted orbital representation. Real tracking data are used to demonstrate that the theory's orbit determination and orbit prediction capabilities are favorably adaptable to, and comparable with, results obtained utilizing complex definitive Cowell method solutions on satellites experiencing significant drag effects.

  18. Targeted analyte detection by standard addition improves detection limits in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Toghi Eshghi, Shadi; Li, Xingde; Zhang, Hui

    2012-09-18

    Matrix-assisted laser desorption/ionization (MALDI) has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications.
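
    The TAD idea can be shown with a toy signal model: an endogenous level below the noise-limited detection threshold becomes detectable once a known spike, on the order of the original LOD, is added. The threshold, signal values and spike size below are invented for illustration only.

    ```python
    # Toy illustration of targeted analyte detection (TAD): spiking a known amount
    # of the analyte lifts the combined signal above the noise-limited threshold.
    endogenous = 0.4          # arbitrary signal units, below the threshold
    noise_threshold = 1.0     # detection threshold set by chemical noise
    spike = 1.0               # exogenous addition, on the order of the original LOD

    def detected(signal, threshold=noise_threshold):
        return signal > threshold

    print("without spike:", detected(endogenous))            # False
    print("with spike   :", detected(endogenous + spike))    # True
    # Comparing against a spiked blank (spike alone) indicates whether the excess
    # signal is attributable to the endogenous analyte.
    print("excess over spiked blank:", (endogenous + spike) - spike)
    ```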

  19. A simple analytical model for signal amplification by reversible exchange (SABRE) process.

    PubMed

    Barskiy, Danila A; Pravdivtsev, Andrey N; Ivanov, Konstantin L; Kovtunov, Kirill V; Koptyug, Igor V

    2016-01-07

    We demonstrate an analytical model for the description of the signal amplification by reversible exchange (SABRE) process. The model relies on a combined analysis of chemical kinetics and the evolution of the nuclear spin system during the hyperpolarization process. The presented model for the first time provides rationale for deciding which system parameters (i.e. J-couplings, relaxation rates, reaction rate constants) have to be optimized in order to achieve higher signal enhancement for a substrate of interest in SABRE experiments.

  20. An analytical model for scanning electron microscope Type I magnetic contrast with energy filtering

    NASA Astrophysics Data System (ADS)

    Chim, W. K.

    1994-02-01

    In this article, a theoretical model for type I magnetic contrast calculations in the scanning electron microscope with energy filtering is presented. This model uses an approximate form of the secondary electron (SE) energy distribution by Chung and Everhart [M. S. Chung and T. E. Everhart, J. Appl. Phys. 45, 707 (1974)]. Closed-form analytical expressions for the contrast and quality factors, which take into consideration the work function and field-distance integral of the material being studied, are obtained. This analytical model is compared with that of a more accurate numerical model. Results showed that the contrast and quality factors for the analytical model differed by not more than 20% from the numerical model, with the actual difference depending on the range of filtered SE energies considered. This model has also been extended to the situation of a two-detector (i.e., detector A and B) configuration, in which enhanced magnetic contrast and quality factor can be obtained by operating in the "A-B" mode.
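
    For reference, the Chung-Everhart form used here is approximately N(E) ∝ E/(E + W)⁴, with W the work function; the sketch below evaluates this distribution and the fraction of secondary electrons passed by a hypothetical energy-filter window, using illustrative values for W and the filter limits.

    ```python
    # Chung-Everhart secondary-electron energy spectrum, N(E) ~ E / (E + W)^4,
    # and the fraction of secondaries accepted by an energy filter [E_lo, E_hi].
    # The work function and filter window are illustrative values only.
    import numpy as np

    def chung_everhart(E, W):
        return E / (E + W) ** 4

    W = 4.5                                   # eV, illustrative work function
    E = np.linspace(0.0, 50.0, 50001)         # eV, uniform grid
    N = chung_everhart(E, W)

    def filtered_fraction(E_lo, E_hi):
        window = (E >= E_lo) & (E <= E_hi)
        return N[window].sum() / N.sum()      # ratio of sums on a uniform grid

    print("peak of N(E) at E = W/3 =", W / 3, "eV")
    print("fraction passed by a 0-10 eV filter:", round(filtered_fraction(0.0, 10.0), 3))
    ```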

  1. Influence of Pre-Analytical Factors on Thymus- and Activation-Regulated Chemokine Quantitation in Plasma

    PubMed Central

    Zhao, Xuemei; Delgado, Liliana; Weiner, Russell; Laterza, Omar F.

    2015-01-01

    Thymus- and activation-regulated chemokine (TARC) in serum/plasma associates with the disease activity of atopic dermatitis (AD), and is a promising tool for assessing the response to the treatment of the disease. TARC also exists within platelets, with elevated levels detectable in AD patients. We examined the effects of pre-analytical factors on the quantitation of TARC in human EDTA plasma. TARC levels in platelet-free plasma were significantly lower than those in platelet-containing plasma. After freeze-thaw, TARC levels increased in platelet-containing plasma, but remained unchanged in platelet-free plasma, suggesting TARC was released from the platelets during the freeze-thaw process. In contrast, TARC levels were stable in serum independent of freeze-thaw. These findings underscore the importance of pre-analytical factors to TARC quantitation. Plasma TARC levels should be measured in platelet-free plasma for accurate quantitation. Pre-analytical factors influence the quantitation, interpretation, and implementation of circulating TARC as a biomarker for the development of AD therapeutics. PMID:28936246

  2. Analytical halo model of galactic conformity

    NASA Astrophysics Data System (ADS)

    Pahwa, Isha; Paranjape, Aseem

    2017-09-01

    We present a fully analytical halo model of colour-dependent clustering that incorporates the effects of galactic conformity in a halo occupation distribution framework. The model, based on our previous numerical work, describes conformity through a correlation between the colour of a galaxy and the concentration of its parent halo, leading to a correlation between central and satellite galaxy colours at fixed halo mass. The strength of the correlation is set by a tunable 'group quenching efficiency', and the model can separately describe group-level correlations between galaxy colour (1-halo conformity) and large-scale correlations induced by assembly bias (2-halo conformity). We validate our analytical results using clustering measurements in mock galaxy catalogues, finding that the model is accurate at the 10-20 per cent level for a wide range of luminosities and length-scales. We apply the formalism to interpret the colour-dependent clustering of galaxies in the Sloan Digital Sky Survey (SDSS). We find good overall agreement between the data and a model that has 1-halo conformity at a level consistent with previous results based on an SDSS group catalogue, although the clustering data require satellites to be redder than suggested by the group catalogue. Within our modelling uncertainties, however, we do not find strong evidence of 2-halo conformity driven by assembly bias in SDSS clustering.
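
    The conformity model itself is not reproduced here, but the halo occupation distribution scaffolding that such models build on can be sketched with the standard central and satellite occupation forms (an error-function central term and a power-law satellite term); the parameter values below are illustrative only.

    ```python
    # Standard HOD-style occupation functions (illustrative parameters), of the
    # kind a halo model of clustering is built on; this is not the paper's
    # colour- or concentration-dependent extension.
    import math

    def N_central(M, logMmin=12.0, sigma_logM=0.3):
        return 0.5 * (1.0 + math.erf((math.log10(M) - logMmin) / sigma_logM))

    def N_satellite(M, M0=10**12.2, M1=10**13.3, alpha=1.0):
        return N_central(M) * (max(M - M0, 0.0) / M1) ** alpha

    for logM in (11.5, 12.0, 12.5, 13.5, 14.5):
        M = 10.0 ** logM
        print(f"log10 M = {logM}: <Ncen> = {N_central(M):.2f}, <Nsat> = {N_satellite(M):.2f}")
    ```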

  3. Nuclear analytical techniques in medicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cesareo, R.

    1988-01-01

    This book acquaints one with the fundamental principles and the instrumentation relevant to analytical techniques based on atomic and nuclear physics, as well as present and future biomedical applications. Besides providing a theoretical description of the physical phenomena, a large part of the book is devoted to applications in the medical and biological field, particularly in hematology, forensic medicine and environmental science. This volume reviews methods such as the possibility of carrying out rapid multi-element analysis of trace elements on biomedical samples, in vitro and in vivo, by XRF analysis; the ability of the PIXE microprobe to analyze in detail and to map trace elements in fragments of biomedical samples or inside the cells; and the potentiality of in vivo nuclear activation analysis for diagnostic purposes. Finally, techniques are described such as radiation scattering (elastic and inelastic scattering) and attenuation measurements, which will undoubtedly see great development in the immediate future.

  4. Development of a standardized job description for healthcare managers of metabolic syndrome management programs in Korean community health centers.

    PubMed

    Lee, Youngjin; Choo, Jina; Cho, Jeonghyun; Kim, So-Nam; Lee, Hye-Eun; Yoon, Seok-Jun; Seomun, GyeongAe

    2014-03-01

    This study aimed to develop a job description for healthcare managers of metabolic syndrome management programs using task analysis. Exploratory research was performed by using the Developing a Curriculum method, the Intervention Wheel model, and focus group discussions. Subsequently, we conducted a survey of 215 healthcare workers from 25 community health centers to verify that the job description we created was accurate. We defined the role of healthcare managers. Next, we elucidated the tasks of healthcare managers and performed needs analysis to examine the frequency, importance, and difficulty of each of their duties. Finally, we verified that our job description was accurate. Based on the 8 duties, 30 tasks, and 44 task elements assigned to healthcare managers, we found that the healthcare managers functioned both as team coordinators responsible for providing multidisciplinary health services and nurse specialists providing health promotion services. In terms of importance and difficulty of tasks performed by the healthcare managers, which were measured using a determinant coefficient, the highest-ranked task was planning social marketing (15.4), while the lowest-ranked task was managing human resources (9.9). A job description for healthcare managers may provide basic data essential for the development of a job training program for healthcare managers working in community health promotion programs. Copyright © 2014. Published by Elsevier B.V.

  5. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    PubMed

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring has been proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for the validity of the pinhole ring diffraction model. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.

  6. QUESP and QUEST revisited - fast and accurate quantitative CEST experiments.

    PubMed

    Zaiss, Moritz; Angelovski, Goran; Demetriou, Eleni; McMahon, Michael T; Golay, Xavier; Scheffler, Klaus

    2018-03-01

    Chemical exchange saturation transfer (CEST) NMR or MRI experiments allow detection of low-concentration molecules with enhanced sensitivity via their proton exchange with the abundant water pool. Be it endogenous metabolites or exogenous contrast agents, an exact quantification of the actual exchange rate is required to design optimal pulse sequences and/or specific sensitive agents. Refined analytical expressions allow deeper insight and improvement of accuracy for common quantification techniques. The accuracy of standard quantification methodologies, such as quantification of exchange rate using varying saturation power or varying saturation time, is improved especially for the case of nonequilibrium initial conditions and weak labeling conditions, meaning the saturation amplitude is smaller than the exchange rate (γB₁ < k). The improved analytical 'quantification of exchange rate using varying saturation power/time' (QUESP/QUEST) equations allow for more accurate exchange rate determination, and provide clear insights on the general principles to execute the experiments and to perform numerical evaluation. The proposed methodology was evaluated on the large-shift regime of paramagnetic chemical-exchange-saturation-transfer agents using simulated data and data of the paramagnetic Eu(III) complex of DOTA-tetraglycineamide. The refined formulas yield improved exchange rate estimation. General convergence intervals of the methods that would apply for smaller shift agents are also discussed. Magn Reson Med 79:1708-1721, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  7. A Strategy for Incorporating Learning Analytics into the Design and Evaluation of a K-12 Science Curriculum

    ERIC Educational Resources Information Center

    Monroy, Carlos; Rangel, Virginia Snodgrass; Whitaker, Reid

    2014-01-01

    In this paper, we discuss a scalable approach for integrating learning analytics into an online K-12 science curriculum. A description of the curriculum and the underlying pedagogical framework is followed by a discussion of the challenges to be tackled as part of this integration. We include examples of data visualization based on teacher usage…

  8. Analytical model of diffuse reflectance spectrum of skin tissue

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.; Kugeiko, M. M.; Firago, V. A.; Sobchuk, A. N.

    2014-01-01

    We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions.

  9. Accurate determination of selected pesticides in soya beans by liquid chromatography coupled to isotope dilution mass spectrometry.

    PubMed

    Huertas Pérez, J F; Sejerøe-Olsen, B; Fernández Alba, A R; Schimmel, H; Dabrio, M

    2015-05-01

    A sensitive, accurate and simple liquid chromatography coupled with mass spectrometry method for the determination of 10 selected pesticides in soya beans has been developed and validated. The method is intended for use during the characterization of selected pesticides in a reference material. In this process, high accuracy and appropriate uncertainty levels associated with the analytical measurements are of utmost importance. The analytical procedure is based on sample extraction by the use of a modified QuEChERS (quick, easy, cheap, effective, rugged, safe) extraction and subsequent clean-up of the extract with C18, PSA and Florisil. Analytes were separated on a C18 column using gradient elution with a water-methanol/2.5 mM ammonium acetate mobile phase, and finally identified and quantified by triple quadrupole mass spectrometry in the multiple reaction monitoring mode (MRM). Reliable and accurate quantification of the analytes was achieved by means of stable isotope-labelled analogues employed as internal standards (IS) and calibration with pure substance solutions containing both the isotopically labelled and native compounds. Exceptions were made for thiodicarb and malaoxon, where the isotopically labelled congeners were not commercially available at the time of analysis. For the quantification of those compounds methomyl-¹³C₂,¹⁵N and malathion-D10 were used, respectively. The method was validated according to the general principles covered by DG SANCO guidelines. However, validation criteria were set more stringently. Mean recoveries were in the range of 86-103% with RSDs lower than 8.1%. Repeatability and intermediate precision were in the range of 3.9-7.6% and 1.9-8.7%, respectively. LODs were theoretically estimated and experimentally confirmed to be in the range 0.001-0.005 mg kg⁻¹ in the matrix, while LOQs, established as the lowest spiking mass fraction level, were in the range 0.01-0.05 mg kg⁻¹. The method reliably identifies and quantifies the selected pesticides in soya beans.
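
    The internal-standard quantification step can be illustrated with a minimal sketch: the analyte-to-IS peak-area ratio is calibrated against the concentration ratio using the isotopically labelled analogue, and a sample spiked with the same IS level is back-calculated from that calibration. All concentrations and peak areas below are invented.

    ```python
    # Minimal isotope-dilution / internal-standard quantification sketch:
    # calibrate area ratio vs. concentration ratio, then back-calculate a sample.
    import numpy as np

    c_IS = 10.0                                            # ng/mL of labelled internal standard
    c_cal = np.array([0.5, 1.0, 5.0, 25.0, 125.0])         # analyte calibration levels, ng/mL
    area_ratio_cal = np.array([0.049, 0.101, 0.498, 2.51, 12.4])   # analyte/IS peak-area ratios

    slope, intercept = np.polyfit(c_cal / c_IS, area_ratio_cal, 1)

    sample_area_ratio = 0.76
    c_sample = (sample_area_ratio - intercept) / slope * c_IS
    print(f"estimated analyte concentration: {c_sample:.2f} ng/mL")
    ```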

  10. An UPLC-MS/MS method for separation and accurate quantification of tamoxifen and its metabolites isomers.

    PubMed

    Arellano, Cécile; Allal, Ben; Goubaa, Anwar; Roché, Henri; Chatelut, Etienne

    2014-11-01

    A selective and accurate analytical method is needed to quantify tamoxifen and its phase I metabolites in a prospective clinical protocol, for evaluation of the pharmacokinetic parameters of tamoxifen and its metabolites in adjuvant treatment of breast cancer. The selectivity of the analytical method is a fundamental criterion to allow the quantification of the (Z)-isomers of the main active metabolites separately from the (Z)'-isomers. An UPLC-MS/MS method was developed and validated for the quantification of (Z)-tamoxifen, (Z)-endoxifen, (E)-endoxifen, Z'-endoxifen, (Z)'-endoxifen, (Z)-4-hydroxytamoxifen, (Z)-4'-hydroxytamoxifen, N-desmethyl tamoxifen, and tamoxifen-N-oxide. The validation range was set between 0.5 ng/mL and 125 ng/mL for the 4-hydroxytamoxifen and endoxifen isomers, and between 12.5 ng/mL and 300 ng/mL for tamoxifen, N-desmethyl tamoxifen and tamoxifen-N-oxide. The application to patient plasma samples was performed. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Feasibility model of a high reliability five-year tape transport. Volume 3: Appendices. [detailed drawing and analytical tools used in analyses

    NASA Technical Reports Server (NTRS)

    Meyers, A. P.; Davidson, W. A.; Gortowski, R. C.

    1973-01-01

    Detailed drawings of the five year tape transport are presented. Analytical tools used in the various analyses are described. These analyses include: tape guidance, tape stress over crowned rollers, tape pack stress program, response (computer) program, and control system electronics description.

  12. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Jianwei; Remsing, Richard C.; Zhang, Yubo

    2016-06-13

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  13. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional.

    PubMed

    Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  14. Spherical indentation of a freestanding circular membrane revisited: Analytical solutions and experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Congrui; Davoodabadi, Ali; Li, Jianlin

    Because of the development of novel micro-fabrication techniques to produce ultra-thin materials and increasing interest in thin biological membranes, in recent years the mechanical characterization of thin films has received a significant amount of attention. To provide a more accurate solution for the relationship among contact radius, load and deflection, the fundamental and widely applicable problem of spherical indentation of a freestanding circular membrane has been revisited. The work presented here significantly extends the previous contributions by providing an exact analytical solution to the governing equations of a Föppl–Hencky membrane indented by a frictionless spherical indenter. In this study, experiments of spherical indentation have been performed, and the exact analytical solution presented in this article is compared against experimental data from existing literature as well as our own experimental results.

  15. Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, T.E.

    1996-01-01

    The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.

  16. Spherical indentation of a freestanding circular membrane revisited: Analytical solutions and experiments

    DOE PAGES

    Jin, Congrui; Davoodabadi, Ali; Li, Jianlin; ...

    2017-01-11

    Because of the development of novel micro-fabrication techniques to produce ultra-thin materials and increasing interest in thin biological membranes, in recent years, the mechanical characterization of thin films has received a significant amount of attention. To provide a more accurate solution for the relationship among contact radius, load and deflection, the fundamental and widely applicable problem of spherical indentation of a freestanding circular membrane has been revisited. The work presented here significantly extends the previous contributions by providing an exact analytical solution to the governing equations of a Föppl–Hencky membrane indented by a frictionless spherical indenter. In this study, experiments of spherical indentation have been performed, and the exact analytical solution presented in this article is compared against experimental data from existing literature as well as our own experimental results.

  17. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Xia, Changliang; Yan, Yan; Geng, Qiang; Shi, Tingna

    2017-08-01

    Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of the multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines with any size and pole/slot number combination.

  18. xQuake: A Modern Approach to Seismic Network Analytics

    NASA Astrophysics Data System (ADS)

    Johnson, C. E.; Aikin, K. E.

    2017-12-01

    While seismic networks have expanded over the past few decades, and social needs for accurate and timely information have increased dramatically, approaches to the operational needs of both global and regional seismic observatories have been slow to adopt new technologies. This presentation describes the xQuake system, which provides a fresh approach to seismic network analytics based on complexity theory and an adaptive architecture of streaming connected microservices as diverse data (picks, beams, and other data) flow into a final, curated catalog of events. The foundation for xQuake is the xGraph (executable graph) framework that is essentially a self-organizing graph database. An xGraph instance provides both the analytics as well as the data storage capabilities at the same time. Much of the analytics, such as synthetic annealing in the detection process and an evolutionary programming approach for event evolution, draws from the recent GLASS 3.0 seismic associator developed by and for the USGS National Earthquake Information Center (NEIC). In some respects xQuake is reminiscent of the Earthworm system, in that it comprises processes interacting through store and forward rings; not surprising, as the first author was the lead architect of the original Earthworm project when it was known as "Rings and Things". While Earthworm components can easily be integrated into the xGraph processing framework, the architecture and analytics are more current (e.g. using a Kafka broker for store and forward rings). The xQuake system is being released under an unrestricted open source license to encourage and enable seismic community support in further development of its capabilities.

  19. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and pharmaceutical analytical technology (PAT).

  20. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDA) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723

  1. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  2. Quantitative 1H NMR: Development and Potential of an Analytical Method – an Update

    PubMed Central

    Pauli, Guido F.; Gödecke, Tanja; Jaki, Birgit U.; Lankin, David C.

    2012-01-01

    Covering the literature from mid-2004 until the end of 2011, this review continues a previous literature overview on quantitative 1H NMR (qHNMR) methodology and its applications in the analysis of natural products (NPs). Among the foremost advantages of qHNMR are its accurate function with external calibration, the lack of any requirement for identical reference materials, a high precision and accuracy when properly validated, and an ability to quantitate multiple analytes simultaneously. As a result of the inclusion of over 170 new references, this updated review summarizes a wealth of detailed experiential evidence and newly developed methodology that supports qHNMR as a valuable and unbiased analytical tool for natural product and other areas of research. PMID:22482996
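    As a concrete illustration of the quantitation principle that underlies qHNMR (not a formula taken from the review itself), the concentration of an analyte follows from the ratio of its signal integral to that of a calibrant of known concentration, scaled by the number of protons behind each signal. The sketch below assumes a single calibrant whose signal areas are comparable to the analyte's (same spectrum or external calibration); all numbers are hypothetical.

```python
# Minimal sketch of the standard qHNMR quantitation relation; parameter names
# and numbers are illustrative assumptions, not taken from the review.

def qhnmr_concentration(integral_analyte, integral_cal,
                        protons_analyte, protons_cal,
                        conc_cal_mM):
    """Concentration (mM) of an analyte from integrated qHNMR signals.

    integral_* : integrated peak areas (comparable between analyte and calibrant)
    protons_*  : number of protons contributing to each integrated signal
    conc_cal_mM: known calibrant concentration in mM
    """
    return (integral_analyte / integral_cal) * (protons_cal / protons_analyte) * conc_cal_mM


# Hypothetical case: a 3-proton analyte signal integrating to 1.85 against a
# 9-proton calibrant signal integrating to 3.00 at 2.0 mM.
print(qhnmr_concentration(1.85, 3.00, 3, 9, 2.0))  # ~3.7 mM
```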

  3. Analytic halo approach to the bispectrum of galaxies in redshift space

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuhiro; Nan, Yue; Hikage, Chiaki

    2017-02-01

    We present an analytic formula for the galaxy bispectrum in redshift space on the basis of the halo approach description with the halo occupation distribution of central galaxies and satellite galaxies. This work is an extension of a previous work on the galaxy power spectrum, which illuminated the significant contribution of satellite galaxies to the higher multipole spectrum through the nonlinear redshift space distortions of their random motions. Behaviors of the multipoles of the bispectrum are compared with results of numerical simulations assuming a halo occupation distribution of the low-redshift (LOWZ) sample of the Sloan Digital Sky Survey (SDSS) III Baryon Oscillation Spectroscopic Survey (BOSS). Also presented are analytic approximate formulas for the multipoles of the bispectrum, which are useful for understanding their characteristic properties. We demonstrate that the Fingers of God effect is quite important for the higher multipoles of the bispectrum in redshift space, depending on the halo occupation distribution parameters.

  4. Evaluation of one dimensional analytical models for vegetation canopies

    NASA Technical Reports Server (NTRS)

    Goel, Narendra S.; Kuusk, Andres

    1992-01-01

    The SAIL model for one-dimensional homogeneous vegetation canopies has been modified to include the specular reflectance and hot spot effects. This modified model and the Nilson-Kuusk model are evaluated by comparing the reflectances given by them against those given by a radiosity-based computer model, Diana, for a set of canopies, characterized by different leaf area index (LAI) and leaf angle distribution (LAD). It is shown that for homogeneous canopies, the analytical models are generally quite accurate in the visible region, but not in the infrared region. For architecturally realistic heterogeneous canopies of the type found in nature, these models fall short. These shortcomings are quantified.

  5. HPAEC-PAD for oligosaccharide analysis-novel insights into analyte sensitivity and response stability.

    PubMed

    Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra

    2017-12-01

    The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates or as ingredients in food, such as in beverages and infant milk products, demands the availability of tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has been developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study which gives an overview of the separation of 38 analytes as well as enzymatic hydrolyzates of six different polysaccharides, focusing on oligosaccharides. The high sensitivity of the PAD comes at the cost of its stability due to recession of the gold electrode. By an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect over time. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach which takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract: HPAEC-PAD analysis of oligosaccharides and determination of PAD response drop leading to an improved data normalization.
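    The abstract describes an analyte-specific one-phase decay model for the PAD response drop; a generic way to realize such a model is to fit R(t) = plateau + (R0 − plateau)·exp(−k·t) to response factors measured over run time and use the fit for normalization. The sketch below follows that generic form; the functional expression, parameter names and data are assumptions, not the authors' exact implementation.

```python
# Sketch of an analyte-specific one-phase decay fit for the PAD response drop,
# used to normalize later measurements back to the initial response.
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, r0, plateau, k):
    return plateau + (r0 - plateau) * np.exp(-k * t)

# Hypothetical response factors of one oligosaccharide over run time (hours).
t_hours = np.array([0, 12, 24, 48, 72, 96], dtype=float)
response = np.array([1.00, 0.93, 0.88, 0.81, 0.77, 0.75])

popt, _ = curve_fit(one_phase_decay, t_hours, response, p0=(1.0, 0.7, 0.05))
r0, plateau, k = popt

def normalize(raw, t):
    # Rescale a measurement taken at time t to the t = 0 response level.
    return raw * one_phase_decay(0.0, r0, plateau, k) / one_phase_decay(t, r0, plateau, k)

print(popt, normalize(0.80, 60.0))
```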

  6. Accurate analytical periodic solution of the elliptical Kepler equation using the Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Alshaery, Aisha; Ebaid, Abdelhalim

    2017-11-01

    Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrated a rapid convergence of the obtained approximate solutions, which are displayed in tables and graphs. Also, it has been shown in this paper that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter and nine decimal places at other values. Moreover, the absolute error approaches zero using the nine-term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
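    For orientation, Kepler's equation relates the mean anomaly M to the eccentric anomaly E through M = E − e·sin(E). The sketch below solves it with a plain Newton iteration — deliberately not the Adomian decomposition of the paper — so that any approximate series solution can be checked against a reference value; the orbital constants are rounded textbook figures, so the printed distances only roughly match the perihelion and aphelion values quoted above.

```python
# Baseline solver for Kepler's equation M = E - e*sin(E) via Newton's method.
# This is NOT the Adomian decomposition approach of the paper; it is a simple
# reference against which series approximations can be compared.
import math

def kepler_E(M, e, tol=1e-12, max_iter=50):
    """Eccentric anomaly E (rad) for mean anomaly M (rad), eccentricity e < 1."""
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M
        E -= f / (1.0 - e * math.cos(E))
        if abs(f) < tol:
            break
    return E

# Rounded Earth-orbit values; radial distance r = a * (1 - e*cos(E)).
e, a_km = 0.0167, 149.6e6
for M in (0.0, math.pi / 2, math.pi):
    E = kepler_E(M, e)
    print(M, E, a_km * (1 - e * math.cos(E)))
```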

  7. Propellant Chemistry for CFD Applications

    NASA Technical Reports Server (NTRS)

    Farmer, R. C.; Anderson, P. G.; Cheng, Gary C.

    1996-01-01

    Current concepts for reusable launch vehicle design have created renewed interest in the use of RP-1 fuels for high pressure and tri-propellant propulsion systems. Such designs require the use of an analytical technology that accurately accounts for the effects of real fluid properties, combustion of large hydrocarbon fuel modules, and the possibility of soot formation. These effects are inadequately treated in current computational fluid dynamic (CFD) codes used for propulsion system analyses. The objective of this investigation is to provide an accurate analytical description of hydrocarbon combustion thermodynamics and kinetics that is sufficiently computationally efficient to be a practical design tool when used with CFD codes such as the FDNS code. A rigorous description of real fluid properties for RP-1 and its combustion products will be derived from the literature and from experiments conducted in this investigation. Upon the establishment of such a description, the fluid description will be simplified by using the minimum of empiricism necessary to maintain accurate combustion analyses and including such empirical models into an appropriate CFD code. An additional benefit of this approach is that the real fluid properties analysis simplifies the introduction of the effects of droplet sprays into the combustion model. Typical species compositions of RP-1 have been identified, surrogate fuels have been established for analyses, and combustion and sooting reaction kinetics models have been developed. Methods for predicting the necessary real fluid properties have been developed and essential experiments have been designed. Verification studies are in progress, and preliminary results from these studies will be presented. The approach has been determined to be feasible, and upon its completion the required methodology for accurate performance and heat transfer CFD analyses for high pressure, tri-propellant propulsion systems will be available.

  8. A new approach to analytic, non-perturbative and gauge-invariant QCD

    NASA Astrophysics Data System (ADS)

    Fried, H. M.; Grandou, T.; Sheu, Y.-M.

    2012-11-01

    Following a previous calculation of quark scattering in eikonal approximation, this paper presents a new, analytic and rigorous approach to the calculation of QCD phenomena. In this formulation a basic distinction between the conventional "idealistic" description of QCD and a more "realistic" description is brought into focus by a non-perturbative, gauge-invariant evaluation of the Schwinger solution for the QCD generating functional in terms of the exact Fradkin representations of Green's functional G(x,y|A) and the vacuum functional L[A]. Because quarks exist asymptotically only in bound states, their transverse coordinates can never be measured with arbitrary precision; the non-perturbative neglect of this statement leads to obstructions that are easily corrected by invoking in the basic Lagrangian a probability amplitude which describes such transverse imprecision. The second result of this non-perturbative analysis is the appearance of a new and simplifying output called "Effective Locality", in which the interactions between quarks by the exchange of a "gluon bundle"-which "bundle" contains an infinite number of gluons, including cubic and quartic gluon interactions-display an exact locality property that reduces the several functional integrals of the formulation down to a set of ordinary integrals. It should be emphasized that "non-perturbative" here refers to the effective summation of all gluons between a pair of quark lines-which may be the same quark line, as in a self-energy graph-but does not (yet) include a summation over all closed-quark loops which are tied by gluon-bundle exchange to the rest of the "Bundle Diagram". As an example of the power of these methods we offer as a first analytic calculation the quark-antiquark binding potential of a pion, and the corresponding three-quark binding potential of a nucleon, obtained in a simple way from relevant eikonal scattering approximations. A second calculation, analytic, non-perturbative and gauge

  9. Electrical wave propagation in an anisotropic model of the left ventricle based on analytical description of cardiac architecture.

    PubMed

    Pravdin, Sergey F; Dierckx, Hans; Katsnelson, Leonid B; Solovyova, Olga; Markhasin, Vladimir S; Panfilov, Alexander V

    2014-01-01

    We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher-Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation.

  10. Electrical Wave Propagation in an Anisotropic Model of the Left Ventricle Based on Analytical Description of Cardiac Architecture

    PubMed Central

    Pravdin, Sergey F.; Dierckx, Hans; Katsnelson, Leonid B.; Solovyova, Olga; Markhasin, Vladimir S.; Panfilov, Alexander V.

    2014-01-01

    We develop a numerical approach based on our recent analytical model of fiber structure in the left ventricle of the human heart. A special curvilinear coordinate system is proposed to analytically include realistic ventricular shape and myofiber directions. With this anatomical model, electrophysiological simulations can be performed on a rectangular coordinate grid. We apply our method to study the effect of fiber rotation and electrical anisotropy of cardiac tissue (i.e., the ratio of the conductivity coefficients along and across the myocardial fibers) on wave propagation using the ten Tusscher–Panfilov (2006) ionic model for human ventricular cells. We show that fiber rotation increases the speed of cardiac activation and attenuates the effects of anisotropy. Our results show that the fiber rotation in the heart is an important factor underlying cardiac excitation. We also study scroll wave dynamics in our model and show the drift of a scroll wave filament whose velocity depends non-monotonically on the fiber rotation angle; the period of scroll wave rotation decreases with an increase of the fiber rotation angle; an increase in anisotropy may cause the breakup of a scroll wave, similar to the mother rotor mechanism of ventricular fibrillation. PMID:24817308

  11. World, Time And Anxiety. Heidegger's Existential Analytic And Psychiatry.

    PubMed

    Brencio, Francesca

    2014-01-01

    Martin Heidegger was one of the most influential but also most criticized philosophers of the XX century. With Being and Time (1927) he sets apart his existential analytic from psychology as well as from anthropology and from the other human sciences that deny the ontological foundation, overcoming the Cartesian dualism in search of the ontological unity of an articulated multiplicity, as the human being is. Heidegger's Dasein analytic defines the fundamental structures of the human being such as being-in-the-world, a unitary structure that discloses the worldhood of the world; the modes of being (Seinsweisen), such as fear (Furcht) and anxiety (Angst); and the relationship between existence and time. In his existential analytic, anxiety is one of the fundamental moods (Grundbefindlichkeit) and it plays a pivotal role in the relationship of Dasein with time and world. The paper firstly focuses on the modes of being, underlining the importance of anxiety for the constitution of the human being; secondly, it shows the relationship between anxiety and the world, and between anxiety and time: rejecting both the Aristotelian description of time, as a sequence of moments that informs our common understanding of time, and Augustine's mental account of inner time, Heidegger considers temporality from a transcendental point of view. Temporality is ek-static; it is a process through which the human being comes toward and back to itself, letting itself encounter the world and the entities. The transcendental interpretation of time provided by Heidegger may make an important contribution to psychopathology.

  12. An analytic model for buoyancy resonances in protoplanetary disks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lubow, Stephen H.; Zhu, Zhaohuan, E-mail: lubow@stsci.edu, E-mail: zhzhu@astro.princeton.edu

    2014-04-10

    Zhu et al. found in three-dimensional shearing box simulations a new form of planet-disk interaction that they attributed to a vertical buoyancy resonance in the disk. We describe an analytic linear model for this interaction. We adopt a simplified model involving azimuthal forcing that produces the resonance and permits an analytic description of its structure. We derive an analytic expression for the buoyancy torque and show that the vertical torque distribution agrees well with the results of the Athena simulations and a Fourier method for linear numerical calculations carried out with the same forcing. The buoyancy resonance differs from the classic Lindblad and corotation resonances in that the resonance lies along tilted planes. Its width depends on damping effects and is independent of the gas sound speed. The resonance does not excite propagating waves. At a given large azimuthal wavenumber k_y > 1/h (for disk thickness h), the buoyancy resonance exerts a torque over a region that lies radially closer to the corotation radius than the Lindblad resonance. Because the torque is localized to the region of excitation, it is potentially subject to the effects of nonlinear saturation. In addition, the torque can be reduced by the effects of radiative heat transfer between the resonant region and its surroundings. For each azimuthal wavenumber, the resonance establishes a large scale density wave pattern in a plane within the disk.

  13. Analytical method of waste allocation in waste management systems: Concept, method and case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergeron, Francis C., E-mail: francis.b.c@videotron.ca

    Waste is no longer a rejected item to dispose of but increasingly a secondary resource to exploit, influencing waste allocation among treatment operations in a waste management (WM) system. The aim of this methodological paper is to present a new method for the assessment of the WM system, the “analytical method of the waste allocation process” (AMWAP), based on the concept of the “waste allocation process” defined as the aggregation of all processes of apportioning waste among alternative waste treatment operations inside or outside the spatial borders of a WM system. AMWAP contains a conceptual framework and an analytical approach. The conceptual framework includes, firstly, a descriptive model that focuses on the description and classification of the WM system. It includes, secondly, an explanatory model that serves to explain and to predict the operation of the WM system. The analytical approach consists of a step-by-step analysis for the empirical implementation of the conceptual framework. With its multiple purposes, AMWAP provides an innovative and objective modular method to analyse a WM system which may be integrated in the framework of impact assessment methods and environmental systems analysis tools. Its originality comes from the interdisciplinary analysis of the WAP used to develop the conceptual framework. AMWAP is applied in the framework of an illustrative case study on the household WM system of Geneva (Switzerland). It demonstrates that this method provides an in-depth and contextual knowledge of WM. - Highlights: • The study presents a new analytical method based on the waste allocation process. • The method provides an in-depth and contextual knowledge of the waste management system. • The paper provides a reproducible procedure for professionals, experts and academics. • It may be integrated into impact assessment or environmental system analysis tools. • An illustrative case study is provided based on household waste

  14. Application of the weighted-density approximation to the accurate description of electron-positron correlation effects in materials

    NASA Astrophysics Data System (ADS)

    Callewaert, Vincent; Saniz, Rolando; Barbiellini, Bernardo; Bansil, Arun; Partoens, Bart

    2017-08-01

    We discuss positron-annihilation lifetimes for a set of illustrative bulk materials within the framework of the weighted-density approximation (WDA). The WDA can correctly describe electron-positron correlations in strongly inhomogeneous systems, such as surfaces, where the applicability of (semi-)local approximations is limited. We analyze the WDA in detail and show that the electrons which cannot screen external charges efficiently, such as the core electrons, cannot be treated accurately via the pair correlation of the homogeneous electron gas. We discuss how this problem can be addressed by reducing the screening in the homogeneous electron gas by adding terms depending on the gradient of the electron density. Further improvements are obtained when the core electrons are treated within the LDA and the valence electrons within the WDA. Finally, we discuss a semiempirical WDA-based approach in which a sum rule is imposed to reproduce the experimental lifetimes.

  15. Development and analytical performance evaluation of FREND-SAA and FREND-Hp

    NASA Astrophysics Data System (ADS)

    Choi, Eunha; Seong, Jihyun; Lee, Seiyoung; Han, Sunmi

    2017-07-01

    The FREND System is a portable cartridge reader, quantifying analytes by measuring laser-induced fluorescence in a single-use reagent cartridge. The objective of this study was to evaluate the FREND-SAA and FREND-Hp assays. The FREND-SAA and Hp assays were standardized to the WHO and IFCC reference materials. Analytical performance studies of precision, linearity, limits of detection, interference, and method comparison for both assays were performed according to the CLSI guidelines. Both assays demonstrated acceptable imprecision (%CV) at three different sample levels. The linearity of the assays was found to be acceptable (SAA 5-150 mg/L, Hp 30-400 mg/dL). The detection limits were 3.8 mg/L (SAA) and 10.2 mg/dL (Hp). No significant interference was observed, and no significant deviation from linearity was found in both method comparison studies. In conclusion, NanoEnTek's FREND-SAA and Hp assays represent rapid, accurate and convenient means to quantify SAA and Hp in human serum on the FREND system.

  16. PB-AM: An open-source, fully analytical linear poisson-boltzmann solver.

    PubMed

    Felberg, Lisa E; Brookes, David H; Yap, Eng-Hui; Jurrus, Elizabeth; Baker, Nathan A; Head-Gordon, Teresa

    2017-06-05

    We present the open source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized PB equation for molecules represented as non-overlapping spherical cavities. The PB-AM software package includes the generation of output files appropriate for visualization using visual molecular dynamics (VMD), a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators, and students that are more familiar with the APBS framework. © 2016 Wiley Periodicals, Inc.

  17. Accurate potentiometric determination of lipid membrane-water partition coefficients and apparent dissociation constants of ionizable drugs: electrostatic corrections.

    PubMed

    Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor

    2009-06-01

    Potentiometric lipid membrane-water partition coefficient studies have to date neglected electrostatic interactions; this leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine lipid membrane-water partition coefficients of four illustrative drugs: bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from those for uncharged membranes. Allowance for the Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.

  18. Upon the Shoulders of Giants: Open-Source Hardware and Software in Analytical Chemistry.

    PubMed

    Dryden, Michael D M; Fobel, Ryan; Fobel, Christian; Wheeler, Aaron R

    2017-04-18

    Isaac Newton famously observed that "if I have seen further it is by standing on the shoulders of giants." We propose that this sentiment is a powerful motivation for the "open-source" movement in scientific research, in which creators provide everything needed to replicate a given project online, as well as providing explicit permission for users to use, improve, and share it with others. Here, we write to introduce analytical chemists who are new to the open-source movement to best practices and concepts in this area and to survey the state of open-source research in analytical chemistry. We conclude by considering two examples of open-source projects from our own research group, with the hope that a description of the process, motivations, and results will provide a convincing argument about the benefits that this movement brings to both creators and users.

  19. Shock compression modeling of metallic single crystals: comparison of finite difference, steady wave, and analytical solutions

    DOE PAGES

    Lloyd, Jeffrey T.; Clayton, John D.; Austin, Ryan A.; ...

    2015-07-10

    Background: The shock response of metallic single crystals can be captured using a micro-mechanical description of the thermoelastic-viscoplastic material response; however, using such a description within the context of traditional numerical methods may introduce physical artifacts. Advantages and disadvantages of complex material descriptions, in particular the viscoplastic response, must be framed within approximations introduced by numerical methods. Methods: Three methods of modeling the shock response of metallic single crystals are summarized: finite difference simulations, steady wave simulations, and algebraic solutions of the Rankine-Hugoniot jump conditions. For the former two numerical techniques, a dislocation density based framework describes the rate- and temperature-dependent shear strength on each slip system. For the latter analytical technique, a simple (two-parameter) rate- and temperature-independent linear hardening description is necessarily invoked to enable simultaneous solution of the governing equations. For all models, the same nonlinear thermoelastic energy potential incorporating elastic constants of up to order 3 is applied. Results: Solutions are compared for plate impact of highly symmetric orientations (all three methods) and low symmetry orientations (numerical methods only) of aluminum single crystals shocked to 5 GPa (weak shock regime) and 25 GPa (overdriven regime). Conclusions: For weak shocks, results of the two numerical methods are very similar, regardless of crystallographic orientation. For strong shocks, artificial viscosity affects the finite difference solution, and effects of transverse waves for the lower symmetry orientations not captured by the steady wave method become important. The analytical solution, which can only be applied to highly symmetric orientations, provides reasonable accuracy with regards to prediction of most variables in the final shocked state but, by construction, does not provide

  20. Understanding Business Analytics

    DTIC Science & Technology

    2015-01-05

    analytics have been used in organizations for a variety of reasons for quite some time; ranging from the simple (generating and understanding business analytics ... process. ... How well these two components are orchestrated will determine the level of success an organization has in

  1. Edge detection of magnetic anomalies using analytic signal of tilt angle (ASTA)

    NASA Astrophysics Data System (ADS)

    Alamdar, K.; Ansari, A. H.; Ghorbani, A.

    2009-04-01

    Magnetics is a commonly used geophysical technique to identify and image potential subsurface targets. Interpretation of magnetic anomalies is a complex process due to the superposition of multiple magnetic sources, the presence of geologic and cultural noise, and acquisition and positioning errors. Both the vertical and horizontal derivatives of potential field data are useful; the horizontal derivative enhances edges, whereas the vertical derivative narrows the width of an anomaly and so locates source bodies more accurately. The vertical and horizontal derivatives of the magnetic field can be combined into the analytic signal, which is independent of the body magnetization direction and whose maximum value lies directly over the edges of the body. The tilt angle filter is a phase-based filter defined as the angle between the vertical derivative and the total horizontal derivative. Tilt angle values range from +90 degrees to -90 degrees, and the zero value lies over the body edge. One disadvantage of this filter is that for deep sources the detected edge is blurred. To overcome this problem, many authors have introduced new filters, such as the total horizontal derivative of the tilt angle or the vertical derivative of the tilt angle; because these filters use higher-order derivatives, their results may be too noisy. Combining the analytic signal and the tilt angle produces a new filter, termed ASTA, whose maximum value lies directly over the body edge and which delineates body edges more easily than the tilt angle, without its complications. In this work the new filter has been demonstrated on magnetic data from an area in the Sar-Cheshme region in Iran. This area is located at 55 degrees longitude and 32 degrees latitude and is a copper potential region. The main formations in this area are andesite and trachyandesite. Magnetic surveying was employed to separate the boundaries of the andesite and trachyandesite from the adjacent area. In this regard a variety of filters such as the analytic signal, tilt angle and ASTA filter have been applied which
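    The quantities named in this abstract have compact standard definitions: the analytic-signal amplitude is sqrt((∂f/∂x)² + (∂f/∂y)² + (∂f/∂z)²), and the tilt angle is the arctangent of the vertical derivative over the total horizontal derivative. The sketch below assumes the three derivative grids are already available (the vertical derivative typically comes from an FFT-based transform) and combines them in the way the title suggests; the exact ASTA construction shown here is an interpretation of the abstract, not the authors' verified code.

```python
# Standard edge-enhancement quantities for a total-field magnetic anomaly,
# computed from derivative grids dx, dy (horizontal) and dz (vertical).
import numpy as np

def analytic_signal_amplitude(dx, dy, dz):
    return np.sqrt(dx**2 + dy**2 + dz**2)

def tilt_angle(dx, dy, dz):
    thdr = np.sqrt(dx**2 + dy**2)      # total horizontal derivative
    return np.arctan2(dz, thdr)        # ranges over (-pi/2, +pi/2)

def asta(dx, dy, dz, spacing, dz_of_tilt):
    """Analytic signal of the tilt angle (assumed ASTA construction).

    dz_of_tilt: vertical derivative of the tilt-angle grid, assumed supplied
                (e.g. from the same FFT machinery used to obtain dz).
    """
    t = tilt_angle(dx, dy, dz)
    tx, ty = np.gradient(t, spacing, spacing)  # horizontal derivatives of tilt
    return analytic_signal_amplitude(tx, ty, dz_of_tilt)
```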

  2. Spacecraft formation control using analytical finite-duration approaches

    NASA Astrophysics Data System (ADS)

    Ben Larbi, Mohamed Khalil; Stoll, Enrico

    2018-03-01

    This paper derives a control concept for formation flight (FF) applications assuming circular reference orbits. The paper focuses on a general impulsive control concept for FF which is then extended to the more realistic case of non-impulsive thrust maneuvers. The control concept uses a description of the FF in relative orbital elements (ROE) instead of the classical Cartesian description, since the ROE provide a direct insight into key aspects of the relative motion and are particularly suitable for relative orbit control purposes and collision avoidance analysis. Although Gauss' variational equations were first derived to offer a mathematical tool for processing orbit perturbations, they are suitable for several different applications. If the perturbation acceleration is due to a control thrust, Gauss' variational equations show the effect of such a control thrust on the Keplerian orbital elements. Integrating Gauss' variational equations offers a direct relation between velocity increments in the local vertical local horizontal frame and the subsequent change of Keplerian orbital elements. For proximity operations, these equations can be generalized from describing the motion of a single spacecraft to the description of the relative motion of two spacecraft. This will be shown for impulsive and finite-duration maneuvers. Based on that, an analytical tool to estimate the error induced through impulsive maneuver planning is presented. The resulting control schemes are simple and effective and thus also suitable for on-board implementation. Simulations show that the proposed concept improves the timing of the thrust maneuver executions and thus reduces the residual error of the formation control.
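    As a minimal illustration of the relation between velocity increments and orbital-element changes mentioned above: for a near-circular orbit, integrating Gauss' variational equation for the semi-major axis over an impulsive along-track burn gives Δa ≈ 2Δv_t/n, with n the mean motion. The sketch below only demonstrates this single first-order relation with hypothetical numbers; it is not the paper's ROE-based control scheme.

```python
# First-order change in semi-major axis from an impulsive tangential delta-v on
# a near-circular orbit: delta_a ~ 2 * delta_v_t / n (n = mean motion).
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def delta_a_from_tangential_dv(a_m, dv_t):
    n = math.sqrt(MU_EARTH / a_m**3)   # mean motion, rad/s
    return 2.0 * dv_t / n              # change in semi-major axis, m

a = 6_878_000.0  # roughly a 500 km altitude circular orbit (hypothetical case)
print(delta_a_from_tangential_dv(a, 0.1))  # ~180 m per 0.1 m/s along-track burn
```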

  3. Analyticity without Differentiability

    ERIC Educational Resources Information Center

    Kirillova, Evgenia; Spindler, Karlheinz

    2008-01-01

    In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…

  4. Method for accurate determination of dissociation constants of optical ratiometric systems: chemical probes, genetically encoded sensors, and interacting molecules.

    PubMed

    Pomorski, Adam; Kochańczyk, Tomasz; Miłoch, Anna; Krężel, Artur

    2013-12-03

    Ratiometric chemical probes and genetically encoded sensors are of high interest for both analytical chemists and molecular biologists. Their high sensitivity toward the target ligand and ability to obtain quantitative results without a known sensor concentration have made them a very useful tool in both in vitro and in vivo assays. Although ratiometric sensors are widely used in many applications, their successful and accurate usage depends on how they are characterized in terms of sensing target molecules. The most important feature of probes and sensors besides their optical parameters is the affinity constant toward the analyzed molecules. The literature shows that different analytical approaches are used to determine the stability constants, with the ratio approach being most popular. However, oversimplification and lack of attention to detail result in inaccurate determination of stability constants, which in turn affects the results obtained using these sensors. Here, we present a new method in which the ratio signal is calibrated using the borderline intensity values at both wavelengths, instead of the borderline ratio values that generate errors in many studies. At the same time, the equation takes into account the cooperativity factor or fluorescence artifacts and therefore can be used to characterize systems with various stoichiometries and experimental conditions. Accurate determination of stability constants is demonstrated utilizing four known optical ratiometric probes and sensors, together with a discussion regarding other, currently used methods.
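    For context, the conventional "ratio approach" that the abstract argues is error-prone is commonly written in the Grynkiewicz form [L] = Kd·((R − Rmin)/(Rmax − R))·(Sf2/Sb2), where Rmin and Rmax are the ratios of the fully free and fully saturated sensor and Sf2/Sb2 is the free-to-bound intensity ratio at the denominator wavelength. The sketch below implements only this baseline formula with hypothetical numbers; it is not the improved calibration proposed in the paper.

```python
# Conventional ratiometric calibration (the baseline the paper improves upon).
def free_ligand(R, Rmin, Rmax, Kd, Sf2_over_Sb2):
    """Free ligand concentration from a ratiometric readout R = I1/I2."""
    return Kd * (R - Rmin) / (Rmax - R) * Sf2_over_Sb2

# Hypothetical probe: Kd = 1 nM, Rmin = 0.2, Rmax = 3.0, and a factor of 1.5
# between the free and bound intensities at the denominator wavelength.
print(free_ligand(R=1.4, Rmin=0.2, Rmax=3.0, Kd=1e-9, Sf2_over_Sb2=1.5))
```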

  5. An accurate boundary element method for the exterior elastic scattering problem in two dimensions

    NASA Astrophysics Data System (ADS)

    Bao, Gang; Xu, Liwei; Yin, Tao

    2017-11-01

    This paper is concerned with a Galerkin boundary element method solving the two dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and essential mathematical features of its variational form are discussed. In numerical implementations, a newly-derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of hyper-singular boundary integral operator. A new computational approach is employed based on the series expansions of Hankel functions for the computation of weakly-singular boundary integral operators during the reduction of corresponding Galerkin equations into a discrete linear system. The effectiveness of proposed numerical methods is demonstrated using several numerical examples.

  6. Usefulness of Analytical Research: Rethinking Analytical R&D&T Strategies.

    PubMed

    Valcárcel, Miguel

    2017-11-07

    This Perspective is intended to help foster true innovation in Research & Development & Transfer (R&D&T) in Analytical Chemistry in the form of advances that are primarily useful for analytical purposes rather than solely for publishing. Devising effective means to strengthen the crucial contribution of Analytical Chemistry to progress in Chemistry, Science & Technology, and Society requires carefully examining the present status of our discipline and also identifying internal and external driving forces with a potential adverse impact on its development. The diagnostic process should be followed by administration of an effective therapy and supported by adoption of a theragnostic strategy if Analytical Chemistry is to enjoy a better future.

  7. Implicit/explicit memory versus analytic/nonanalytic processing: rethinking the mere exposure effect.

    PubMed

    Whittlesea, B W; Price, J R

    2001-03-01

    In studies of the mere exposure effect, rapid presentation of items can increase liking without accurate recognition. The effect on liking has been explained as a misattribution of fluency caused by prior presentation. However, fluency is also a source of feelings of familiarity. It is, therefore, surprising that prior experience can enhance liking without also causing familiarity-based recognition. We suggest that when study opportunities are minimal and test items are perceptually similar, people adopt an analytic approach, attempting to recognize distinctive features. That strategy fails because rapid presentation prevents effective encoding of such features; it also prevents people from experiencing fluency and a consequent feeling of familiarity. We suggest that the liking-without-recognition effect results from using an effective (nonanalytic) strategy in judging pleasantness, but an ineffective (analytic) strategy in recognition. Explanations of the mere exposure effect based on a distinction between implicit and explicit memory are unnecessary.

  8. Analytical characterization of wine and its precursors by capillary electrophoresis.

    PubMed

    Gomez, Federico J V; Monasterio, Romina P; Vargas, Verónica Carolina Soto; Silva, María F

    2012-08-01

    The accurate determination of marker chemical species in grape, musts, and wines presents a unique analytical challenge with high impact on diverse areas of knowledge such as health, plant physiology, and economy. Capillary electromigration techniques have emerged as a powerful tool, allowing the separation and identification of highly polar compounds that cannot be easily separated by traditional HPLC methods, providing complementary information and permitting the simultaneous analysis of analytes with different nature in a single run. The main advantage of CE over traditional methods for wine analysis is that in most cases samples require no treatment other than filtration. The purpose of this article is to present a revision on capillary electromigration methods applied to the analysis of wine and its precursors over the last decade. The current state of the art of the topic is evaluated, with special emphasis on the natural compounds that have allowed wine to be considered as a functional food. The most representative revised compounds are phenolic compounds, amino acids, proteins, elemental species, mycotoxins, and organic acids. Finally, a discussion on future trends of the role of capillary electrophoresis in the field of analytical characterization of wines for routine analysis, wine classification, as well as multidisciplinary aspects of the so-called "from soil to glass" chain is presented. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Analytical method for thermal stress analysis of plasma facing materials

    NASA Astrophysics Data System (ADS)

    You, J. H.; Bolt, H.

    2001-10-01

    The thermo-mechanical response of plasma facing materials (PFMs) to heat loads from the fusion plasma is one of the crucial issues in fusion technology. In this work, a fully analytical description of the thermal stress distribution in armour tiles of plasma facing components is presented which is expected to occur under typical high heat flux (HHF) loads. The method of stress superposition is applied considering the temperature gradient and thermal expansion mismatch. Several combinations of PFMs and heat sink metals are analysed and compared. In the framework of the present theoretical model, plastic flow and the effect of residual stress can be quantitatively assessed. Possible failure features are discussed.
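    A rough first-order orientation for the stresses discussed above (not the paper's full superposition solution) is the biaxial thermal-mismatch estimate σ ≈ E·(α_sink − α_tile)·ΔT/(1 − ν) for a thin armour tile fully constrained by its heat sink. The material values and temperature change in the sketch are hypothetical (tungsten-like tile on a copper-like heat sink).

```python
# Elementary biaxial thermal-mismatch stress estimate for a thin, fully
# constrained armour tile bonded to a heat sink; a rough orientation only.
def mismatch_stress_MPa(E_GPa, nu, alpha_tile, alpha_sink, dT):
    # Elastic strain forced on the tile equals (alpha_sink - alpha_tile) * dT.
    return E_GPa * 1e3 * (alpha_sink - alpha_tile) * dT / (1.0 - nu)

# Hypothetical values: tungsten-like tile on copper-like sink, cooled by 500 K.
print(mismatch_stress_MPa(E_GPa=400, nu=0.28, alpha_tile=4.5e-6,
                          alpha_sink=17e-6, dT=-500.0))  # ~ -3.5e3 MPa (tile in compression)
```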

  10. A singularity free analytical solution of artificial satellite motion with drag

    NASA Technical Reports Server (NTRS)

    Scheifele, G.; Mueller, A. C.; Starke, S. E.

    1977-01-01

    The connection between the existing Delaunay-Similar and Poincare-Similar satellite theories in the true anomaly version is outlined for the J(2) perturbation and the new drag approach. An overall description of the concept of the approach is given, while the necessary expansions and the procedure to arrive at the computer program for the canonical forces are delineated. The procedure for the analytical integration of these developed equations is described. In addition, some numerical results are given. The computer program for the algebraic multiplication of the Fourier series, which creates the FORTRAN coding in an automatic manner, is described and documented.

  11. Heparin removal by ecteola-cellulose pre-treatment enables the use of plasma samples for accurate measurement of anti-Yellow fever virus neutralizing antibodies.

    PubMed

    Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis

    2017-09-01

    Technological innovations in vaccinology have recently helped bring about novel insights into the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies, such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies by PRNT. The study was designed in three steps: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Analytical performances of the Diazyme ADA assay on the Cobas® 6000 system.

    PubMed

    Delacour, Hervé; Sauvanet, Christophe; Ceppa, Franck; Burnat, Pascal

    2010-12-01

    To evaluate the analytical performance of the Diazyme ADA assay on the Cobas® 6000 system for pleural fluid sample analysis. Imprecision, linearity, calibration curve stability, interference, and correlation studies were completed. The Diazyme ADA assay demonstrated excellent precision (CV < 4%) over the analytical measurement range (0.5-117 U/L). Bilirubin above 50 μmol/L and haemoglobin above 177 μmol/L interfered with the test, inducing a negative and a positive interference, respectively. The Diazyme ADA assay correlated well with the Giusti method (r² = 0.93) but exhibited a negative bias (~ -30%). The Diazyme ADA assay on the Cobas® 6000 system represents a rapid, accurate, precise and reliable method for the determination of ADA activity in pleural fluid samples. Copyright © 2010 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  13. Semiclassical description of resonance-assisted tunneling in one-dimensional integrable models

    NASA Astrophysics Data System (ADS)

    Le Deunff, Jérémy; Mouchet, Amaury; Schlagheck, Peter

    2013-10-01

    Resonance-assisted tunneling is investigated within the framework of one-dimensional integrable systems. We present a systematic recipe, based on Hamiltonian normal forms, to construct one-dimensional integrable models that exhibit resonance island chain structures with accurately controlled sizes and positions of the islands. Using complex classical trajectories that evolve along suitably defined paths in the complex time domain, we construct a semiclassical theory of the resonance-assisted tunneling process. This semiclassical approach yields a compact analytical expression for tunnelling-induced level splittings which is found to be in very good agreement with the exact splittings obtained through numerical diagonalization.

  14. Numeric promoter description - A comparative view on concepts and general application.

    PubMed

    Beier, Rico; Labudde, Dirk

    2016-01-01

    Nucleic acid molecules play a key role in a variety of biological processes. Starting from storage and transfer tasks, this also comprises the triggering of biological processes, regulatory effects and the active influence gained by target binding. Based on the experimental output (in this case promoter sequences), further in silico analyses aid in gaining new insights into these processes and interactions. The numerical description of nucleic acids thereby constitutes a bridge between the concrete biological issues and the analytical methods. Hence, this study compares 26 descriptor sets obtained by applying well-known numerical description concepts to an established dataset of 38 DNA promoter sequences. The suitability of the description sets was evaluated by computing partial least squares regression models and assessing the model accuracy. We conclude that the major importance regarding the descriptive power is attached to positional information rather than to explicitly incorporated physico-chemical information, since a sufficient amount of implicit physico-chemical information is already encoded in the nucleobase classification. The regression models especially benefited from employing the information that is encoded in the sequential and structural neighborhood of the nucleobases. Thus, the analyses of n-grams (short fragments of length n) suggested that they are valuable descriptors for DNA target interactions. A mixed n-gram descriptor set thereby yielded the best description of the promoter sequences. The corresponding regression model was checked and found to be plausible as it was able to reproduce the characteristic binding motifs of promoter sequences in a reasonable degree. As most functional nucleic acids are based on the principle of molecular recognition, the findings are not restricted to promoter sequences, but can rather be transferred to other kinds of functional nucleic acids. Thus, the concepts presented in this study could provide
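
    As a concrete illustration of the n-gram descriptors discussed above, the sketch below counts normalized n-gram frequencies of a DNA sequence and returns a fixed-length vector that could serve as one row of a descriptor matrix for a regression model. The function name, alphabet handling, and normalization are illustrative assumptions, not the exact descriptor sets compared in the study.

    ```python
    from collections import Counter
    from itertools import product

    def ngram_descriptor(sequence, n=2, alphabet="ACGT"):
        """Count normalized n-gram frequencies of a DNA sequence.

        Returns a fixed-length vector (one entry per possible n-gram over the
        alphabet), suitable as one row of a descriptor matrix for regression.
        """
        sequence = sequence.upper()
        counts = Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))
        total = max(sum(counts.values()), 1)
        return [counts.get("".join(g), 0) / total for g in product(alphabet, repeat=n)]

    # Example: dinucleotide (2-gram) descriptor of a short promoter-like fragment
    print(ngram_descriptor("TATAATGCGC", n=2))
    ```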

  15. Population description and its role in the interpretation of genetic association

    PubMed Central

    Yu, Joon-Ho; Crouch, Julia; Fryer-Edwards, Kelly; Burke, Wylie

    2010-01-01

    Despite calls for greater clarity and precision of population description, studies have documented persistent ambiguity in the use of race/ethnicity terms in genetic research. It is unclear why investigators tolerate such ambiguity, or what effect these practices have on the evaluation of reported associations. To explore the way that population description is used to replicate and/or extend previously reported genetic observations, we examined articles describing the association of the peroxisome proliferator-activated receptor-γ (PPARγ) Pro12Ala polymorphism with type 2 diabetes mellitus and related phenotypes, published between 1997 and 2005. The 80 articles identified were subjected to a detailed content analysis to determine (1) how sampled populations were described, (2) whether and how the choice of sample was explained, and (3) how the allele frequency and genetic association findings identified were contextualized and interpreted. In common with previous reports, we observed a variety of sample descriptions and little explanation for the choice of population investigated. Samples of European origin were typically described with greater specificity than samples of other origin. However, findings from European samples were nearly always compared to samples described as “Caucasian” and sometimes generalized to all Caucasians or to all humans. These findings suggest that care with population description, while important, may not fully address analytical concerns regarding the interpretation of variable study outcomes or ethical concerns regarding the attribution of genetic observations to broad social groups. Instead, criteria which help investigators better distinguish justified and unjustified forms of population generalization may be required. PMID:20157827

  16. Interpretable Decision Sets: A Joint Framework for Description and Prediction

    PubMed Central

    Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure

    2016-01-01

    One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
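
    A minimal sketch of the data structure behind a decision set, assuming rules are plain Python predicates paired with class labels and ties are broken by majority vote; the rules, default class, and tie-breaking choice here are illustrative and are not the learned, objective-optimized rules of the paper.

    ```python
    # A decision set is an unordered collection of independent if-then rules.
    # Each rule is (predicate over a feature dict, predicted class label).
    rules = [
        (lambda x: x["age"] > 50 and x["bmi"] >= 30, "high_risk"),   # illustrative rule
        (lambda x: x["exercise_hours"] >= 5, "low_risk"),            # illustrative rule
    ]
    default_class = "low_risk"  # used when no rule fires

    def predict(x, rules, default):
        """Apply every rule independently; break ties by majority vote."""
        votes = [label for condition, label in rules if condition(x)]
        if not votes:
            return default
        return max(set(votes), key=votes.count)

    print(predict({"age": 62, "bmi": 31, "exercise_hours": 1}, rules, default_class))
    ```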

  17. Child-Langmuir law applicability for a cathode sheath description of glow discharge in hydrogen

    NASA Astrophysics Data System (ADS)

    Lisovskiy, V. A.; Artushenko, K. P.; Yegorenkov, V. D.

    2016-08-01

    The present paper reveals that the Child-Langmuir law version with the constant ion mobility has to be applied for the cathode sheath description of the glow discharge in hydrogen. Using the analytical model we demonstrate that even in a high electric field the constant mobility law version rather than that for the constant ion mean free path has to hold in the case of impeded charge exchange and the dominant effect of polarization forces on the ion motion through the cathode sheath.
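
    For reference, the two sheath laws contrasted above are commonly written as follows (j ion current density, U_c cathode fall voltage, d sheath thickness, μ_i ion mobility, λ_i ion mean free path, M ion mass); only the scaling is given for the constant-mean-free-path case, since quoted prefactors vary between derivations. These are the generic textbook forms, not the specific model of the paper.

    ```latex
    j \;=\; \frac{9}{8}\,\varepsilon_0\,\mu_i\,\frac{U_c^{2}}{d^{3}}
    \quad\text{(constant ion mobility)},
    \qquad
    j \;\propto\; \varepsilon_0\left(\frac{e\,\lambda_i}{M}\right)^{1/2}\frac{U_c^{3/2}}{d^{5/2}}
    \quad\text{(constant ion mean free path)}.
    ```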

  18. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

    Accurate Arabic Script Language/Dialect Classification, by Stephen C. Tratz, Computational and Information Sciences, Army Research Laboratory, ARL-TR-6761, final report, January 2014; approved for public release.

  19. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    PubMed

    Najat, Dereen

    2017-01-01

    hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.

  20. Analytical Characterisation of Nanoscale Zero-Valent Iron: A ...

    EPA Pesticide Factsheets

    Zero-valent iron nanoparticles (nZVI) have been widely tested as they are showing significant promise for environmental remediation. However, many recent studies have demonstrated that their mobility and reactivity in subsurface environments are significantly affected by their tendency to aggregate. Both the mobility and reactivity of nZVI mainly depends on properties such as particle size, surface chemistry and bulk composition. In order to ensure efficient remediation, it is crucial to accurately assess and understand the implications of these properties before deploying these materials into contaminated environments. Many analytical techniques are now available to determine these parameters and this paper provides a critical review of their usefulness and limitations for nZVI characterisation. These analytical techniques include microscopy and light scattering techniques for the determination of particle size, size distribution and aggregation state, and X-ray techniques for the characterisation of surface chemistry and bulk composition. Example characterisation data derived from commercial nZVI materials is used to further illustrate method strengths and limitations. Finally, some important challenges with respect to the characterisation of nZVI in groundwater samples are discussed. In recent years, manufactured nanoparticles (MNPs) have attracted increasing interest for their potential applications in the treatment of contaminated soil and water. In compar

  1. Proactive Supply Chain Performance Management with Predictive Analytics

    PubMed Central

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605

  2. Proactive supply chain performance management with predictive analytics.

    PubMed

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.

  3. A Behavior Analytic Interpretation of Alexithymia

    PubMed Central

    Darrow, Sabrina M.; Follette, William C.

    2014-01-01

    Alexithymia is a term used to describe individuals who seem unable to experience or at least describe emotions. This paper offers a theoretical interpretation of alexithymia from a radical behaviorist perspective. While there have been attempts to explain the etiology of alexithymia, the current analysis is unique in that it provides direct treatment implications. The pragmatic analysis described focuses on the verbal behavior of individuals rather than looking “inside” for explanations. This is supported by a review of experimental research that has failed to find consistencies among alexithymic individuals’ physiological responding. Descriptions of the various discriminative and consequential stimulus conditions involved in the complex learning histories of individuals that could result in an alexithymic presentation are provided. This analysis helps situate the alexithymia construct in a broader behavior analytic understanding of emotions. Finally this paper outlines implications for assessment and treatment, which involve influencing discriminative and consequential interpersonal stimulus conditions to shape verbal behavior about emotions. PMID:25473602

  4. A complete analytical solution of the Fokker-Planck and balance equations for nucleation and growth of crystals

    NASA Astrophysics Data System (ADS)

    Makoveeva, Eugenya V.; Alexandrov, Dmitri V.

    2018-01-01

    This article is concerned with a new analytical description of nucleation and growth of crystals in a metastable mushy layer (supercooled liquid or supersaturated solution) at the intermediate stage of phase transition. The model under consideration, consisting of a non-stationary integro-differential system of governing equations for the distribution function and the metastability level, is analytically solved by means of the saddle-point technique for the Laplace-type integral in the case of arbitrary nucleation kinetics and time-dependent heat or mass sources in the balance equation. We demonstrate that the time-dependent distribution function approaches the stationary profile in the course of time. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.
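
    The saddle-point (Laplace) technique named above has the generic textbook form below, where λ is the large parameter, f and g are smooth placeholder functions (not the paper's specific kinetic quantities), and x0 is the interior maximum of f with f'(x0) = 0 and f''(x0) < 0.

    ```latex
    I(\lambda)=\int_a^b g(x)\,e^{\lambda f(x)}\,dx
    \;\sim\; g(x_0)\,e^{\lambda f(x_0)}\sqrt{\frac{2\pi}{\lambda\,\lvert f''(x_0)\rvert}},
    \qquad \lambda\to\infty.
    ```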

  5. Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions

    NASA Technical Reports Server (NTRS)

    Balmes, Etienne

    1993-01-01

    An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.

  6. A results-based process for evaluation of diverse visual analytics tools

    NASA Astrophysics Data System (ADS)

    Rubin, Gary; Berger, David H.

    2013-05-01

    With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together, are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.

  7. General Solution of the Rayleigh Equation for the Description of Bubble Oscillations Near a Wall

    NASA Astrophysics Data System (ADS)

    Garashchuk, Ivan; Sinelshchikov, Dmitry; Kudryashov, Nikolay

    2018-02-01

    We consider a generalization of the Rayleigh equation for the description of the dynamics of a spherical gas bubble oscillating near an elastic or rigid wall. We show that in the non-dissipative case, i.e. neglecting the liquid viscosity and compressibility, it is possible to construct the general analytical solution of this equation. The corresponding general solution is expressed via the Weierstrass elliptic function. We analyze the dependence of this solution properties on the physical parameters.
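
    For orientation, the classical Rayleigh equation for a spherical bubble in an unbounded, inviscid, incompressible liquid is reproduced below (R bubble radius, ρ liquid density, p_g gas pressure at the bubble wall, p_∞ far-field pressure); the wall-generalized equation studied in the paper adds further terms not shown here.

    ```latex
    R\ddot{R} + \frac{3}{2}\dot{R}^{2} \;=\; \frac{p_g(R) - p_\infty}{\rho}.
    ```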

  8. Accurate wavelengths for X-ray spectroscopy and the NIST hydrogen-like ion database

    NASA Astrophysics Data System (ADS)

    Kotochigova, S. A.; Kirby, K. P.; Brickhouse, N. S.; Mohr, P. J.; Tupitsyn, I. I.

    2005-06-01

    We have developed an ab initio multi-configuration Dirac-Fock-Sturm method for the precise calculation of X-ray emission spectra, including energies, transition wavelengths and transition probabilities. The calculations are based on non-orthogonal basis sets, generated by solving the Dirac-Fock and Dirac-Fock-Sturm equations. Inclusion of Sturm functions into the basis set provides an efficient description of correlation effects in highly charged ions and fast convergence of the configuration interaction procedure. A second part of our study is devoted to developing a theoretical procedure and creating an interactive database to generate energies and transition frequencies for hydrogen-like ions. This procedure is highly accurate and based on current knowledge of the relevant theory, which includes relativistic, quantum electrodynamic, recoil, and nuclear size effects.
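
    As background, the leading relativistic contribution to the levels of a point-nucleus hydrogen-like ion is the Dirac energy below (n principal quantum number, j total angular momentum, α fine-structure constant); the QED, recoil, and nuclear-size corrections mentioned above are added on top of this. This is a standard reference formula, not the database's full prescription.

    ```latex
    E_{nj} \;=\; m_e c^{2}\left[1+\left(\frac{Z\alpha}{\,n-j-\tfrac{1}{2}+\sqrt{\left(j+\tfrac{1}{2}\right)^{2}-(Z\alpha)^{2}}\,}\right)^{2}\right]^{-1/2}.
    ```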

  9. Investigation of the "true" extraction recovery of analytes from multiple types of tissues and its impact on tissue bioanalysis using two model compounds.

    PubMed

    Yuan, Long; Ma, Li; Dillon, Lisa; Fancher, R Marcus; Sun, Huadong; Zhu, Mingshe; Lehman-McKeeman, Lois; Aubry, Anne-Françoise; Ji, Qin C

    2016-11-16

    LC-MS/MS has been widely applied to the quantitative analysis of tissue samples. However, one key remaining issue is that the extraction recovery of analyte from spiked tissue calibration standard and quality control samples (QCs) may not accurately represent the "true" recovery of analyte from incurred tissue samples. This may affect the accuracy of LC-MS/MS tissue bioanalysis. Here, we investigated whether the recovery determined using tissue QCs by LC-MS/MS can accurately represent the "true" recovery from incurred tissue samples using two model compounds: BMS-986104, an S1P1 receptor modulator drug candidate, and its phosphate metabolite, BMS-986104-P. We first developed a novel acid- and surfactant-assisted protein precipitation method for the extraction of BMS-986104 and BMS-986104-P from rat tissues, and determined their recoveries using tissue QCs by LC-MS/MS. We then used radioactive incurred samples from rats dosed with 3H-labeled BMS-986104 to determine the absolute total radioactivity recovery in six different tissues. The recoveries determined using tissue QCs and incurred samples matched each other very well. The results demonstrated that, in this assay, tissue QCs accurately represented the incurred tissue samples to determine the "true" recovery, and the LC-MS/MS assay was accurate for tissue bioanalysis. Another aspect we investigated was how the tissue QCs should be prepared to better represent the incurred tissue samples. We compared two different QC preparation methods (analyte spiked in tissue homogenates or in intact tissues) and demonstrated that the two methods showed no significant difference when a good sample preparation was in place. The developed assay showed excellent accuracy and precision, and was successfully applied to the quantitative determination of BMS-986104 and BMS-986104-P in tissues in a rat toxicology study. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Lung Sliding Identification Is Less Accurate in the Left Hemithorax.

    PubMed

    Piette, Eric; Daoust, Raoul; Lambert, Jean; Denault, André

    2017-02-01

    The aim of our study was to compare the accuracy of lung sliding identification for the left and right hemithoraxes, using prerecorded short US sequences, in a group of physicians with mixed clinical and US training. A total of 140 US sequences of a complete respiratory cycle were recorded in the operating room. Each sequence was divided in two, yielding 140 sequences of present lung sliding and 140 sequences of absent lung sliding. Of these 280 sequences, 40 were randomly repeated to assess intraobserver variability, for a total of 320 sequences. Descriptive data, the mean accuracy of each participant, as well as the rate of correct answers for each of the original 280 sequences were tabulated and compared for different subgroups of clinical and US training. A video with examples of present and absent lung sliding and a lung pulse was shown before testing. Two sessions were planned to facilitate the participation of 75 clinicians. In the first group, the rate of accurate lung sliding identification was lower in the left hemithorax than in the right (67.0% [interquartile range (IQR), 43.0-83.0] versus 80.0% [IQR, 57.0-95.0]; P < .001). In the second group, the rate of accurate lung sliding identification was also lower in the left hemithorax than in the right (76.3% [IQR, 42.9-90.9] versus 88.7% [IQR, 63.1-96.9]; P = .001). Mean accuracy rates were 67.5% (95% confidence interval, 65.7-69.4) in the first group and 73.1% (95% confidence interval, 70.7-75.5) in the second (P < .001). Lung sliding identification seems less accurate in the left hemithorax when using a short US examination. This study was done on recorded US sequences and should be repeated in a live clinical situation to confirm our results. © 2016 by the American Institute of Ultrasound in Medicine.

  11. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  16. An analytical coarse-graining method which preserves the free energy, structural correlations, and thermodynamic state of polymer melts from the atomistic to the mesoscale.

    PubMed

    McCarty, J; Clark, A J; Copperman, J; Guenza, M G

    2014-05-28

    Structural and thermodynamic consistency of coarse-graining models across multiple length scales is essential for the predictive role of multi-scale modeling and molecular dynamic simulations that use mesoscale descriptions. Our approach is a coarse-grained model based on integral equation theory, which can represent polymer chains at variable levels of chemical details. The model is analytical and depends on molecular and thermodynamic parameters of the system under study, as well as on the direct correlation function in the k → 0 limit, c0. A numerical solution to the PRISM integral equations is used to determine c0, by adjusting the value of the effective hard sphere diameter, dHS, to agree with the predicted equation of state. This single quantity parameterizes the coarse-grained potential, which is used to perform mesoscale simulations that are directly compared with atomistic-level simulations of the same system. We test our coarse-graining formalism by comparing structural correlations, isothermal compressibility, equation of state, Helmholtz and Gibbs free energies, and potential energy and entropy using both united atom and coarse-grained descriptions. We find quantitative agreement between the analytical formalism for the thermodynamic properties, and the results of Molecular Dynamics simulations, independent of the chosen level of representation. In the mesoscale description, the potential energy of the soft-particle interaction becomes a free energy in the coarse-grained coordinates which preserves the excess free energy from an ideal gas across all levels of description. The structural consistency between the united-atom and mesoscale descriptions means the relative entropy between descriptions has been minimized without any variational optimization parameters. The approach is general and applicable to any polymeric system in different thermodynamic conditions.
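
    For a simple one-component fluid, the k → 0 direct correlation function c0 that parameterizes the coarse-grained potential is tied to the isothermal compressibility through the Ornstein-Zernike compressibility equation below; the polymer (PRISM) version used in the paper carries additional chain-length factors, so this is only the schematic relation.

    ```latex
    \rho\,k_B T\,\kappa_T \;=\; \frac{1}{1-\rho\,\hat{c}(k\to 0)} \;=\; \frac{1}{1-\rho\,c_0}.
    ```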

  17. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  18. Post-analytical stability of 23 common chemistry and immunochemistry analytes in incurred samples.

    PubMed

    Nielsen, Betina Klint; Frederiksen, Tina; Friis-Hansen, Lennart; Larsen, Pia Bükmann

    2017-12-01

    Storage of blood samples after centrifugation, decapping and initial sampling allows ordering of additional blood tests. The pre-analytical stability of biochemistry and immunochemistry analytes has been studied in detail, but little is known about the post-analytical stability in incurred samples. We examined the stability of 23 routine analytes on the Dimension Vista® (Siemens Healthineers, Denmark): 42-60 routine samples in lithium-heparin gel tubes (Vacutainer, BD, USA) were centrifuged at 3000×g for 10 min. Immediately after centrifugation, the initial concentrations of the analytes were measured in duplicate (t=0). The tubes were stored decapped at room temperature and re-analyzed after 2, 4, 6, 8 and 10 h in singletons. The concentrations from reanalysis were normalized to the initial concentration (t=0). Internal acceptance criteria for bias and total error were used to determine the stability of each analyte. Additionally, evaporation from the decapped blood collection tubes and the residual platelet count in the plasma after centrifugation were quantified. We report a post-analytical stability of most routine analytes of ≥8 h and therefore, with few exceptions, suggest a standard 8-hour time limit for reordering and reanalysis of analytes in incurred samples. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  19. A Semi-Analytical Solution to Time Dependent Groundwater Flow Equation Incorporating Stream-Wetland-Aquifer Interactions

    NASA Astrophysics Data System (ADS)

    Boyraz, Uǧur; Melek Kazezyılmaz-Alhan, Cevza

    2017-04-01

    Groundwater is a vital element of the hydrologic cycle, and the analytical and numerical solutions of different forms of the groundwater flow equation play an important role in understanding the hydrological behavior of subsurface water. The interaction between groundwater and surface water bodies can be determined using these solutions. In this study, new hypothetical approaches are applied to a groundwater flow system in order to contribute to studies on surface water/groundwater interactions. A time-dependent problem is considered in a 2-dimensional stream-wetland-aquifer system. A sloped stream boundary is used to represent the interaction between stream and aquifer, and the remaining aquifer boundaries are assumed to be no-flux boundaries. In addition, a wetland is considered as a surface water body lying over the whole aquifer. The effect of the interaction between the wetland and the aquifer is taken into account with a source/sink term in the groundwater flow equation, and the interaction flow is calculated using Darcy's approach. A semi-analytical solution is developed for the 2-dimensional groundwater flow equation in five steps. First, Laplace and Fourier cosine transforms are employed to obtain the general solution in the Fourier and Laplace domains. Then, the initial and boundary conditions are applied to obtain the particular solution. Finally, the inverse Fourier transform is carried out analytically and the inverse Laplace transform numerically to obtain the final solution in the space and time domains, respectively. In order to verify the semi-analytical solution, an explicit finite difference algorithm is developed, and analytical and numerical solutions are compared for synthetic examples. The comparison shows that the analytical solution gives accurate results.
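
    A generic form of the governing equation described above is sketched below, with an areal source/sink term representing wetland leakage computed from Darcy's law across the wetland bed (h head, S storage coefficient, T transmissivity, K' and b' the bed conductivity and thickness, h_w the wetland stage); the exact boundary terms and notation of the study may differ.

    ```latex
    S\,\frac{\partial h}{\partial t}
    \;=\; T\left(\frac{\partial^{2} h}{\partial x^{2}}+\frac{\partial^{2} h}{\partial y^{2}}\right)
    + W(x,y,t),
    \qquad
    W \;=\; \frac{K'}{b'}\,\bigl(h_{w}-h\bigr).
    ```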

  20. Chromatic refraction with global ozone monitoring by occultation of stars. I. Description and scintillation correction.

    PubMed

    Dalaudier, F; Kan, V; Gurvich, A S

    2001-02-20

    We describe refractive and chromatic effects, both regular and random, that occur during star occultations by the Earth's atmosphere. The scintillation that results from random density fluctuations, as well as the consequences of regular chromatic refraction, is qualitatively described. The resultant chromatic scintillation will produce random features on the Global Ozone Monitoring by Occultation of Stars (GOMOS) spectrometer, with an amplitude comparable with that of some of the real absorbing features that result from atmospheric constituents. A correction method that is based on the use of fast photometer signals is described, and its efficiency is discussed. We give a qualitative (although accurate) description of the phenomena, including numerical values when needed. Geometrical optics and the phase-screen approximation are used to keep the description simple.

  1. Analytical expressions for the nonlinear interference in dispersion managed transmission coherent optical systems

    NASA Astrophysics Data System (ADS)

    Qiao, Yaojun; Li, Ming; Yang, Qiuhong; Xu, Yanfei; Ji, Yuefeng

    2015-01-01

    Closed-form expressions of nonlinear interference of dense wavelength-division-multiplexed (WDM) systems with dispersion managed transmission (DMT) are derived. We carry out a simulative validation by addressing an ample and significant set of Nyquist-WDM systems based on polarization multiplexed quadrature phase-shift keying (PM-QPSK) subcarriers at a baud rate of 32 Gbaud per channel. Simulation results show that the simple closed-form analytical expressions provide an effective tool for the quick and accurate prediction of system performance in DMT coherent optical systems.

  2. Compensation method for obtaining accurate, sub-micrometer displacement measurements of immersed specimens using electronic speckle interferometry.

    PubMed

    Fazio, Massimo A; Bruno, Luigi; Reynaud, Juan F; Poggialini, Andrea; Downs, J Crawford

    2012-03-01

    We proposed and validated a compensation method that accounts for the optical distortion inherent in measuring displacements on specimens immersed in aqueous solution. A spherically-shaped rubber specimen was mounted and pressurized on a custom apparatus, with the resulting surface displacements recorded using electronic speckle pattern interferometry (ESPI). Point-to-point light direction computation is achieved by a ray-tracing strategy coupled with customized B-spline-based analytical representation of the specimen shape. The compensation method reduced the mean magnitude of the displacement error induced by the optical distortion from 35% to 3%, and ESPI displacement measurement repeatability showed a mean variance of 16 nm at the 95% confidence level for immersed specimens. The ESPI interferometer and numerical data analysis procedure presented herein provide reliable, accurate, and repeatable measurement of sub-micrometer deformations obtained from pressurization tests of spherically-shaped specimens immersed in aqueous salt solution. This method can be used to quantify small deformations in biological tissue samples under load, while maintaining the hydration necessary to ensure accurate material property assessment.

  3. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6
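
    A minimal numerical sketch of the hybrid idea, assuming the ensemble samples of the low-dimensional (observed) variables and the conditional Gaussian statistics of the high-dimensional (unobserved) variables are already available: each ensemble member contributes a Gaussian kernel in the low-dimensional subspace times its conditional Gaussian in the high-dimensional subspace, and the full PDF is their average. Array names, the isotropic kernel, and the fixed bandwidth are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def hybrid_pdf(x_lo, x_hi, ens_lo, cond_means, cond_covs, bandwidth=0.5):
        """Evaluate a hybrid mixture PDF at the point (x_lo, x_hi).

        ens_lo     : (N, d_lo) ensemble samples of the low-dimensional variables
        cond_means : (N, d_hi) conditional Gaussian means of the high-dim variables
        cond_covs  : (N, d_hi, d_hi) conditional Gaussian covariances
        Each member contributes a Gaussian kernel (low-dim part) times its
        conditional Gaussian (high-dim part); the full PDF is the average.
        """
        N, d_lo = ens_lo.shape
        kernel_cov = bandwidth**2 * np.eye(d_lo)
        total = 0.0
        for i in range(N):
            k = multivariate_normal.pdf(x_lo, mean=ens_lo[i], cov=kernel_cov)
            g = multivariate_normal.pdf(x_hi, mean=cond_means[i], cov=cond_covs[i])
            total += k * g
        return total / N

    # Toy usage: 100 ensemble members, 1 observed and 2 unobserved dimensions
    rng = np.random.default_rng(0)
    ens_lo = rng.normal(size=(100, 1))
    cond_means = rng.normal(size=(100, 2))
    cond_covs = np.tile(np.eye(2), (100, 1, 1))
    print(hybrid_pdf(np.array([0.0]), np.array([0.0, 0.0]), ens_lo, cond_means, cond_covs))
    ```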

  4. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to

  5. Clustering in analytical chemistry.

    PubMed

    Drab, Klaudia; Daszykowski, Michal

    2014-01-01

    Data clustering plays an important role in the exploratory analysis of analytical data, and the use of clustering methods has been acknowledged in different fields of science. In this paper, principles of data clustering are presented with a direct focus on clustering of analytical data. The role of the clustering process in the analytical workflow is underlined, and its potential impact on the analytical workflow is emphasized.

  6. Analytical description of high-aperture STED resolution with 0–2π vortex phase modulation

    PubMed Central

    Xie, Hao; Liu, Yujia; Jin, Dayong; Santangelo, Philip J.; Xi, Peng

    2014-01-01

    Stimulated emission depletion (STED) can achieve optical superresolution, with the optical diffraction limit broken by the suppression on the periphery of the fluorescent focal spot. Previously, it was generally accepted from experiments that the resolution follows an inverse-square-root relationship with the STED power, but with arbitrary coefficients in the expression. In this paper, we have removed the arbitrary coefficients by exploring the relationship between the STED power and the achievable resolution from vector optical theory for the widely used 0–2π vortex phase modulation. The electromagnetic fields of the focal region of a high numerical aperture objective are calculated and approximated into polynomials of radius in the focal plane, and an analytical expression for the resolution as a function of the STED intensity has been derived. As a result, the resolution can be estimated directly from the measurement of the saturation power of the dye and the STED power applied in the region of high STED power. PMID:24323224
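
    The commonly quoted inverse-square-root scaling referred to above is, in its simplest form (d resolution, λ wavelength, NA numerical aperture, I depletion intensity, I_s saturation intensity):

    ```latex
    d \;\approx\; \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1+I/I_{s}}}.
    ```

    The paper's contribution is to replace the arbitrary coefficient in such expressions with one derived from vectorial focusing theory for the 0–2π vortex phase mask.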

  7. Analytical Ferrography Standardization.

    DTIC Science & Technology

    1982-01-01

    Analytical Ferrography Standardization, final report (AD-A116 508), MTI Technical Report No. 82TRS6, January 1982, by P. B. Senholzi and A. S. Maciejewski, Applications Engineering, Mechanical Technology Incorporated, Research and Development Division, Latham, NY.

  8. Expressions of homosexuality and the perspective of analytical psychology.

    PubMed

    Miller, Barry

    2010-02-01

    Homosexuality, as a description and category of human experience, has a long, complicated and problematic history. It has been utilized as a carrier of theological, political, and psychological ideologies of all sorts, with varying and contradictory influences into the lives of us all. Analytical psychology, emphasizing the purposiveness found in manifestations of the psyche, offers a unique approach to this subject. The focus moves from causation to the meanings embedded in erotic expressions, fantasies, and dreams. Consequently, homosexuality loses its predetermined meaning and finds its definition in the psychology of the individual. Categories of 'sexual orientation' may defend against personal analysis, deflecting the essential fluidity and mystery of Eros. This is illustrated with samples of the variety found in 'homosexual' material.

  9. Influence of Analyte Concentration on Stability Constant Values Determined by Capillary Electrophoresis.

    PubMed

    Sursyakova, Viktoria V; Burmakina, Galina V; Rubaylo, Anatoly I

    2016-08-01

    The influence of the analyte concentration, relative to the concentration of a charged ligand in the background electrolyte (BGE), on the measured values of electrophoretic mobilities and stability constants (association, binding or formation constants) is studied using capillary electrophoresis (CE) and a dynamic mathematical simulator of CE. The study is performed using labile complexes (with fast kinetics) of iron(III) and 5-sulfosalicylate ions (ISC) as an example. It is shown that because the ligand concentration in the analyte zone is not equal to that in the BGE, considerable changes in the migration times and electrophoretic mobilities are observed, resulting in systematic errors in the stability constant values. Of crucial significance is the slope of the dependence of the electrophoretic mobility decrease on the ligand equilibrium concentration. Without prior information on this dependence, to accurately evaluate the stability constants for similar systems the total ligand concentration must be at least 50-100 times higher than the total concentration of analyte. Experimental ISC peak fronting, and the difference between the direction of the experimental pH dependence of the electrophoretic mobility decrease and the mathematical simulation, suggest the presence of capillary wall interactions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
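
    The measurements rest on the standard 1:1 affinity-CE relation below, in which the effective mobility of a labile analyte is the population-weighted average of its free and complexed forms; K is the stability constant and [L] the ligand concentration actually present in the analyte zone, which is exactly why a zone [L] that differs from the BGE value biases the fitted K. Symbols are generic, not the authors' notation.

    ```latex
    \mu_{\mathrm{eff}} \;=\; \frac{\mu_{\mathrm{free}} + \mu_{\mathrm{complex}}\,K[\mathrm{L}]}{1 + K[\mathrm{L}]}.
    ```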

  10. An analytic, approximate method for modeling steady, three-dimensional flow to partially penetrating wells

    NASA Astrophysics Data System (ADS)

    Bakker, Mark

    2001-05-01

    An analytic, approximate solution is derived for the modeling of three-dimensional flow to partially penetrating wells. The solution is written in terms of a correction on the solution for a fully penetrating well and is obtained by dividing the aquifer up, locally, into a number of aquifer layers. The resulting system of differential equations is solved by application of the theory for multiaquifer flow. The presented approach has three major benefits. First, the solution may be applied to any groundwater model that can simulate flow to a fully penetrating well; the solution may be superimposed onto the solution for the fully penetrating well to simulate the local three-dimensional drawdown and flow field. Second, the approach is applicable to isotropic, anisotropic, and stratified aquifers and to both confined and unconfined flow. Third, the solution extends over a small area around the well only; outside this area the three-dimensional effect of the partially penetrating well is negligible, and no correction to the fully penetrating well is needed. A number of comparisons are made to existing three-dimensional, analytic solutions, including radial confined and unconfined flow and a well in a uniform flow field. It is shown that a subdivision into three layers is accurate for many practical cases; very accurate solutions are obtained with more layers.

  11. Characterization of Compton-scatter imaging with an analytical simulation method

    PubMed Central

    Jones, Kevin C; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V; Chu, James C H

    2018-01-01

    By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min−1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible

  12. Characterization of Compton-scatter imaging with an analytical simulation method

    NASA Astrophysics Data System (ADS)

    Jones, Kevin C.; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V.; Chu, James C. H.

    2018-01-01

    By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140-220 keV, and 40-50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min-1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible

  13. Ecosystem history of South Florida; Biscayne Bay sediment core descriptions

    USGS Publications Warehouse

    Ishman, S.E.

    1997-01-01

    The 'Ecosystem History of Biscayne Bay and the southeast Coast' project of the U.S. Geological Survey is part of a multi-disciplinary effort that includes Florida Bay and the Everglades to provide paleoecologic reconstructions for the south Florida region. Reconstructions of past salinity, nutrients, substrate, and water quality are needed to determine ecosystem variability due to both natural and human-induced causes. Our understanding of the relations between the south Florida ecosystem and introduced forces will allow managers to make informed decisions regarding the south Florida ecosystem restoration and monitoring. The record of past ecosystem conditions can be found in shallow sediment cores. This U.S. Geological Survey Open-File Report describes six shallow sediment cores collected from Biscayne Bay. The cores described herein are being processed for a variety of analytical procedures, and this provides the descriptive framework for future analyses of the included cores. This report is preliminary and has not been reviewed for conformity with U.S. Geological Survey editorial standards or with the North American Stratigraphic Code. Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

  14. Validation of the enthalpy method by means of analytical solution

    NASA Astrophysics Data System (ADS)

    Kleiner, Thomas; Rückamp, Martin; Bondzio, Johannes; Humbert, Angelika

    2014-05-01

    In recent years, numerical simulations have moved from explicitly describing the cold-temperate transition surface (CTS) towards an enthalpy description, which avoids incorporating a singular surface inside the model (Aschwanden et al., 2012). In enthalpy methods the CTS is represented as a level set of the enthalpy state variable. This method has several numerical and practical advantages (e.g. representation of the full energy by one scalar field, no restriction on the topology and shape of the CTS). The approach is rather new in glaciology and, to our knowledge, has not been verified and validated against analytical solutions. Unfortunately, we still lack analytical solutions for sufficiently complex thermo-mechanically coupled polythermal ice flow. However, we present two experiments to test the implementation of the enthalpy equation and the corresponding boundary conditions. The first experiment tests in particular the functionality of the boundary condition scheme and the corresponding basal melt rate calculation. Depending on the thermal situation that occurs at the base, the numerical code may have to switch to another boundary type (from Neumann to Dirichlet or vice versa). The main idea of this set-up is to test reversibility during transients: a formerly cold ice body that runs through a warmer period, with an associated build-up of a liquid water layer at the base, must be able to return to its initial steady state. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for different quantities during distinct stages of the simulation. The second experiment tests the positioning of the internal CTS in a parallel-sided polythermal slab. We compare our simulation results to the analytical solution proposed by Greve and Blatter (2009). Results from three different ice flow models (COMIce, ISSM, TIMFD3) are presented.
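
    For reference, the enthalpy state variable is typically defined piecewise, roughly as in Aschwanden et al. (2012) (c_i specific heat capacity of ice, L latent heat of fusion, ω liquid-water fraction, T_m(p) pressure melting point, T_ref a reference temperature); exact conventions vary between the three codes compared here.

    ```latex
    E(T,\omega,p) \;=\;
    \begin{cases}
    c_i\,\bigl(T - T_{\mathrm{ref}}\bigr), & T < T_m(p)\ \text{(cold ice)},\\[4pt]
    c_i\,\bigl(T_m(p) - T_{\mathrm{ref}}\bigr) + \omega L, & T = T_m(p)\ \text{(temperate ice)}.
    \end{cases}
    ```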

  15. Extended internal standard method for quantitative 1H NMR assisted by chromatography (EIC) for analyte overlapping impurity on 1H NMR spectra.

    PubMed

    Saito, Naoki; Kitamaki, Yuko; Otsuka, Satoko; Yamanaka, Noriko; Nishizaki, Yuzo; Sugimoto, Naoki; Imura, Hisanori; Ihara, Toshihide

    2018-07-01

    We devised a novel extended internal standard method of quantitative 1H NMR (qNMR) assisted by chromatography (EIC) that accurately quantifies 1H signal areas of analytes, even when the chemical shifts of the impurity and analyte signals overlap completely. When impurity and analyte signals overlap in the 1H NMR spectrum but can be separated in a chromatogram, the response ratio of the impurity and an internal standard (IS) can be obtained from the chromatogram. If the response ratio can be converted into the 1H signal area ratio of the impurity and the IS, the 1H signal area of the analyte can be evaluated accurately by mathematically correcting the contribution of the impurity's 1H signal area overlapping the analyte in the 1H NMR spectrum. In this study, gas chromatography and liquid chromatography were used. We used 2-chlorophenol and 4-chlorophenol containing phenol as an impurity as examples in which impurity and analyte signals overlap, to validate and demonstrate the EIC, respectively. Because the 1H signals of 2-chlorophenol and phenol can be separated in specific alkaline solutions, 2-chlorophenol is suitable for validating the EIC by comparing the analytical value obtained by the EIC with that obtained by qNMR alone under the alkaline condition. By the EIC, the purity of 2-chlorophenol was obtained with a relative expanded uncertainty (k = 2) of 0.24%. The purity matched that obtained under the alkaline condition. Furthermore, the EIC was also validated by evaluating the phenol content with the absolute calibration curve method by gas chromatography. Finally, we demonstrated that the EIC made it possible to evaluate the purity of 4-chlorophenol, whose 1H signal could not be separated from that of phenol under any condition, with a relative expanded uncertainty (k = 2) of 0.22%. Copyright © 2018 Elsevier B.V. All rights reserved.
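
    Written out under the assumption that the chromatographic response ratio has been converted into a 1H signal-area ratio, the correction amounts to subtracting the impurity's inferred area before applying the usual internal-standard qNMR purity equation (A signal areas, N numbers of contributing protons, M molar masses, m weighed masses, P purities); the symbols are generic rather than the authors' notation.

    ```latex
    A_{\mathrm{analyte}} \;=\; A_{\mathrm{overlap}} - A_{\mathrm{impurity}},
    \qquad
    P_{\mathrm{analyte}} \;=\; \frac{A_{\mathrm{analyte}}}{A_{\mathrm{IS}}}\cdot
    \frac{N_{\mathrm{IS}}}{N_{\mathrm{analyte}}}\cdot
    \frac{M_{\mathrm{analyte}}}{M_{\mathrm{IS}}}\cdot
    \frac{m_{\mathrm{IS}}}{m_{\mathrm{sample}}}\cdot P_{\mathrm{IS}}.
    ```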

  16. Analytical solutions of one-dimensional multispecies reactive transport in a permeable reactive barrier-aquifer system

    NASA Astrophysics Data System (ADS)

    Mieles, John; Zhan, Hongbin

    2012-06-01

    The permeable reactive barrier (PRB) remediation technology has proven to be more cost-effective than conventional pump-and-treat systems, and has demonstrated the ability to rapidly reduce the concentrations of specific chemicals of concern (COCs) by up to several orders of magnitude in some scenarios. This study derives new steady-state analytical solutions to multispecies reactive transport in a PRB-aquifer (dual domain) system. The advantage of the dual domain model is that it can account for the potential existence of natural degradation in the aquifer when designing the required PRB thickness. The study focuses primarily on the steady-state analytical solutions of the tetrachloroethene (PCE) serial degradation pathway and secondly on the analytical solutions of the parallel degradation pathway. The solutions in this study can also be applied to other types of dual domain systems with distinct flow and transport properties. The steady-state analytical solutions are shown to be accurate and the numerical program RT3D is selected for comparison. The results of this study are novel in that the solutions provide improved modeling flexibility including: 1) every species can have unique first-order reaction rates and unique retardation factors, and 2) daughter species can be modeled with their individual input concentrations or solely as byproducts of the parent species. The steady-state analytical solutions exhibit a limitation that occurs when interspecies reaction rate factors equal each other, which results in undefined solutions. Excel spreadsheet programs were created to facilitate prompt application of the steady-state analytical solutions for both the serial and parallel degradation pathways.
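
    For orientation, the single-species building block behind such steady-state solutions is a textbook result rather than the authors' full coupled solution: for one-dimensional advection-dispersion with first-order decay, v dC/dx = D d²C/dx² − kC, with C(0) = C0 and C bounded as x → ∞, the concentration profile is, in LaTeX notation,

        C(x) \;=\; C_0 \exp\!\left[\frac{v - \sqrt{v^{2} + 4kD}}{2D}\,x\right].

    The serial- and parallel-chain solutions derived in the paper generalize this form to coupled species across the PRB and aquifer domains, each with its own reaction rate and retardation factor.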

  17. Analytical model for fast reconnection in large guide field plasma configurations

    NASA Astrophysics Data System (ADS)

    Simakov, A. N.; Chacón, L.; Grasso, D.; Borgogno, D.; Zocco, A.

    2009-11-01

    Significant progress in understanding magnetic reconnection without a guide field was made recently by deriving quantitatively accurate analytical models for reconnection in electron [1] and Hall [2] MHD. However, no such analytical model is available for reconnection with a guide field. Here, we derive such an analytical model for the large-guide-field, low-β, cold-ion fluid model [3] with electron inertia, ion viscosity μ, and resistivity η. We find that the reconnection is Sweet-Parker-like when the Sweet-Parker layer thickness δ_SP > (ρ_s^4 + d_e^4)^(1/4), with ρ_s and d_e the sound Larmor radius and the electron inertial length. Otherwise the reconnection changes character, resulting in reconnection rates E_z/B_x^2 ∼ √(2η/μ) (ρ_s^2 + d_e^2)/(ρ_s w), with B_x the upstream magnetic field and w the diffusion-region length. Unlike the zero-guide-field case, μ plays a crucial role in producing fast reconnection rates. If it represents the perpendicular viscosity [3], √(η/μ) ∼ √((m_e/m_i)(T_i/T_e)) and E_z becomes dissipation independent and therefore potentially fast. [1] L. Chacón, A. N. Simakov, and A. Zocco, PRL 99, 235001 (2007). [2] A. N. Simakov and L. Chacón, PRL 101, 105003 (2008). [3] D. Biskamp, Magnetic Reconnection in Plasmas, Cambridge University Press, 2000.

  18. Capturing Accurate and Useful Information on Medication-Related Telenursing Triage Calls.

    PubMed

    Lake, R; Li, L; Baysari, M; Byrne, M; Robinson, M; Westbrook, J I

    2016-01-01

    Registered nurses providing telenursing triage and advice services record information on the medication-related calls they handle. However, the quality and consistency of these data have rarely been examined. Our aim was to examine medication-related calls made to the healthdirect advice service in November 2014, to assess their basic characteristics and how the data entry format influenced the information collected and data consistency. Registered nurses selected the patient question type from a range of categories, and entered the medications involved in a free-text field. Medication names were manually extracted from the free-text fields. We also compared the selected patient question type with the free-text description of the call, in order to gauge data consistency. Results showed that nurses provided patients with advice on medication-related queries in a timely manner (median call duration of 9 minutes). From 1835 calls, we were able to identify and classify 2156 medications into 384 generic names. However, in 204 cases (11.2% of calls) no medication name was entered. A further 308 (15.0%) of the medication names entered were not identifiable. When we compared the selected patient question with the free-text description of calls, we found that these were consistent in 63.27% of cases. Telenursing triage and advice services provide a valuable resource to the public, offering quick and easily accessible advice. To support nurses in providing quality services and recording accurate information about these queries, an appropriate data entry format and design would be beneficial.

  19. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A.

    This document proposes to provide a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  20. An efficient method for the calculation of mean extinction. I - The analyticity of the complex extinction efficiency of homogeneous spheres

    NASA Astrophysics Data System (ADS)

    Xing, Zhang-Fan; Greenberg, J. M.

    1992-11-01

    Results of an investigation of the analyticity of the complex extinction efficiency Q-tilde(ext) in different parameter domains are presented. In the size parameter domain, x = omega(a/c), numerical Hilbert transforms are used to study the analyticity properties of Q-tilde(ext) for homogeneous spheres. Q-tilde(ext) is found to be analytic in the entire lower complex x-tilde-plane when the refractive index, m, is fixed as a real constant (pure scattering) or infinity (perfect conductor); poles, however, appear in the left side of the lower complex x-tilde-plane as m becomes complex. The computation of the mean extinction produced by an extended size distribution of particles may be conveniently and accurately approximated using only a few values of the complex extinction evaluated in the complex plane.

  1. Hydraulic modeling of riverbank filtration systems with curved boundaries using analytic elements and series solutions

    NASA Astrophysics Data System (ADS)

    Bakker, Mark

    2010-08-01

    A new analytic solution approach is presented for the modeling of steady flow to pumping wells near rivers in strip aquifers; all boundaries of the river and strip aquifer may be curved. The river penetrates the aquifer only partially and has a leaky stream bed. The water level in the river may vary spatially. Flow in the aquifer below the river is semi-confined while flow in the aquifer adjacent to the river is confined or unconfined and may be subject to areal recharge. Analytic solutions are obtained through superposition of analytic elements and Fourier series. Boundary conditions are specified at collocation points along the boundaries. The number of collocation points is larger than the number of coefficients in the Fourier series and a solution is obtained in the least squares sense. The solution is analytic while boundary conditions are met approximately. Very accurate solutions are obtained when enough terms are used in the series. Several examples are presented for domains with straight and curved boundaries, including a well pumping near a meandering river with a varying water level. The area of the river bottom where water infiltrates into the aquifer is delineated and the fraction of river water in the well water is computed for several cases.
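
    The least-squares collocation idea can be illustrated generically (this is schematic code, not Bakker's implementation): a truncated Fourier series with fewer coefficients than collocation points is fitted to prescribed boundary values, and the coefficients are obtained in the least-squares sense. The boundary "head" values below are made up.

        import numpy as np

        n_coef = 11                       # 1 constant + 5 cosine + 5 sine terms
        n_coll = 40                       # more collocation points than coefficients
        theta = np.linspace(0.0, 2.0 * np.pi, n_coll, endpoint=False)

        # Illustrative prescribed boundary values at the collocation points
        h_target = 10.0 + np.where(np.cos(theta) > 0.3, -2.0, 0.5)

        # Design matrix of the truncated Fourier series at the collocation points
        A = np.ones((n_coll, n_coef))
        for m in range(1, 6):
            A[:, 2 * m - 1] = np.cos(m * theta)
            A[:, 2 * m] = np.sin(m * theta)

        # Overdetermined system solved in the least-squares sense
        coef, residuals, rank, _ = np.linalg.lstsq(A, h_target, rcond=None)
        print("max boundary misfit:", np.max(np.abs(A @ coef - h_target)))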

  2. Comprehensive identification and structural characterization of target components from Gelsemium elegans by high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry based on accurate mass databases combined with MS/MS spectra.

    PubMed

    Liu, Yan-Chun; Xiao, Sa; Yang, Kun; Ling, Li; Sun, Zhi-Liang; Liu, Zhao-Ying

    2017-06-01

    This study reports an applicable analytical strategy for the comprehensive identification and structural characterization of target components from Gelsemium elegans using high-performance liquid chromatography quadrupole time-of-flight mass spectrometry (LC-QqTOF MS) based on accurate mass databases combined with MS/MS spectra. The databases created included accurate masses and elemental compositions of 204 components from Gelsemium and their structural data. The accurate MS and MS/MS spectra were acquired through a data-dependent auto MS/MS mode, followed by extraction of the potential compounds from the LC-QqTOF MS raw data of the sample. These were then matched against the databases to search for targeted components in the sample. The structures of the detected components were tentatively characterized by manually interpreting the accurate MS/MS spectra for the first time. A total of 57 components were successfully detected and structurally characterized from the crude extracts of G. elegans, although some isomers could not be differentiated. This analytical strategy is generic and efficient, avoids isolation and purification procedures, enables a comprehensive structural characterization of target components of Gelsemium, and would be widely applicable to complicated mixtures derived from Gelsemium preparations. Copyright © 2017 John Wiley & Sons, Ltd.
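
    The database-matching step can be sketched generically in Python (hypothetical compound names, masses, and tolerance; not the authors' software): database entries whose theoretical mass lies within a ppm tolerance of a measured value are flagged as candidate identifications, to be confirmed from the MS/MS spectra.

        # Generic accurate-mass matching sketch. Masses and names are placeholders,
        # not Gelsemium data.
        database = {
            "compound_A": 322.1681,
            "compound_B": 354.1943,
            "compound_C": 326.1630,
        }

        def match_accurate_mass(measured_mass, db, tol_ppm=5.0):
            """Return (name, ppm error) for entries within the ppm tolerance."""
            hits = []
            for name, theoretical in db.items():
                ppm_error = (measured_mass - theoretical) / theoretical * 1e6
                if abs(ppm_error) <= tol_ppm:
                    hits.append((name, round(ppm_error, 2)))
            return hits

        print(match_accurate_mass(322.1685, database))  # -> [('compound_A', 1.24)]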

  3. Testing and Analytical Modeling for Purging Process of a Cryogenic Line

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.

    2015-01-01

    To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of an upper stage, two test series were conducted. The test article, an inclined line 3.35 m long and 20 cm in diameter, was filled with liquid or gaseous hydrogen and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer computer program, was utilized to model and simulate selected tests. The test procedures, modeling descriptions, and results are presented in the following sections.

  4. Testing and Analytical Modeling for Purging Process of a Cryogenic Line

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.

    2013-01-01

    To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of an upper stage, two test series were conducted. The test article, an inclined line 3.35 m long and 20 cm in diameter, was filled with liquid or gaseous hydrogen and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer computer program, was utilized to model and simulate selected tests. The test procedures, modeling descriptions, and results are presented in the following sections.

  5. Real-Time and Accurate Identification of Single Oligonucleotide Photoisomers via an Aerolysin Nanopore.

    PubMed

    Hu, Zheng-Li; Li, Zi-Yuan; Ying, Yi-Lun; Zhang, Junji; Cao, Chan; Long, Yi-Tao; Tian, He

    2018-04-03

    Identification of the configuration for the photoresponsive oligonucleotide plays an important role in the ingenious design of DNA nanomolecules and nanodevices. Due to the limited resolution and sensitivity of present methods, it remains a challenge to determine the accurate configuration of photoresponsive oligonucleotides, much less a precise description of their photoconversion process. Here, we used an aerolysin (AeL) nanopore-based confined space for real-time determination and quantification of the absolute cis/trans configuration of each azobenzene-modified oligonucleotide (Azo-ODN) with single-molecule resolution. The two completely separated current distributions with narrow peak widths at half height (<0.62 pA) are assigned to the cis/trans-Azo-ODN isomers, respectively. Due to the high current sensitivity, each isomer of Azo-ODN could be undoubtedly identified, which gives accurate photostationary conversion values of 82.7% for trans-to-cis under UV irradiation and 82.5% for cis-to-trans under vis irradiation. Further real-time kinetic evaluation reveals that the photoresponsive rate constants of Azo-ODN from trans-to-cis and cis-to-trans are 0.43 and 0.20 min-1, respectively. This study will promote the sophisticated design of photoresponsive ODN to achieve an efficient and applicable photocontrollable process.
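
    Rate constants of this kind follow from simple first-order photoisomerization kinetics. The Python sketch below fits a first-order approach to the photostationary state on synthetic data only; the parameter values are chosen to resemble, not reproduce, the reported numbers.

        import numpy as np
        from scipy.optimize import curve_fit

        # First-order approach to the photostationary state:
        # cis_fraction(t) = f_ss * (1 - exp(-k * t))
        def cis_fraction(t, f_ss, k):
            return f_ss * (1.0 - np.exp(-k * t))

        t = np.linspace(0.0, 12.0, 13)                        # irradiation time, minutes
        rng = np.random.default_rng(0)
        y = cis_fraction(t, 0.83, 0.43) + rng.normal(0.0, 0.01, t.size)  # synthetic data

        (f_ss_fit, k_fit), _ = curve_fit(cis_fraction, t, y, p0=(0.8, 0.3))
        print(f"photostationary fraction ~ {f_ss_fit:.2f}, rate constant ~ {k_fit:.2f} 1/min")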

  6. On the accurate analysis of vibroacoustics in head insert gradient coils.

    PubMed

    Winkler, Simone A; Alejski, Andrew; Wade, Trevor; McKenzie, Charles A; Rutt, Brian K

    2017-10-01

    To accurately analyze vibroacoustics in MR head gradient coils. A detailed theoretical model for gradient coil vibroacoustics, including the first description and modeling of Lorentz damping, is introduced and implemented in a multiphysics software package. Numerical finite-element method simulations were used to establish a highly accurate vibroacoustic model in head gradient coils in detail, including the newly introduced Lorentz damping effect. Vibroacoustic coupling was examined through an additional modal analysis. Thorough experimental studies were used to validate simulations. Average experimental sound pressure levels (SPLs) and accelerations over the 0-3000 Hz frequency range were 97.6 dB, 98.7 dB, and 95.4 dB, as well as 20.6 g, 8.7 g, and 15.6 g for the X-, Y-, and Z-gradients, respectively. A reasonable agreement between simulations and measurements was achieved. Vibroacoustic coupling showed a coupled resonance at 2300 Hz for the Z-gradient that is responsible for a sharp peak and the highest SPL value in the acoustic spectrum. We have developed and used more realistic multiphysics simulation methods to gain novel insights into the underlying concepts for vibroacoustics in head gradient coils, which will permit improved analyses of existing gradient coils and novel SPL reduction strategies for future gradient coil designs. Magn Reson Med 78:1635-1645, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  7. The predictive accuracy of analytical formulas and semiclassical approaches for α decay half-lives of superheavy nuclei

    NASA Astrophysics Data System (ADS)

    Zhao, T. L.; Bao, X. J.; Guo, S. Q.

    2018-02-01

    Systematic calculations of the α decay half-lives are performed using three analytical formulas and two semiclassical approaches. For the three analytical formulas, the experimental α decay half-lives and Qα values of the 66 reference nuclei have been used to obtain the coefficients. We get only four adjustable parameters to describe α decay half-lives for even-even, odd-A, and odd-odd nuclei. A comparison between the calculated values from ten analytical formulas and experimental data shows that the new universal decay law (NUDL) formula is the most accurate one in reproducing the experimental α decay half-lives of the superheavy nuclei (SHN). Meanwhile, it is found that the experimental α decay half-lives of SHN are well reproduced by the Royer formula, although many parameters are involved. The results show that the NUDL formula and the generalized liquid drop model (GLDM2) with consideration of the preformation factor give fairly equivalent results for the superheavy nuclei.
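
    For context, analytical α decay half-life formulas of this kind typically have a Geiger-Nuttall/Viola-Seaborg-like structure relating log10 T1/2 to the decay energy Qα and the proton number Z; a generic form (not the NUDL or Royer expressions themselves) is, in LaTeX notation,

        \log_{10} T_{1/2} \;=\; \frac{aZ + b}{\sqrt{Q_\alpha}} \;+\; cZ \;+\; d,

    with coefficients a, b, c, d fitted to reference nuclei, often separately for even-even, odd-A, and odd-odd cases.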

  8. Terminal Area Productivity Airport Wind Analysis and Chicago O'Hare Model Description

    NASA Technical Reports Server (NTRS)

    Hemm, Robert; Shapiro, Gerald

    1998-01-01

    This paper describes two results from a continuing effort to provide accurate cost-benefit analyses of the NASA Terminal Area Productivity (TAP) program technologies. Previous tasks have developed airport capacity and delay models and completed preliminary cost benefit estimates for TAP technologies at 10 U.S. airports. This task covers two improvements to the capacity and delay models. The first improvement is the completion of a detailed model set for the Chicago O'Hare (ORD) airport. Previous analyses used a more general model to estimate the benefits for ORD. This paper contains a description of the model details with results corresponding to current conditions. The second improvement is the development of specific wind speed and direction criteria for use in the delay models to predict when the Aircraft Vortex Spacing System (AVOSS) will allow use of reduced landing separations. This paper includes a description of the criteria and an estimate of AVOSS utility for 10 airports based on analysis of 35 years of weather data.

  9. Microfluidic paper-based analytical devices for potential use in quantitative and direct detection of disease biomarkers in clinical analysis.

    PubMed

    Lim, Wei Yin; Goh, Boon Tong; Khor, Sook Mei

    2017-08-15

    Clinicians, working in the health-care diagnostic systems of developing countries, currently face the challenges of rising costs, increased number of patient visits, and limited resources. A significant trend is using low-cost substrates to develop microfluidic devices for diagnostic purposes. Various fabrication techniques, materials, and detection methods have been explored to develop these devices. Microfluidic paper-based analytical devices (μPADs) have gained attention for sensing multiplex analytes, confirming diagnostic test results, rapid sample analysis, and reducing the volume of samples and analytical reagents. μPADs, which can provide accurate and reliable direct measurement without sample pretreatment, can reduce patient medical burden and yield rapid test results, aiding physicians in choosing appropriate treatment. The objectives of this review are to provide an overview of the strategies used for developing paper-based sensors with enhanced analytical performances and to discuss the current challenges, limitations, advantages, disadvantages, and future prospects of paper-based microfluidic platforms in clinical diagnostics. μPADs, with validated and justified analytical performances, can potentially improve the quality of life by providing inexpensive, rapid, portable, biodegradable, and reliable diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Analytical modeling and analysis of magnetic field and torque for novel axial flux eddy current couplers with PM excitation

    NASA Astrophysics Data System (ADS)

    Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin

    2017-10-01

    Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with double conductor rotor are investigated. Given the drawback of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, the closed-form expressions of magnetic field, eddy current, electromagnetic force and torque for such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results. Besides, a prototype is manufactured and tested for the torque-speed characteristic.

  11. Analytical Chemistry (edited by R. Kellner, J.- M. Mermet, M. Otto, and H. M. Widmer)

    NASA Astrophysics Data System (ADS)

    Thompson, Reviewed By Robert Q.

    2000-04-01

    marginal notes. The text is divided into 5 parts (General Topics, Chemical Analysis, Physical Analysis, Computer-Based Analytical Chemistry, and Total Analysis Systems), 16 sections, and many chapters and subsections, all numbered and with headings for easy reference. The book provides comprehensive coverage of analytical science. Many curricula in North America cling to the tired notion of one semester of classical analytical (wet) chemistry followed by a second semester of instrumental analysis, and publishers continue to respond by publishing separate texts for each course. The Europeans, in contrast, have a text that bridges this artificial gap. Included are chapters and subsections on chemical equilibrium, electronic and vibrational spectroscopy, separations, and electrochemistry (found in most first courses in analytical chemistry). The authors also address atomic spectroscopy in all of its forms, luminescence, mass spectrometry, NMR spectrometry, surface analysis, thermal methods, activation analysis, and automated methods of analysis (found in most instrumental courses). Additional, uncommon chapters on chemical and biochemical sensors, immunoassay, chemometrics, miniaturized systems, and process analytical chemistry point toward the present and future of analytical science. The only glaring omission in comparison to other instrumental texts is in the area of measurement systems and electronics. No mention is made of the analytical laboratory, such as descriptions of glassware calibration and suggested experiments, as is found in most quantitative analysis texts in the U.S. The dangers in any multi-authored book include an uneven treatment of topics and a lack of cohesiveness and logical development of topics. I found some evidence of these problems in Analytical Chemistry. My first reaction to the Table of Contents and the grouping of chapters was "Where is ?" and "What about ?" While the order of topics in an analytical chemistry course always is open to debate

  12. Accurate analysis of parabens in human urine using isotope-dilution ultrahigh-performance liquid chromatography-high resolution mass spectrometry.

    PubMed

    Zhou, Hui-Ting; Chen, Hsin-Chang; Ding, Wang-Hsien

    2018-02-20

    An analytical method that utilizes isotope-dilution ultrahigh-performance liquid chromatography coupled with hybrid quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS, also called UHPLC-HRMS) was developed and validated to be highly precise and accurate for the detection of nine parabens (methyl-, ethyl-, propyl-, isopropyl-, butyl-, isobutyl-, pentyl-, hexyl-, and benzyl-parabens) in human urine samples. After sample preparation by ultrasound-assisted emulsification microextraction (USAEME), the extract was directly injected into the UHPLC-HRMS system. By using negative electrospray ionization in the multiple reaction monitoring (MRM) mode and measuring the peak area ratios of both the natural and the labeled analogues in the samples and calibration standards, the target analytes could be accurately identified and quantified. Another use of the labeled analogues was to correct for systematic errors associated with the analysis, such as the matrix effect and other variations. The limits of quantitation (LOQs) ranged from 0.3 to 0.6 ng/mL. High precision was obtained for both repeatability and reproducibility, ranging from 1 to 8%. High trueness (mean extraction recovery, also called accuracy) ranged from 93 to 107% at two concentration levels. According to preliminary results, the total concentrations of the four most frequently detected parabens (methyl-, ethyl-, propyl- and butyl-) ranged from 0.5 to 79.1 ng/mL in male urine samples, and from 17 to 237 ng/mL in female urine samples. Interestingly, the two infrequently detected pentyl- and hexyl-parabens were found in one of the male samples in this study. Copyright © 2017 Elsevier B.V. All rights reserved.
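
    The isotope-dilution quantitation itself reduces to ratio arithmetic between the native analyte and its labeled analogue. The sketch below uses made-up numbers and a single-point calculation with an assumed equal response factor; it is not the study's calibration procedure.

        # Single-point isotope-dilution sketch: quantify an analyte from the peak-area
        # ratio of the native compound to its isotope-labeled analogue spiked at a
        # known concentration. All numbers are illustrative.
        area_natural    = 15400.0    # peak area of the native analyte
        area_labeled    = 12100.0    # peak area of the labeled internal standard
        conc_labeled    = 10.0       # spiked concentration of the labeled analogue, ng/mL
        response_factor = 1.0        # assumed equal response of native and labeled forms

        conc_natural = (area_natural / area_labeled) * conc_labeled / response_factor
        print(f"estimated concentration: {conc_natural:.1f} ng/mL")   # ~12.7 ng/mL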

  13. Comparing numerical and analytic approximate gravitational waveforms

    NASA Astrophysics Data System (ADS)

    Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration

    2016-03-01

    A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.

  14. Towards accurate cosmological predictions for rapidly oscillating scalar fields as dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ureña-López, L. Arturo; Gonzalez-Morales, Alma X., E-mail: lurena@ugto.mx, E-mail: alma.gonzalez@fisica.ugto.mx

    2016-07-01

    As we are entering the era of precision cosmology, it is necessary to count on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background and at the linear order of perturbations. The new method makes use of appropriate angular variables that simplify the writing of the equations of motion, and which also show that the usual field variables play a secondary role in the cosmological dynamics. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is confirmed that there exists a Jeans wavenumber k_J, directly related to the suppression of linear perturbations at wavenumbers k > k_J, and which is verified to be k_J = a√(mH). We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have in the prediction of cosmological observables from scalar field dark matter models.

  15. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan

    PubMed Central

    2017-01-01

    %). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395

  16. Dependence of yield of nuclear track-biosensors on track radius and analyte concentration

    NASA Astrophysics Data System (ADS)

    García-Arellano, H.; Muñoz H., G.; Fink, D.; Vacik, J.; Hnatowicz, V.; Alfonta, L.; Kiv, A.

    2018-04-01

    In swift heavy ion track-based polymeric biosensor foils with incorporated enzymes one exploits the correlation between the analyte concentration and the sensor current, via the enrichment of charged enzymatic reaction products in the track's confinement. Here we study the influence of the etched track radius on the biosensor's efficiency. These sensors are analyte-specific only if both the track radii and the analyte concentration exceed certain threshold values of ∼15 nm and ∼10^-6 M (for glucose sensing), respectively. Below these limits the sensor signal stems un-specifically from any charge carrier. In its proper working regime, the inner track walls are smoothly covered by enzymes and the efficiency is practically radius independent. Theory shows that the measured current should be slightly sub-proportional to the analyte concentration; the measurements roughly reconfirm this. Narrower tracks (∼5-15 nm radius) with reduced enzyme coverage lead to decreasing efficiency. Tiny signals visible when the tracks are etched to effective radii between 0 and ∼5 nm are tentatively ascribed to enzymes bonded to surface-near nano-cracks in the polymer foil, resulting from its degradation due to aging, rather than to the tracks. A precondition for this study was the accurate determination of the etched track radii, which is possible only by a nanofluidic approach. This holds to some extent even for enzyme-covered tracks, though in this case most of the wall charges are compensated by enzyme bonding.

  17. Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin

    2018-07-01

    Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed through a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution of dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that functions using the former two variables (i.e., Hw and Vw) produce considerably more accurate results than those using the latter two (i.e., El and Ew). The semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonable results except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
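
    Empirical power-law regressions of this type are commonly fitted in log space. The Python sketch below fits a form Qp = a * Hw^b * Vw^c by linear least squares on synthetic placeholder data; it is not the paper's 40-case database or its final formulas.

        import numpy as np

        # Synthetic placeholder data, not the historical dam-failure database.
        Hw = np.array([10.0, 20.0, 35.0, 50.0, 70.0])   # water depth above breach invert [m]
        Vw = np.array([1e6, 5e6, 2e7, 8e7, 3e8])        # stored volume above invert [m^3]
        Qp = 0.2 * Hw**1.2 * Vw**0.45                   # synthetic "observed" peak outflows [m^3/s]

        # Fit log(Qp) = log(a) + b*log(Hw) + c*log(Vw) by ordinary least squares.
        X = np.column_stack([np.ones_like(Hw), np.log(Hw), np.log(Vw)])
        coef, *_ = np.linalg.lstsq(X, np.log(Qp), rcond=None)
        a, b, c = np.exp(coef[0]), coef[1], coef[2]
        print(f"Qp ~ {a:.2f} * Hw^{b:.2f} * Vw^{c:.2f}")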

  18. The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.

    NASA Astrophysics Data System (ADS)

    Loridan, V.; Ripoll, J. F.; De Vuyst, F.

    2017-12-01

    Much work has been done over the past 40 years to derive analytical solutions of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show an excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL ∝ L^6. We also give two expressions for the required number of eigenmodes N needed for an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to the Earth, and also to Jupiter and Saturn, are discussed.
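
    The governing equation is the standard radial diffusion equation with a loss term, which in the notation of the abstract can be written in LaTeX as

        \frac{\partial f}{\partial t} \;=\; L^{2}\,\frac{\partial}{\partial L}\!\left(\frac{D_{LL}}{L^{2}}\,\frac{\partial f}{\partial L}\right) \;-\; \frac{f}{\tau(L)}, \qquad D_{LL} \propto L^{6},

    with the electron lifetime τ(L) approximated in the proposed method by a piecewise constant function on the M subintervals.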

  19. Non-degenerate two-photon absorption in silicon waveguides. Analytical and experimental study

    DOE PAGES

    Zhang, Yanbing; Husko, Chad; Lefrancois, Simon; ...

    2015-06-22

    We theoretically and experimentally investigate the nonlinear evolution of two optical pulses in a silicon waveguide. We provide an analytic solution for the weak probe wave undergoing non-degenerate two-photon absorption (TPA) from the strong pump. At larger pump intensities, we employ a numerical solution to study the interplay between TPA and photo-generated free carriers. We develop a simple and powerful approach to extract and separate out the distinct loss contributions of TPA and free-carrier absorption from readily available experimental data. Our analysis accounts accurately for experimental results in silicon photonic crystal waveguides.

  20. On analytic modeling of lunar perturbations of artificial satellites of the earth

    NASA Astrophysics Data System (ADS)

    Lane, M. T.

    1989-06-01

    Two different procedures for analytically modeling the effects of the moon's direct gravitational force on artificial earth satellites are discussed from theoretical and numerical viewpoints. One is developed using classical series expansions of inclination and eccentricity for both the satellite and the moon, and the other employs the method of averaging. Both solutions are seen to have advantages, but it is shown that while the former is more accurate in special situations, the latter is quicker and more practical for the general orbit determination problem where observed data are used to correct the orbit in near real time.

  1. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    DOE PAGES

    Tao, Jianmin; Rappe, Andrew M.

    2016-01-20

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. As a result, inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
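
    The coefficients in question enter the standard long-range dispersion series, whose leading term follows from the Casimir-Polder integral over imaginary-frequency dynamic polarizabilities; in LaTeX notation (textbook forms, not the paper's model polarizability),

        E_{\mathrm{disp}}(R) \;\simeq\; -\frac{C_6}{R^{6}} - \frac{C_8}{R^{8}} - \frac{C_{10}}{R^{10}},
        \qquad
        C_6 \;=\; \frac{3}{\pi}\int_0^{\infty} \alpha_A(i\omega)\,\alpha_B(i\omega)\,d\omega,

    with the higher-order C8 and C10 involving dipole-quadrupole and higher multipole polarizabilities.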

  2. Pre-analytical issues in the haemostasis laboratory: guidance for the clinical laboratories.

    PubMed

    Magnette, A; Chatelain, M; Chatelain, B; Ten Cate, H; Mullier, F

    2016-01-01

    Ensuring quality has become a daily requirement in laboratories. In haemostasis, even more than in other disciplines of biology, quality is determined by a pre-analytical step that encompasses all procedures, starting with the formulation of the medical question, and includes patient preparation, sample collection, handling, transportation, processing, and storage until time of analysis. This step, based on a variety of manual activities, is the most vulnerable part of the total testing process and is a major component of the reliability and validity of results in haemostasis and constitutes the most important source of erroneous or un-interpretable results. Pre-analytical errors may occur throughout the testing process and arise from unsuitable, inappropriate or wrongly handled procedures. Problems may arise during the collection of blood specimens such as misidentification of the sample, use of inadequate devices or needles, incorrect order of draw, prolonged tourniquet placing, unsuccessful attempts to locate the vein, incorrect use of additive tubes, collection of unsuitable samples for quality or quantity, inappropriate mixing of a sample, etc. Some factors can alter the result of a sample constituent after collection during transportation, preparation and storage. Laboratory errors can often have serious adverse consequences. Lack of standardized procedures for sample collection accounts for most of the errors encountered within the total testing process. They can also have clinical consequences as well as a significant impact on patient care, especially those related to specialized tests as these are often considered as "diagnostic". Controlling pre-analytical variables is critical since this has a direct influence on the quality of results and on their clinical reliability. The accurate standardization of the pre-analytical phase is of pivotal importance for achieving reliable results of coagulation tests and should reduce the side effects of the influence

  3. An Accurate New Potential Function for Ground-State Xe_2 from UV and Virial Coefficient Data

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.; Mackie, J. Cameron; Chandrasekhar, Pragna

    2011-06-01

    Determining accurate analytic pair potentials for rare gas dimers has been a longstanding goal in molecular physics. However, most potential energy functions reported to date fail to optimally represent the available spectroscopic data, in spite of the fact that such data provide constraints of unparalleled precision on the attractive potential energy wells of these species. A recent study of ArXe showed that it is a straightforward matter to combine multi-isotopologue spectroscopic data (in that case, microwave and high-resolution UV measurements) and virial coefficients in a direct fit to obtain a flexible analytic potential function that incorporates the theoretically predicted damped inverse-power long-range behaviour. The present work reports the application of this approach to Xe_2, with a direct fit to high resolution rotationally resolved UV emission data for v''=0 and 1, band head data for v''=0-9, and virial coefficient data for T=165-950 K being used to obtain an accurate new potential energy function for the ground state of this Van der Waals molecule. Analogous results for other rare-gas pairs will also be presented, as time permits. L. Piticco, F. Merkt, A.A. Cholewinski, F.R. McCourt, and R.J. Le Roy, J. Mol. Spectrosc. 264, 83 (2010). A. Wüest, K.G. Bruin, and F. Merkt, Can. J. Chem. 82, 750 (2004). D.E. Freeman, K. Yoshino, and Y. Tanaka, J. Chem. Phys. 61, 4880 (1974). J.H. Dymond, K.N. Marsh, R.C. Wilhoit, and K.C. Wong, in Landolt-Börnstein, New Series, Group IV, edited by M. Frenkel and K.N. Marsh, Vol. 21 (2003).

  4. Accurate simulations of helium pick-up experiments using a rejection-free Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Dutra, Matthew; Hinde, Robert

    2018-04-01

    In this paper, we present Monte Carlo simulations of helium droplet pick-up experiments with the intention of developing a robust and accurate theoretical approach for interpreting experimental helium droplet calorimetry data. Our approach is capable of capturing the evaporative behavior of helium droplets following dopant acquisition, allowing for a more realistic description of the pick-up process. Furthermore, we circumvent the traditional assumption of bulk helium behavior by utilizing density functional calculations of the size-dependent helium droplet chemical potential. The results of this new Monte Carlo technique are compared to commonly used Poisson pick-up statistics for simulations that reflect a broad range of experimental parameters. We conclude by offering an assessment of both of these theoretical approaches in the context of our observed results.
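
    The Poisson pick-up statistics that the Monte Carlo results are compared against can be stated in a few lines; the mean pick-up number below is an illustrative placeholder, not an experimental parameter from the paper.

        import math

        # Poisson pick-up statistics: probability that a droplet acquires k dopants
        # when the mean number of pick-up events is lam.
        def poisson_pickup(k, lam):
            return math.exp(-lam) * lam**k / math.factorial(k)

        lam = 2.0  # illustrative mean number of pick-up events
        for k in range(5):
            print(f"P({k} dopants) = {poisson_pickup(k, lam):.3f}")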

  5. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    PubMed Central

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  6. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    PubMed

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research

  7. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.

  8. Quantitative evaluation of analyte transport on microfluidic paper-based analytical devices (μPADs).

    PubMed

    Ota, Riki; Yamada, Kentaro; Suzuki, Koji; Citterio, Daniel

    2018-02-07

    The transport efficiency during capillary flow-driven sample transport on microfluidic paper-based analytical devices (μPADs) made from filter paper has been investigated for a selection of model analytes (Ni^2+, Zn^2+, Cu^2+, PO4^3-, bovine serum albumin, sulforhodamine B, amaranth) representing metal cations, complex anions, proteins and anionic molecules. For the first time, the transport of the analytical target compounds, rather than the sample liquid, has been quantitatively evaluated by means of colorimetry and absorption spectrometry-based methods. The experiments have revealed that small paperfluidic channel dimensions, additional user operation steps (e.g. control of sample volume, sample dilution, washing step) as well as the introduction of sample liquid wicking areas allow the analyte transport efficiency to be increased. It is also shown that the interaction of analytes with the negatively charged cellulosic paper substrate surface is strongly influenced by the physico-chemical properties of the model analyte and can in some cases (Cu^2+) result in nearly complete analyte depletion during sample transport. The quantitative information gained through these experiments is expected to contribute to the development of more sensitive μPADs.

  9. Issues in Developing a Normative Descriptive Model for Dyadic Decision Making

    NASA Technical Reports Server (NTRS)

    Serfaty, D.; Kleinman, D. L.

    1984-01-01

    Most research in modelling human information processing and decision making has been devoted to the case of the single human operator. In the present effort, concepts from the fields of organizational behavior, engineering psychology, team theory and mathematical modelling are merged in an attempt to consider first the case of two cooperating decisionmakers (the Dyad) in a multi-task environment. Rooted in the well-known Dynamic Decision Model (DDM), the normative descriptive approach brings basic cognitive and psychophysical characteristics inherent to human behavior into a team theoretic analytic framework. An experimental paradigm, involving teams in dynamic decision making tasks, is designed to produce the data with which to build the theoretical model.

  10. Predicted osteotomy planes are accurate when using patient-specific instrumentation for total knee arthroplasty in cadavers: a descriptive analysis.

    PubMed

    Kievit, A J; Dobbe, J G G; Streekstra, G J; Blankevoort, L; Schafroth, M U

    2018-06-01

    Malalignment of implants is a major source of failure during total knee arthroplasty. To achieve more accurate 3D planning and execution of the osteotomy cuts during surgery, the Signature (Biomet, Warsaw) patient-specific instrumentation (PSI) was used to produce pin guides for the positioning of the osteotomy blocks by means of computer-aided manufacture based on CT scan images. The research question of this study is: what is the transfer accuracy of osteotomy planes predicted by the Signature PSI system for preoperative 3D planning and intraoperative block-guided pin placement to perform total knee arthroplasty procedures? The transfer accuracy achieved by using the Signature PSI system was evaluated by comparing the osteotomy planes predicted preoperatively with the osteotomy planes seen intraoperatively in human cadaveric legs. Outcomes were measured in terms of translational and rotational errors (varus, valgus, flexion, extension and axial rotation) for both tibia and femur osteotomies. Average translational errors between the osteotomy planes predicted using the Signature system and the actual osteotomy planes achieved were 0.8 mm (± 0.5 mm) for the tibia and 0.7 mm (± 4.0 mm) for the femur. Average rotational errors in relation to predicted and achieved osteotomy planes were 0.1° (± 1.2°) of varus and 0.4° (± 1.7°) of anterior slope (extension) for the tibia, and 2.8° (± 2.0°) of varus and 0.9° (± 2.7°) of flexion and 1.4° (± 2.2°) of external rotation for the femur. The similarity between osteotomy planes predicted using the Signature system and osteotomy planes actually achieved was excellent for the tibia, although some discrepancies were seen for the femur. The use of 3D system techniques in TKA surgery can provide accurate intraoperative guidance tailored to individual patients, especially those with deformed bone, and ensure better placement of the implant.

  11. A new tool for the evaluation of the analytical procedure: Green Analytical Procedure Index.

    PubMed

    Płotka-Wasylka, J

    2018-05-01

    A new means for assessing analytical protocols relating to green analytical chemistry attributes has been developed. The new tool, called GAPI (Green Analytical Procedure Index), evaluates the green character of an entire analytical methodology, from sample collection to final determination, and was created using such tools as the National Environmental Methods Index (NEMI) or Analytical Eco-Scale to provide not only general but also qualitative information. In GAPI, a specific symbol with five pentagrams can be used to evaluate and quantify the environmental impact involved in each step of an analytical methodology, from green through yellow to red, depicting low, medium, and high impact, respectively. The proposed tool was used to evaluate analytical procedures applied in the determination of biogenic amines in wine samples and in polycyclic aromatic hydrocarbon determination by EPA methods. The GAPI tool not only provides an immediately perceptible perspective to the user/reader but also offers exhaustive information on evaluated procedures. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Visual Analytics 101

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.

    This course will introduce the field of Visual Analytics to HCI researchers and practitioners, highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics.

  13. Palm: Easing the Burden of Analytical Performance Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Hoisie, Adolfy

    2014-06-01

    Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.

  14. Hardware description languages

    NASA Technical Reports Server (NTRS)

    Tucker, Jerry H.

    1994-01-01

    Hardware description languages are special purpose programming languages. They are primarily used to specify the behavior of digital systems and are rapidly replacing traditional digital system design techniques. This is because they allow the designer to concentrate on how the system should operate rather than on implementation details. Hardware description languages allow a digital system to be described with a wide range of abstraction, and they support top down design techniques. A key feature of any hardware description language environment is its ability to simulate the modeled system. The two most important hardware description languages are Verilog and VHDL. Verilog has been the dominant language for the design of application specific integrated circuits (ASIC's). However, VHDL is rapidly gaining in popularity.

  15. Adiabatic description of capture into resonance and surfatron acceleration of charged particles by electromagnetic waves.

    PubMed

    Artemyev, A V; Neishtadt, A I; Zelenyi, L M; Vainchtein, D L

    2010-12-01

    We present an analytical and numerical study of the surfatron acceleration of nonrelativistic charged particles by electromagnetic waves. The acceleration is caused by capture of particles into resonance with one of the waves. We investigate capture for systems with one or two waves and provide conditions under which the obtained results can be applied to systems with more than two waves. In the case of a single wave, the once captured particles never leave the resonance and their velocity grows linearly with time. However, if there are two waves in the system, the upper bound of the energy gain may exist and we find the analytical value of that bound. We discuss several generalizations including the relativistic limit, different wave amplitudes, and a wide range of the waves' wavenumbers. The obtained results are used for qualitative description of some phenomena observed in the Earth's magnetosphere. © 2010 American Institute of Physics.

  16. The Biosurveillance Analytics Resource Directory (BARD): Facilitating the use of epidemiological models for infectious disease surveillance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margevicius, Kristen J.; Generous, Nicholas; Abeyta, Esteban

    Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models.

  17. The Biosurveillance Analytics Resource Directory (BARD): Facilitating the Use of Epidemiological Models for Infectious Disease Surveillance

    PubMed Central

    Margevicius, Kristen J; Generous, Nicholas; Abeyta, Esteban; Althouse, Ben; Burkom, Howard; Castro, Lauren; Daughton, Ashlynn; Del Valle, Sara Y.; Fairchild, Geoffrey; Hyman, James M.; Kiang, Richard; Morse, Andrew P.; Pancerella, Carmen M.; Pullum, Laura; Ramanathan, Arvind; Schlegelmilch, Jeffrey; Scott, Aaron; Taylor-McCabe, Kirsten J; Vespignani, Alessandro; Deshpande, Alina

    2016-01-01

    Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models. PMID:26820405

  18. The Biosurveillance Analytics Resource Directory (BARD): Facilitating the Use of Epidemiological Models for Infectious Disease Surveillance.

    PubMed

    Margevicius, Kristen J; Generous, Nicholas; Abeyta, Esteban; Althouse, Ben; Burkom, Howard; Castro, Lauren; Daughton, Ashlynn; Del Valle, Sara Y; Fairchild, Geoffrey; Hyman, James M; Kiang, Richard; Morse, Andrew P; Pancerella, Carmen M; Pullum, Laura; Ramanathan, Arvind; Schlegelmilch, Jeffrey; Scott, Aaron; Taylor-McCabe, Kirsten J; Vespignani, Alessandro; Deshpande, Alina

    2016-01-01

    Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models.

  19. The Biosurveillance Analytics Resource Directory (BARD): Facilitating the use of epidemiological models for infectious disease surveillance

    DOE PAGES

    Margevicius, Kristen J.; Generous, Nicholas; Abeyta, Esteban; ...

    2016-01-28

    Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models.

  20. Selected Analytical Methods for Environmental Remediation ...

    EPA Pesticide Factsheets

    The US Environmental Protection Agency’s Office of Research and Development (ORD) conducts cutting-edge research that provides the underpinning of science and technology for public health and environmental policies and decisions made by federal, state and other governmental organizations. ORD’s six research programs identify the pressing research needs with input from EPA offices and stakeholders. Research is conducted by ORD’s 3 labs, 4 centers, and 2 offices located in 14 facilities. The EPA booth at APHL will have several resources available to attendees, mostly in the form of print materials, that showcase our research labs, case studies of research activities, and descriptions of specific research projects. The Selected Analytical Methods for Environmental Remediation and Recovery (SAM), a library of selected methods that are helping to increase the nation's laboratory capacity to support large-scale emergency response operations, will be demonstrated by EPA scientists at the APHL Experience booth in the Exhibit Hall on Tuesday during the morning break. Please come to EPA booth #309 for more information!

  1. A surrogate analyte method to determine D-serine in mouse brain using liquid chromatography-tandem mass spectrometry.

    PubMed

    Kinoshita, Kohnosuke; Jingu, Shigeji; Yamaguchi, Jun-ichi

    2013-01-15

    A bioanalytical method for determining endogenous D-serine levels in the mouse brain using a surrogate analyte and liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed. [2,3,3-²H]D-serine and [¹⁵N]D-serine were used as a surrogate analyte and an internal standard, respectively. The surrogate analyte was spiked into brain homogenate to yield calibration standards and quality control (QC) samples. Both endogenous and surrogate analytes were extracted using protein precipitation followed by solid phase extraction. Enantiomeric separation was achieved on a chiral crown ether column with an analysis time of only 6 min without any derivatization. The column eluent was introduced into an electrospray interface of a triple-quadrupole mass spectrometer. The calibration range was 1.00 to 300 nmol/g, and the method showed acceptable accuracy and precision at all QC concentration levels from a validation point of view. In addition, the brain D-serine levels of normal mice determined using this method were the same as those obtained by a standard addition method, which is time-consuming but is often used for the accurate measurement of endogenous substances. Thus, this surrogate analyte method should be applicable to the measurement of D-serine levels as a potential biomarker for monitoring certain effects of drug candidates on the central nervous system. Copyright © 2012 Elsevier Inc. All rights reserved.
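
    As an illustration of the surrogate-analyte quantitation strategy described above, the sketch below builds a linear calibration curve from surrogate-analyte standards (peak-area ratios of surrogate to internal standard) and back-calculates an endogenous concentration from a sample's analyte-to-internal-standard ratio. The numbers, the 1/x weighting, and the assumption that the labeled surrogate and the endogenous analyte have identical response factors are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Calibration standards: spiked surrogate ([2,3,3-2H]D-serine) concentrations (nmol/g)
    # and the corresponding surrogate / internal-standard ([15N]D-serine) peak-area ratios.
    conc_std = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
    ratio_std = np.array([0.012, 0.035, 0.118, 0.352, 1.19, 3.55])  # synthetic values

    # Weighted (1/x) linear regression, a common choice for bioanalytical calibration.
    weights = 1.0 / conc_std
    slope, intercept = np.polyfit(conc_std, ratio_std, 1, w=weights)

    def back_calculate(ratio):
        """Convert an analyte / internal-standard area ratio to concentration (nmol/g),
        assuming the endogenous analyte responds like the labeled surrogate."""
        return (ratio - intercept) / slope

    # Example study sample: endogenous D-serine / internal-standard area ratio.
    sample_ratio = 0.41
    print(f"estimated D-serine: {back_calculate(sample_ratio):.1f} nmol/g")
    ```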

  2. Machine learning and predictive data analytics enabling metrology and process control in IC fabrication

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.

    2015-03-01

    Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and they challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict dimensions of EUV resist patterns down to 18 nm half pitch by exploiting resist shrinkage patterns. These patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As the wafer goes through various processes its associated cost multiplies, and it may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can be very valuable in enabling timely, actionable decisions such as rework, scrap, or feeding predicted information (or information derived from predictions) forward or backward to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
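
    The abstract does not give model details, so the following is only a schematic of the general predictive-metrology idea: train a regression model on features that are available early (here, hypothetical resist-shrinkage descriptors) to predict a quantity that is hard or slow to measure directly (here, a critical dimension). The feature names and data are invented for illustration.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)

    # Hypothetical features: shrunken CD, shrinkage-rate proxy, exposure dose, focus offset.
    n_samples = 500
    X = np.column_stack([
        rng.normal(15.0, 1.0, n_samples),   # shrunken CD (nm)
        rng.normal(0.2, 0.05, n_samples),   # shrinkage-rate proxy
        rng.normal(30.0, 2.0, n_samples),   # exposure dose (mJ/cm^2)
        rng.normal(0.0, 20.0, n_samples),   # focus offset (nm)
    ])
    # Synthetic "true" pre-shrink CD with a nonlinear dependence plus noise.
    y = X[:, 0] * (1 + 2.5 * X[:, 1]) + 0.02 * X[:, 3] ** 2 / 100 + rng.normal(0, 0.1, n_samples)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"MAE on held-out samples: {mean_absolute_error(y_test, pred):.3f} nm")
    ```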

  3. Improved analytic extreme-mass-ratio inspiral model for scoping out eLISA data analysis

    NASA Astrophysics Data System (ADS)

    Chua, Alvin J. K.; Gair, Jonathan R.

    2015-12-01

    The space-based gravitational-wave detector eLISA has been selected as the ESA L3 mission, and the mission design will be finalized by the end of this decade. To prepare for mission formulation over the next few years, several outstanding and urgent questions in data analysis will be addressed using mock data challenges, informed by instrument measurements from the LISA Pathfinder satellite launching at the end of 2015. These data challenges will require accurate and computationally affordable waveform models for anticipated sources such as the extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes. Previous data challenges have made use of the well-known analytic EMRI waveforms of Barack and Cutler, which are extremely quick to generate but dephase relative to more accurate waveforms within hours, due to their mismatched radial, polar and azimuthal frequencies. In this paper, we describe an augmented Barack-Cutler model that uses a frequency map to the correct Kerr frequencies, along with updated evolution equations and a simple fit to a more accurate model. The augmented waveforms stay in phase for months and may be generated with virtually no additional computational cost.

  4. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. When the shapes of the PDFs are known, different optimization schemes can be applied to experimental data in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding, and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor on the order of the sample size (at least ≈10⁴). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in the exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
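
    For reference, the Monte Carlo core that the authors propose to replace is typically a Gillespie-style stochastic simulation. The sketch below is a textbook direct-method SSA for a simple reversible reaction A <-> B with exponential interevent times; it is meant only to make that simulation baseline concrete and is not the authors' analytical method.

    ```python
    import numpy as np

    def gillespie_ab(k_fwd, k_rev, n_a0, n_b0, t_end, seed=0):
        """Direct-method SSA for the reversible reaction A <-> B."""
        rng = np.random.default_rng(seed)
        t, n_a, n_b = 0.0, n_a0, n_b0
        times, counts = [t], [(n_a, n_b)]
        while t < t_end:
            rates = np.array([k_fwd * n_a, k_rev * n_b])   # event propensities
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)               # exponential waiting time
            if rng.random() < rates[0] / total:             # pick which event fires
                n_a, n_b = n_a - 1, n_b + 1
            else:
                n_a, n_b = n_a + 1, n_b - 1
            times.append(t)
            counts.append((n_a, n_b))
        return np.array(times), np.array(counts)

    times, counts = gillespie_ab(k_fwd=1.0, k_rev=0.5, n_a0=1000, n_b0=0, t_end=10.0)
    print("final counts (A, B):", counts[-1])
    ```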

  5. Earthdata Cloud Analytics Project

    NASA Technical Reports Server (NTRS)

    Ramachandran, Rahul; Lynnes, Chris

    2018-01-01

    This presentation describes a nascent project in NASA to develop a framework to support end-user analytics of NASA's Earth science data in the cloud. The chief benefit of migrating EOSDIS (Earth Observation System Data and Information Systems) data to the cloud is to position the data next to enormous computing capacity to allow end users to process data at scale. The Earthdata Cloud Analytics project will use a service-based approach to facilitate the infusion of evolving analytics technology and the integration with non-NASA analytics or other complementary functionality at other agencies and in other nations.

  6. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
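
    One standard way to post-process classifier scores into better-calibrated probabilities is Platt scaling or isotonic regression, both wrapped by scikit-learn's CalibratedClassifierCV. The sketch below is a generic example of that post-processing idea, not the specific calibration methods developed in the cited work.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.metrics import brier_score_loss

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A base model whose raw probabilities are often poorly calibrated.
    raw = GaussianNB().fit(X_train, y_train)

    # Post-process with isotonic regression via cross-validated calibration.
    calibrated = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=5)
    calibrated.fit(X_train, y_train)

    print("Brier score, raw:       ", brier_score_loss(y_test, raw.predict_proba(X_test)[:, 1]))
    print("Brier score, calibrated:", brier_score_loss(y_test, calibrated.predict_proba(X_test)[:, 1]))
    ```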

  7. Bodies obliged and unbound: differentiated response tendencies for injunctive and descriptive social norms.

    PubMed

    Jacobson, Ryan P; Mortensen, Chad R; Cialdini, Robert B

    2011-03-01

    The authors suggest that injunctive and descriptive social norms engage different psychological response tendencies when made selectively salient. On the basis of suggestions derived from the focus theory of normative conduct and from consideration of the norms' functions in social life, the authors hypothesized that the 2 norms would be cognitively associated with different goals, would lead individuals to focus on different aspects of self, and would stimulate different levels of conflict over conformity decisions. Additionally, a unique role for effortful self-regulation was hypothesized for each type of norm: self-regulation was used as a means to resist conformity to descriptive norms but as a means to facilitate conformity to injunctive norms. Four experiments supported these hypotheses. Experiment 1 demonstrated differences in the norms' associations to the goals of making accurate/efficient decisions and gaining/maintaining social approval. Experiment 2 provided evidence that injunctive norms lead to a more interpersonally oriented form of self-awareness and to a greater feeling of conflict about conformity decisions than descriptive norms. In the final 2 experiments, conducted in the lab (Experiment 3) and in a naturalistic environment (Experiment 4), self-regulatory depletion decreased conformity to an injunctive norm (Experiments 3 and 4) and increased conformity to a descriptive norm (Experiment 4), even though the norms advocated identical behaviors. By illustrating differentiated response tendencies for each type of social norm, this research provides new and converging support for the focus theory of normative conduct. (c) 2011 APA, all rights reserved

  8. Analytical treatment of the deformation behavior of EUVL masks during electrostatic chucking

    NASA Astrophysics Data System (ADS)

    Brandstetter, Gerd; Govindjee, Sanjay

    2012-03-01

    A new analytical approach is presented to predict mask deformation during electrostatic chucking in next-generation extreme-ultraviolet lithography (EUVL). Given an arbitrary profile measurement of the mask and chuck non-flatness, this method has been developed as an alternative to time-consuming finite element simulations for overlay error correction algorithms. We consider the feature transfer of each harmonic component in the profile shapes via linear elasticity theory and demonstrate analytically how high spatial frequencies are filtered. The method is compared to presumably more accurate finite element simulations and has been tested successfully in an overlay error compensation experiment, where the y-component of the residual error could be reduced by a factor of 2. As a side outcome, the formulation provides a tool to estimate the critical pin size and pitch such that the distortion on the mask front side remains within given tolerances. We find for a numerical example that pin pitches of less than 5 mm will result in a mask pattern distortion of less than 1 nm if the chucking pressure is below 30 kPa.
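
    The key mechanism described above, decomposing the non-flatness profile into spatial harmonics and attenuating each according to its frequency, can be sketched with an FFT. The transfer function used below (an exponential decay in |k| times the mask thickness) is only a placeholder with qualitatively correct low-pass behavior; the paper derives the actual elastic feature-transfer function.

    ```python
    import numpy as np

    L = 152e-3          # mask lateral size (m), typical 6-inch reticle
    t = 6.35e-3         # mask thickness (m)
    n = 1024
    x = np.linspace(0.0, L, n, endpoint=False)

    # Synthetic chuck/mask non-flatness: a long-wavelength bow plus short-wavelength ripple.
    profile = 50e-9 * np.sin(2 * np.pi * x / L) + 5e-9 * np.sin(2 * np.pi * x / (2e-3))

    # Decompose into spatial harmonics.
    k = 2 * np.pi * np.fft.rfftfreq(n, d=L / n)   # angular spatial frequency (1/m)
    spectrum = np.fft.rfft(profile)

    # Placeholder feature-transfer function: unity at k = 0, strong attenuation for k*t >> 1.
    transfer = np.exp(-np.abs(k) * t)
    front_side = np.fft.irfft(spectrum * transfer, n)

    print("peak-to-valley, back side :", np.ptp(profile))
    print("peak-to-valley, front side:", np.ptp(front_side))
    ```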

  9. Selection and authentication of botanical materials for the development of analytical methods.

    PubMed

    Applequist, Wendy L; Miller, James S

    2013-05-01

    Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.

  10. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the feedback is delayed. With accurate information travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, which decreases capacity, increases oscillations, and drives the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillation, and the gap between the system and its equilibrium.
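
    A minimal simulation of the boundedly rational route-choice rule described above is sketched below. It assumes a simple linear congestion cost for each route and a fixed feedback delay; these assumptions and all parameter values are illustrative rather than taken from the paper.

    ```python
    import numpy as np

    def simulate(br_threshold, delay=3, n_travelers=1000, n_steps=200, seed=0):
        """Two-route system with delayed travel-time feedback and a bounded-rationality
        threshold: if the reported difference is below the threshold, choose at random."""
        rng = np.random.default_rng(seed)
        history = []                        # per-step travel times (t1, t2)
        flows = []
        for step in range(n_steps):
            if step >= delay:
                t1_rep, t2_rep = history[step - delay]      # delayed feedback
            else:
                t1_rep = t2_rep = 0.0
            if abs(t1_rep - t2_rep) < br_threshold:
                n1 = rng.binomial(n_travelers, 0.5)          # indifferent: split randomly
            else:
                n1 = n_travelers if t1_rep < t2_rep else 0   # all take the "better" route
            n2 = n_travelers - n1
            t1, t2 = 10 + 0.02 * n1, 10 + 0.02 * n2          # linear congestion cost
            history.append((t1, t2))
            flows.append(n1)
        return np.array(flows)

    for br in (0.0, 2.0):
        flows = simulate(br)
        print(f"BR={br}: mean flow on route 1 = {flows[50:].mean():.0f}, "
              f"std (oscillation) = {flows[50:].std():.0f}")
    ```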

  11. s-wave scattering length of a Gaussian potential

    NASA Astrophysics Data System (ADS)

    Jeszenszki, Peter; Cherny, Alexander Yu.; Brand, Joachim

    2018-04-01

    We provide accurate expressions for the s-wave scattering length for a Gaussian potential well in one, two, and three spatial dimensions. The Gaussian potential is widely used as a pseudopotential in the theoretical description of ultracold-atomic gases, where the s-wave scattering length is a physically relevant parameter. We first describe a numerical procedure to compute the value of the s-wave scattering length from the parameters of the Gaussian, but find that its accuracy is limited in the vicinity of singularities that result from the formation of new bound states. We then derive simple analytical expressions that capture the correct asymptotic behavior of the s-wave scattering length near the bound states. Expressions that are increasingly accurate in wide parameter regimes are found by a hierarchy of approximations that capture an increasing number of bound states. The small number of numerical coefficients that enter these expressions is determined from accurate numerical calculations. The approximate formulas combine the advantages of the numerical and approximate expressions, yielding an accurate and simple description from the weakly to the strongly interacting limit.
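
    The numerical procedure mentioned in the abstract can be illustrated with the standard zero-energy method: integrate the radial equation u''(r) = 2 V(r) u(r) (in units with hbar = m = 1) outward through a Gaussian well and read the scattering length off the asymptotic form u(r) proportional to (r - a). The code below is a generic sketch of this procedure, not the authors' fitted analytical formulas.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def scattering_length(v0, sigma, r_max_factor=12.0):
        """s-wave scattering length of V(r) = -v0 * exp(-r^2 / (2 sigma^2)),
        in units with hbar = m = 1, via zero-energy outward integration."""
        def rhs(r, y):
            u, du = y
            v = -v0 * np.exp(-r**2 / (2.0 * sigma**2))
            return [du, 2.0 * v * u]          # u'' = 2 V(r) u at zero energy

        r_max = r_max_factor * sigma
        sol = solve_ivp(rhs, (1e-8, r_max), [0.0, 1.0], rtol=1e-10, atol=1e-12)
        u, du = sol.y[0, -1], sol.y[1, -1]
        # Outside the well u(r) ~ C (r - a), so a = r - u/u'.
        return r_max - u / du

    # Weak well: small negative a; deepening the well, a diverges at each new bound state.
    for v0 in (0.1, 0.5, 1.0, 2.0):
        print(f"V0 = {v0:4.1f}  ->  a = {scattering_length(v0, sigma=1.0):8.3f}")
    ```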

  12. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE PAGES

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; ...

    2017-10-17

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  13. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  14. An accurate and efficient laser-envelope solver for the modeling of laser-plasma accelerators

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Geddes, C. G. R.; Esarey, E.; Leemans, W. P.

    2018-01-01

    Detailed and reliable numerical modeling of laser-plasma accelerators (LPAs), where a short and intense laser pulse interacts with an underdense plasma over distances of up to a meter, is a formidably challenging task. This is due to the great disparity among the length scales involved in the modeling, ranging from the micron scale of the laser wavelength to the meter scale of the total laser-plasma interaction length. The use of the time-averaged ponderomotive force approximation, where the laser pulse is described by means of its envelope, enables efficient modeling of LPAs by removing the need to model the details of electron motion at the laser wavelength scale. Furthermore, it allows simulations in cylindrical geometry which captures relevant 3D physics at 2D computational cost. A key element of any code based on the time-averaged ponderomotive force approximation is the laser envelope solver. In this paper we present the accurate and efficient envelope solver used in the code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde). The features of the INF&RNO laser solver enable an accurate description of the laser pulse evolution deep into depletion even at a reasonably low resolution, resulting in significant computational speed-ups.

  15. An analytical solution for two-dimensional vacuum preloading combined with electro-osmosis consolidation using EKG electrodes

    PubMed Central

    Qiu, Chenchen; Li, Yande

    2017-01-01

    China is a country with vast territory, but economic development and population growth have reduced the usable land resources in recent years. Therefore, reclamation by pumping and filling is carried out in eastern coastal regions of China in order to meet the needs of urbanization. However, large areas of reclaimed land need rapid drainage consolidation treatment. Based on past research on how to improve the treatment efficiency of soft clay using vacuum preloading combined with electro-osmosis, a two-dimensional drainage plane model was proposed according to the Terzaghi and Esrig consolidation theory. However, an analytical solution for the two-dimensional plane model had not previously been derived, and existing analytical solutions cannot provide a thorough theoretical analysis of practical engineering or give relevant guidance. Considering the smearing effect and the rectangular arrangement pattern, an analytical solution is derived to describe the behavior of pore water and the consolidation process when EKG (electro-kinetic geosynthetics) materials are used. The functions of EKG materials include drainage, electric conduction and corrosion resistance. Comparison with test results is carried out to verify the analytical solution. It is found that the measured value is larger than the applied vacuum degree because of the superposition of vacuum preloading and electro-osmosis. The trends of the mean measured values and the mean analytical values are comparable. Therefore, the consolidation model can accurately assess the change in pore-water pressure and the consolidation process during vacuum preloading combined with electro-osmosis. PMID:28771496
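
    For context, the classical one-dimensional Terzaghi consolidation equation underlying this work is shown below, together with a schematic Esrig-type electro-osmotic term; the exact two-dimensional governing equations, boundary conditions, and smear-zone treatment used by the authors are given in the paper, and the coefficient of the electro-osmotic term here should be read as indicative only.

    ```latex
    % Classical Terzaghi 1D consolidation: excess pore pressure u, consolidation coefficient c_v.
    \frac{\partial u}{\partial t} = c_v \, \frac{\partial^2 u}{\partial z^2}

    % Schematic Esrig-type extension: the electric potential V drives electro-osmotic flow,
    % with k_e and k_h the electro-osmotic and hydraulic permeabilities and \gamma_w the
    % unit weight of water.
    \frac{\partial u}{\partial t}
      = c_v \, \frac{\partial^2}{\partial z^2}\!\left( u + \frac{k_e \gamma_w}{k_h}\, V \right)
    ```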

  16. Accurate modeling and inversion of electrical resistivity data in the presence of metallic infrastructure with known location and dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Timothy C.; Wellman, Dawn M.

    2015-06-26

    Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.
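
    As background for the forward-modeling discussion, the DC resistivity (ERT) forward problem solves a Poisson-type equation for the electric potential given the subsurface conductivity and a point current source; the standard formulation is shown below (this is textbook material, not a detail specific to the authors' parallel implementation).

    ```latex
    % ERT forward problem: potential \phi for conductivity \sigma(\mathbf{r}) and a point
    % current source of strength I at \mathbf{r}_s, with a no-flux condition at the surface.
    \nabla \cdot \bigl( \sigma(\mathbf{r}) \, \nabla \phi(\mathbf{r}) \bigr)
      = - I \, \delta(\mathbf{r} - \mathbf{r}_s),
    \qquad
    \frac{\partial \phi}{\partial n} = 0 \ \ \text{on the air--earth boundary}.
    ```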

  17. Data analytics and parallel-coordinate materials property charts

    NASA Astrophysics Data System (ADS)

    Rickman, Jeffrey M.

    2018-01-01

    It is often advantageous to display material properties relationships in the form of charts that highlight important correlations and thereby enhance our understanding of materials behavior and facilitate materials selection. Unfortunately, in many cases, these correlations are highly multidimensional in nature, and one typically employs low-dimensional cross-sections of the property space to convey some aspects of these relationships. To overcome some of these difficulties, in this work we employ methods of data analytics in conjunction with a visualization strategy, known as parallel coordinates, to represent better multidimensional materials data and to extract useful relationships among properties. We illustrate the utility of this approach by the construction and systematic analysis of multidimensional materials properties charts for metallic and ceramic systems. These charts simplify the description of high-dimensional geometry, enable dimensional reduction and the identification of significant property correlations and underline distinctions among different materials classes.
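
    A parallel-coordinate materials chart of the kind described above can be produced directly from a property table; the sketch below uses pandas' built-in parallel_coordinates helper on a small invented table of material properties, with each line representing one material and each vertical axis one normalized property. The data are placeholders, not values from the paper.

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import parallel_coordinates

    # Invented example properties for a few materials classes.
    df = pd.DataFrame({
        "class":     ["metal", "metal", "ceramic", "ceramic", "polymer"],
        "density":   [7.8, 2.7, 3.9, 3.2, 1.2],          # g/cm^3
        "modulus":   [210.0, 70.0, 380.0, 310.0, 2.5],   # GPa
        "strength":  [400.0, 300.0, 350.0, 600.0, 60.0], # MPa
        "k_thermal": [50.0, 237.0, 30.0, 120.0, 0.2],    # W/(m K)
    })

    # Normalize each property to [0, 1] so the axes are comparable.
    props = df.columns.drop("class")
    normed = df.copy()
    normed[props] = (df[props] - df[props].min()) / (df[props].max() - df[props].min())

    parallel_coordinates(normed, class_column="class", colormap="viridis")
    plt.ylabel("normalized property value")
    plt.tight_layout()
    plt.show()
    ```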

  18. One-dimensional model of interacting-step fluctuations on vicinal surfaces: Analytical formulas and kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Patrone, Paul N.; Einstein, T. L.; Margetis, Dionisios

    2010-12-01

    We study analytically and numerically a one-dimensional model of interacting line defects (steps) fluctuating on a vicinal crystal. Our goal is to formulate and validate analytical techniques for approximately solving systems of coupled nonlinear stochastic differential equations (SDEs) governing fluctuations in surface motion. In our analytical approach, the starting point is the Burton-Cabrera-Frank (BCF) model by which step motion is driven by diffusion of adsorbed atoms on terraces and atom attachment-detachment at steps. The step energy accounts for entropic and nearest-neighbor elastic-dipole interactions. By including Gaussian white noise to the equations of motion for terrace widths, we formulate large systems of SDEs under different choices of diffusion coefficients for the noise. We simplify this description via (i) perturbation theory and linearization of the step interactions and, alternatively, (ii) a mean-field (MF) approximation whereby widths of adjacent terraces are replaced by a self-consistent field but nonlinearities in step interactions are retained. We derive simplified formulas for the time-dependent terrace-width distribution (TWD) and its steady-state limit. Our MF analytical predictions for the TWD compare favorably with kinetic Monte Carlo simulations under the addition of a suitably conservative white noise in the BCF equations.
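
    The coupled Langevin description referred to above can be made concrete with a generic Euler-Maruyama integration of nearest-neighbor-coupled terrace-width SDEs. The linear drift term and all parameters below are placeholders chosen only to illustrate the numerical scheme; they are not the BCF-derived equations of motion analyzed in the paper.

    ```python
    import numpy as np

    def euler_maruyama_terraces(n_time_steps=20000, n_terraces=200, dt=1e-3,
                                coupling=1.0, noise=0.2, seed=0):
        """Euler-Maruyama integration of dw_i = A (w_{i+1} + w_{i-1} - 2 w_i) dt + sqrt(2D) dW_i
        with periodic boundary conditions; w_i are deviations from the mean terrace width."""
        rng = np.random.default_rng(seed)
        w = np.zeros(n_terraces)
        for _ in range(n_time_steps):
            lap = np.roll(w, 1) + np.roll(w, -1) - 2.0 * w        # discrete Laplacian
            w += coupling * lap * dt + np.sqrt(2.0 * noise * dt) * rng.standard_normal(n_terraces)
        return w

    w = euler_maruyama_terraces()
    print("steady-state terrace-width fluctuations: std =", w.std())
    ```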

  19. TOPICA: an accurate and efficient numerical tool for analysis and design of ICRF antennas

    NASA Astrophysics Data System (ADS)

    Lancellotti, V.; Milanesio, D.; Maggiora, R.; Vecchi, G.; Kyrytsya, V.

    2006-07-01

    The demand for a predictive tool to help in designing ion-cyclotron radio frequency (ICRF) antenna systems for today's fusion experiments has driven the development of codes such as ICANT, RANT3D, and the early development of TOPICA (TOrino Polytechnic Ion Cyclotron Antenna) code. This paper describes the substantive evolution of TOPICA formulation and implementation that presently allow it to handle the actual geometry of ICRF antennas (with curved, solid straps, a general-shape housing, Faraday screen, etc) as well as an accurate plasma description, accounting for density and temperature profiles and finite Larmor radius effects. The antenna is assumed to be housed in a recess-like enclosure. Both goals have been attained by formally separating the problem into two parts: the vacuum region around the antenna and the plasma region inside the toroidal chamber. Field continuity and boundary conditions allow formulating of a set of two coupled integral equations for the unknown equivalent (current) sources; then the equations are reduced to a linear system by a method of moments solution scheme employing 2D finite elements defined over a 3D non-planar surface triangular-cell mesh. In the vacuum region calculations are done in the spatial (configuration) domain, whereas in the plasma region a spectral (wavenumber) representation of fields and currents is adopted, thus permitting a description of the plasma by a surface impedance matrix. Owing to this approach, any plasma model can be used in principle, and at present the FELICE code has been employed. The natural outcomes of TOPICA are the induced currents on the conductors (antenna, housing, etc) and the electric field in front of the plasma, whence the antenna circuit parameters (impedance/scattering matrices), the radiated power and the fields (at locations other than the chamber aperture) are then obtained. An accurate model of the feeding coaxial lines is also included. The theoretical model and its TOPICA

  20. Graphical Descriptives: A Way to Improve Data Transparency and Methodological Rigor in Psychology.

    PubMed

    Tay, Louis; Parrigon, Scott; Huang, Qiming; LeBreton, James M

    2016-09-01

    Several calls have recently been issued to the social sciences for enhanced transparency of research processes and enhanced rigor in the methodological treatment of data and data analytics. We propose the use of graphical descriptives (GDs) as one mechanism for responding to both of these calls. GDs provide a way to visually examine data. They serve as quick and efficient tools for checking data distributions, variable relations, and the potential appropriateness of different statistical analyses (e.g., do data meet the minimum assumptions for a particular analytic method). Consequently, we believe that GDs can promote increased transparency in the journal review process, encourage best practices for data analysis, and promote a more inductive approach to understanding psychological data. We illustrate the value of potentially including GDs as a step in the peer-review process and provide a user-friendly online resource (www.graphicaldescriptives.org) for researchers interested in including data visualizations in their research. We conclude with suggestions on how GDs can be expanded and developed to enhance transparency. © The Author(s) 2016.
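
    A minimal example of the kind of graphical descriptives discussed above, quick distribution and relationship checks before formal analysis, is sketched below using pandas and matplotlib on an invented data frame; it illustrates the idea rather than the www.graphicaldescriptives.org toolchain.

    ```python
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from pandas.plotting import scatter_matrix

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "conscientiousness": rng.normal(3.5, 0.6, 300),
        "job_satisfaction":  rng.normal(3.2, 0.8, 300),
        "tenure_years":      rng.gamma(2.0, 2.5, 300),
    })
    df["performance"] = (0.5 * df["conscientiousness"]
                         + 0.3 * df["job_satisfaction"]
                         + rng.normal(0, 0.5, 300))

    # Distribution checks (histograms) and pairwise relationship checks (scatter matrix).
    df.hist(bins=30, figsize=(8, 6))
    scatter_matrix(df, figsize=(8, 8), diagonal="kde")
    plt.tight_layout()
    plt.show()
    ```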