Sample records for constant comparative approach

  1. Comparing Vibrationally Averaged Nuclear Shielding Constants by Quantum Diffusion Monte Carlo and Second-Order Perturbation Theory.

    PubMed

    Ng, Yee-Hong; Bettens, Ryan P A

    2016-03-03

    Using the method of modified Shepard's interpolation to construct potential energy surfaces of the H2O, O3, and HCOOH molecules, we compute vibrationally averaged isotropic nuclear shielding constants ⟨σ⟩ of the three molecules via quantum diffusion Monte Carlo (QDMC). The QDMC results are compared to those of second-order perturbation theory (PT), to see if second-order PT is adequate for obtaining accurate values of nuclear shielding constants of molecules with large amplitude motions. ⟨σ⟩ computed by the two approaches differ for the hydrogens and carbonyl oxygen of HCOOH, suggesting that for certain molecules such as HCOOH where large displacements away from equilibrium occur (internal OH rotation), ⟨σ⟩ of experimental quality may only be obtainable with the use of more sophisticated and accurate methods, such as quantum diffusion Monte Carlo. The approach of modified Shepard's interpolation is also extended to construct shielding constant σ surfaces of the three molecules. By using a σ surface with the equilibrium geometry as a single data point to compute isotropic nuclear shielding constants for each descendant in the QDMC ensemble representing the ground state wave function, we reproduce the results obtained through ab initio computed σ to within statistical noise. Development of such an approach could thereby alleviate the need for any future costly ab initio σ calculations.

  2. Variable Mach number design approach for a parallel waverider with a wide-speed range based on the osculating cone theory

    NASA Astrophysics Data System (ADS)

    Zhao, Zhen-tao; Huang, Wei; Li, Shi-Bin; Zhang, Tian-Tian; Yan, Li

    2018-06-01

    In the current study, a variable Mach number waverider design approach has been proposed based on the osculating cone theory. The design Mach number of the osculating cone constant Mach number waverider with the same volumetric efficiency as the osculating cone variable Mach number waverider has been determined with a program that calculates the volumetric efficiencies of waveriders. The CFD approach has been utilized to verify the effectiveness of the proposed approach. At the same time, through the comparative analysis of the aerodynamic performance, the performance advantage of the osculating cone variable Mach number waverider is studied. The obtained results show that the osculating cone variable Mach number waverider has a higher lift-to-drag ratio throughout the flight profile when compared with the osculating cone constant Mach number waverider, and it has superior low-speed aerodynamic performance while maintaining nearly the same high-speed aerodynamic performance.

  3. Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics

    ERIC Educational Resources Information Center

    Olsen, Robert J.; Sattar, Simeen

    2013-01-01

    Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…
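
    The determination described above reduces to propagating the standard uncertainties of P, V, n, and T through R = PV/(nT). A minimal sketch of that first-order propagation, with purely illustrative measured values (not data from the article):

    ```python
    import math

    # Hypothetical single determination of R from ideal-gas data: R = P*V/(n*T).
    # Values and uncertainties are illustrative, not from the cited experiment.
    P, u_P = 101.3e3, 0.2e3      # pressure (Pa) and standard uncertainty
    V, u_V = 45.1e-6, 0.3e-6     # gas volume (m^3)
    n, u_n = 1.82e-3, 0.02e-3    # amount of H2 collected (mol)
    T, u_T = 295.6, 0.5          # temperature (K)

    R = P * V / (n * T)

    # First-order propagation for a product/quotient:
    # (u_R/R)^2 = (u_P/P)^2 + (u_V/V)^2 + (u_n/n)^2 + (u_T/T)^2
    rel_u = math.sqrt((u_P / P)**2 + (u_V / V)**2 + (u_n / n)**2 + (u_T / T)**2)
    u_R = R * rel_u

    print(f"R = {R:.3f} +/- {u_R:.3f} J mol^-1 K^-1")
    ```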

  4. Comparison between the basic least squares and the Bayesian approach for elastic constants identification

    NASA Astrophysics Data System (ADS)

    Gogu, C.; Haftka, R.; LeRiche, R.; Molimard, J.; Vautrin, A.; Sankar, B.

    2008-11-01

    The basic formulation of the least squares method, based on the L2 norm of the misfit, is still widely used today for identifying elastic material properties from experimental data. An alternative statistical approach is the Bayesian method. We seek here situations with significant differences between the material properties found by the two methods. For a simple three-bar truss example we illustrate three such situations in which the Bayesian approach leads to more accurate results: different magnitudes of the measurements, different uncertainties in the measurements, and correlation among measurements. When all three effects add up, the Bayesian approach can have a large advantage. We then compare the two methods for identification of elastic constants from plate vibration natural frequencies.

  5. Absolute NMR shielding scales and nuclear spin–rotation constants in ¹⁷⁵LuX and ¹⁹⁷AuX (X = ¹⁹F, ³⁵Cl, ⁷⁹Br and ¹²⁷I)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demissie, Taye B., E-mail: taye.b.demissie@uit.no; Komorovsky, Stanislav; Repisky, Michal

    2015-10-28

    We present nuclear spin–rotation constants, absolute nuclear magnetic resonance (NMR) shielding constants, and shielding spans of all the nuclei in ¹⁷⁵LuX and ¹⁹⁷AuX (X = ¹⁹F, ³⁵Cl, ⁷⁹Br, ¹²⁷I), calculated using coupled-cluster singles-and-doubles with a perturbative triples (CCSD(T)) correction theory, four-component relativistic density functional theory (relativistic DFT), and non-relativistic DFT. The total nuclear spin–rotation constants determined by adding the relativistic corrections obtained from DFT calculations to the CCSD(T) values are in general in agreement with available experimental data, indicating that the computational approach followed in this study allows us to predict reliable results for the unknown spin–rotation constants in these molecules. The total NMR absolute shielding constants are determined for all the nuclei following the same approach as that applied for the nuclear spin–rotation constants. In most of the molecules, relativistic effects significantly change the computed shielding constants, demonstrating that straightforward application of the non-relativistic formula relating the electronic contribution to the nuclear spin–rotation constants and the paramagnetic contribution to the shielding constants does not yield correct results. We also analyze the origin of the unusually large absolute shielding constant and its relativistic correction of gold in AuF compared to the other gold monohalides.
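
    The additive scheme described in the abstract (relativistic DFT correction added to a non-relativistic CCSD(T) value) is simple arithmetic; a hedged sketch with placeholder numbers, not values from the paper:

    ```python
    def composite_constant(ccsdt_nonrel, dft_rel, dft_nonrel):
        """Additive composite scheme as described in the abstract:
        total = non-relativistic CCSD(T) value
                + (relativistic DFT value - non-relativistic DFT value).
        """
        return ccsdt_nonrel + (dft_rel - dft_nonrel)

    # Placeholder numbers (kHz), purely for illustration.
    print(composite_constant(ccsdt_nonrel=12.4, dft_rel=15.1, dft_nonrel=11.9))
    ```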

  6. Nuclear-relaxed elastic and piezoelectric constants of materials: Computational aspects of two quantum-mechanical approaches.

    PubMed

    Erba, Alessandro; Caglioti, Dominique; Zicovich-Wilson, Claudio Marcelo; Dovesi, Roberto

    2017-02-15

    Two alternative approaches for the quantum-mechanical calculation of the nuclear-relaxation term of elastic and piezoelectric tensors of crystalline materials are illustrated and their computational aspects discussed: (i) a numerical approach based on the geometry optimization of atomic positions at strained lattice configurations and (ii) a quasi-analytical approach based on the evaluation of the force- and displacement-response internal-strain tensors as combined with the interatomic force-constant matrix. The two schemes are compared with respect to both computational accuracy and performance. The latter approach, not being affected by the many numerical parameters and procedures of a typical quasi-Newton geometry optimizer, constitutes a more reliable and robust means of evaluating such properties, at a reduced computational cost for most crystalline systems. © 2016 Wiley Periodicals, Inc.

  7. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
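
    For the location-scale route mentioned above, a common concrete recipe is an inverse-variance weighted mean whose consistency is checked with the Birge ratio. A minimal sketch with made-up measurement results, not the Planck-constant or gravitation data used in the paper:

    ```python
    import math

    def weighted_mean_birge(values, uncertainties):
        """Inverse-variance weighted mean plus the Birge ratio used to test
        the consistency of the input data (a minimal sketch)."""
        w = [1.0 / u**2 for u in uncertainties]
        mean = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
        u_int = math.sqrt(1.0 / sum(w))                     # internal uncertainty
        chi2 = sum(wi * (xi - mean)**2 for wi, xi in zip(w, values))
        birge = math.sqrt(chi2 / (len(values) - 1))         # Birge ratio
        # A common prescription inflates the uncertainty when the Birge ratio exceeds 1.
        u_adj = u_int * max(1.0, birge)
        return mean, u_int, birge, u_adj

    # Illustrative (invented) measurement results for the same quantity.
    x = [6.67408, 6.67430, 6.67191, 6.67435]
    u = [0.00031, 0.00067, 0.00099, 0.00013]
    print(weighted_mean_birge(x, u))
    ```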

  8. E-health empowers patients with ulcerative colitis: a randomised controlled trial of the web-guided 'Constant-care' approach.

    PubMed

    Elkjaer, Margarita; Shuhaibar, Mary; Burisch, Johan; Bailey, Yvonne; Scherfig, Hanne; Laugesen, Birgit; Avnstrøm, Søren; Langholz, Ebbe; O'Morain, Colm; Lynge, Elsebeth; Munkholm, Pia

    2010-12-01

    The natural history of ulcerative colitis requires continuous monitoring of medical treatment via frequent outpatient visits. The European health authorities' focus on e-health is increasing. Lack of easy access to inflammatory bowel disease (IBD) clinics, together with limited patient education and understanding of the importance of early treatment at relapse, leads to poor compliance. To overcome these limitations, a randomised controlled trial 'Constant-care' was undertaken in Denmark and Ireland. 333 patients with mild/moderate ulcerative colitis and 5-aminosalicylic acid treatment were randomised to either a web-group receiving disease specific education and self-treatment via http://www.constant-care.dk or a control group continuing the usual care for 12 months. A historical control group was included to test the comparability with the control group. We investigated: feasibility of the approach, its influence on patients' compliance, knowledge, quality of life (QoL), disease outcomes, safety and health care costs. 88% of the web patients preferred using the new approach. Adherence to 4 weeks of acute treatment was increased by 31% in Denmark and 44% in Ireland compared to the control groups. In Denmark IBD knowledge and QoL were significantly improved in web patients. Median relapse duration was 18 days (95% CI 10 to 21) in the web versus 77 days (95% CI 46 to 108) in the control group. The number of acute and routine visits to the outpatient clinic was lower in the web than in the control group, resulting in a saving of 189 euro/patient/year. No difference in the relapse frequency, hospitalisation, surgery or adverse events was observed. The historical control group was comparable with the control group. The new web-guided approach on http://www.constant-care.dk is feasible, safe and cost effective. It empowers patients with ulcerative colitis without increasing their morbidity and depression. It has yet to be shown whether this strategy can change the natural disease course of ulcerative colitis in the long term.

  9. Integration of biotic ligand models (BLM) and bioaccumulation kinetics into a mechanistic framework for metal uptake in aquatic organisms.

    PubMed

    Veltman, Karin; Huijbregts, Mark A J; Hendriks, A Jan

    2010-07-01

    Both biotic ligand models (BLM) and bioaccumulation models aim to quantify metal exposure based on mechanistic knowledge, but key factors included in the description of metal uptake differ between the two approaches. Here, we present a quantitative comparison of both approaches and show that BLM and bioaccumulation kinetics can be merged into a common mechanistic framework for metal uptake in aquatic organisms. Our results show that metal-specific absorption efficiencies calculated from BLM parameters for freshwater fish are highly comparable, i.e. within a factor of 2.4 for silver, cadmium, copper, and zinc, to bioaccumulation-absorption efficiencies for predominantly marine fish. Conditional affinity constants are significantly related to the metal-specific covalent index. Additionally, the affinity constants of calcium, cadmium, copper, sodium, and zinc are significantly comparable across aquatic species, including molluscs, daphnids, and fish. This suggests that affinity constants can be estimated from the covalent index, and constants can be extrapolated across species. A new model is proposed that integrates the combined effect of metal chemodynamics, such as speciation, competition, and ligand affinity, and species characteristics, such as size, on metal uptake by aquatic organisms. An important direction for further research is the quantitative comparison of the proposed model with acute toxicity values for organisms belonging to different size classes.

  10. Estimation of hydrolysis rate constants for carbamates ...

    EPA Pesticide Factsheets

    Cheminformatics-based tools, such as the Chemical Transformation Simulator under development in EPA’s Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for about 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in the public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R-squared and root-mean-square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft based fragment model performs best, followed by a pKa and a QSPR model. This presentation compares these three complementary approaches.
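
    A Hammett/Taft-type fragment model of the kind referred to above amounts to a linear fit of log k against summed sigma constants. The sketch below uses invented sigma sums and rate constants purely for illustration; it is not the model or data from the presentation:

    ```python
    import numpy as np

    # Hypothetical Hammett/Taft-type correlation: log10(k_hyd) = rho * sigma_sum + c.
    # The sigma sums and "measured" rate constants below are invented.
    sigma_sum = np.array([0.00, 0.23, 0.45, 0.71, 1.02])
    log_k_obs = np.array([-6.8, -6.1, -5.5, -4.7, -3.9])

    # Ordinary least squares for slope (rho) and intercept via a design matrix.
    A = np.vstack([sigma_sum, np.ones_like(sigma_sum)]).T
    (rho, intercept), residuals, *_ = np.linalg.lstsq(A, log_k_obs, rcond=None)

    pred = A @ np.array([rho, intercept])
    rmse = np.sqrt(np.mean((pred - log_k_obs) ** 2))
    print(f"rho = {rho:.2f}, intercept = {intercept:.2f}, RMS error = {rmse:.2f} log units")
    ```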

  11. Polynomial Expressions for Estimating Elastic Constants From the Resonance of Circular Plates

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Singh, Abhishek

    2005-01-01

    Two approaches were taken to make convenient spreadsheet calculations of elastic constants from resonance data and the tables in ASTM C1259 and E1876: polynomials were fit to the tables, and an automated spreadsheet interpolation routine was generated. To compare the approaches, the resonant frequencies of circular plates made of glass, hardened maraging steel, alpha silicon carbide, silicon nitride, tungsten carbide, tape cast NiO-YSZ, and zinc selenide were measured. The elastic constants, as calculated via the polynomials and linear interpolation of the tabular data in ASTM C1259 and E1876, were found comparable for engineering purposes, with the differences typically being less than 0.5 percent. Calculation of additional ν values at t/R between 0 and 0.2 would allow better curve fits. This is not necessary for common engineering purposes; however, it might benefit the testing of emerging thin structures such as fuel cell electrolytes, gas conversion membranes, and coatings when Poisson's ratio is less than 0.15 and high precision is needed.
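
    The two spreadsheet approaches described above (polynomial fit versus interpolation of the tabulated correction factors) can be illustrated with a short sketch; the table values below are made up and only stand in for the ASTM C1259/E1876 tables:

    ```python
    import numpy as np

    # Made-up table of a resonance correction factor versus thickness-to-radius
    # ratio t/R (a stand-in for the ASTM C1259/E1876 tables, illustration only).
    t_over_R = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
    factor   = np.array([1.000, 1.012, 1.028, 1.049, 1.073])

    # Approach 1: fit a low-order polynomial to the table.
    coeffs = np.polyfit(t_over_R, factor, deg=2)
    poly_value = np.polyval(coeffs, 0.12)

    # Approach 2: linear interpolation within the table.
    interp_value = np.interp(0.12, t_over_R, factor)

    print(f"polynomial: {poly_value:.4f}, interpolation: {interp_value:.4f}")
    ```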

  12. What Theorists Say They Do: A Brief Description of Theorists' Approaches

    ERIC Educational Resources Information Center

    Christie, Christina A.; Azzam, Tarek

    2005-01-01

    The purpose of this issue of "New Directions for Evaluation" is to examine, comparatively, the practical application of theorists' approaches to evaluation by examining four evaluations of the same case. The thought is that when asked to evaluate the same program (holding the case constant), the practical distinctions between theorists' approaches…

  13. Effect of head-wind profiles and mean head-wind velocity on landing capacity flying constant-airspeed and constant-groundspeed approaches

    NASA Technical Reports Server (NTRS)

    Hastings, E. C., Jr.; Kelley, W. W.

    1979-01-01

    A study was conducted to determine the effect of head-wind profiles and mean head-wind velocities on runway landing capacity for airplanes flying constant-airspeed and constant-groundspeed approaches. It was determined that when the wind profiles were encountered with the currently used constant-airspeed approach method, the landing capacity was reduced. The severity of these reductions increased as the mean head-wind value of the profile increased. When constant-groundspeed approaches were made in the same wind profiles, there were no losses in landing capacity. In an analysis of mean head winds, it was determined that in a mean head wind of 35 knots, the landing capacity using constant-airspeed approaches was 13% less than for the no-wind condition. There were no reductions in landing capacity with constant-groundspeed approaches for mean head winds less than 35 knots. This same result was observed when the separation intervals between airplanes were reduced.

  14. Simulation of weak polyelectrolytes: a comparison between the constant pH and the reaction ensemble method

    NASA Astrophysics Data System (ADS)

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-03-01

    The reaction ensemble and the constant pH method are well-known chemical equilibrium approaches to simulate protonation and deprotonation reactions in classical molecular dynamics and Monte Carlo simulations. In this article, we demonstrate the similarity between both methods under certain conditions. We perform molecular dynamics simulations of a weak polyelectrolyte in order to compare the titration curves obtained by both approaches. Our findings reveal a good agreement between the methods when the reaction ensemble is used to sweep the reaction constant. Pronounced differences between the reaction ensemble and the constant pH method can be observed for stronger acids and bases in terms of adaptive pH values. These deviations are due to the presence of explicit protons in the reaction ensemble method, which induce a screening of electrostatic interactions between the charged titrable groups of the polyelectrolyte. The outcomes of our simulation hint at a better applicability of the reaction ensemble method for systems in confined geometries and titrable groups in polyelectrolytes with different pKa values.
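
    For reference, the standard constant-pH Monte Carlo acceptance rule that this comparison builds on couples the configurational energy change with a ln(10)·(pH − pKa) bias term. A hedged sketch of that criterion, not the authors' implementation:

    ```python
    import math
    import random

    def accept_deprotonation(delta_U_over_kT, pH, pKa):
        """Metropolis acceptance for a trial deprotonation move in a standard
        constant-pH Monte Carlo scheme (sketch): the chemical-equilibrium bias
        enters as ln(10) * (pH - pKa) on top of the configurational energy
        change delta_U (here already divided by kT)."""
        arg = -delta_U_over_kT + math.log(10.0) * (pH - pKa)
        return random.random() < min(1.0, math.exp(arg))

    # Illustrative call: a weakly unfavourable energy change at pH one unit above pKa.
    print(accept_deprotonation(delta_U_over_kT=1.5, pH=5.0, pKa=4.0))
    ```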

  15. Variations in the fine-structure constant constraining gravity theories

    NASA Astrophysics Data System (ADS)

    Bezerra, V. B.; Cunha, M. S.; Muniz, C. R.; Tahim, M. O.; Vieira, H. S.

    2016-08-01

    In this paper, we investigate how the fine-structure constant, α, locally varies in the presence of a static and spherically symmetric gravitational source. The procedure consists of calculating the solution and the energy eigenvalues of a massive scalar field around that source, considering the weak-field regime. From this result, we obtain expressions for a spatially variable fine-structure constant by considering suitable modifications of the involved parameters within some scenarios of semi-classical and quantum gravity. Constraints on the free parameters of the theories considered are calculated from astrophysical observations of the emission spectra of a white dwarf. Such constraints are finally compared with those obtained in the literature.

  16. Nuclear magnetic resonance shielding constants and chemical shifts in linear 199Hg compounds: a comparison of three relativistic computational methods.

    PubMed

    Arcisauskaite, Vaida; Melo, Juan I; Hemmingsen, Lars; Sauer, Stephan P A

    2011-07-28

    We investigate the importance of relativistic effects on NMR shielding constants and chemical shifts of linear HgL₂ (L = Cl, Br, I, CH₃) compounds using three different relativistic methods: the fully relativistic four-component approach and the two-component approximations, linear response elimination of small component (LR-ESC) and zeroth-order regular approximation (ZORA). LR-ESC successfully reproduces the four-component results for the C shielding constant in Hg(CH₃)₂ within 6 ppm, but fails to reproduce the Hg shielding constants and chemical shifts. The latter is mainly due to an underestimation of the change in spin-orbit contribution. Even though ZORA underestimates the absolute Hg NMR shielding constants by ∼2100 ppm, the differences between Hg chemical shift values obtained using ZORA and the four-component approach without spin-density contribution to the exchange-correlation (XC) kernel are less than 60 ppm for all compounds using three different functionals, BP86, B3LYP, and PBE0. However, larger deviations (up to 366 ppm) occur for Hg chemical shifts in HgBr₂ and HgI₂ when ZORA results are compared with four-component calculations with non-collinear spin-density contribution to the XC kernel. For the ZORA calculations it is necessary to use large basis sets (QZ4P), and the TZ2P basis set may give errors of ∼500 ppm for the Hg chemical shifts, despite deceptively good agreement with experimental data. A Gaussian nucleus model for the Coulomb potential reduces the Hg shielding constants by ∼100-500 ppm and the Hg chemical shifts by 1-143 ppm compared to the point nucleus model depending on the atomic number Z of the coordinating atom and the level of theory. The effect on the shielding constants of the lighter nuclei (C, Cl, Br, I) is, however, negligible. © 2011 American Institute of Physics.

  17. Computing sextic centrifugal distortion constants by DFT: A benchmark analysis on halogenated compounds

    NASA Astrophysics Data System (ADS)

    Pietropolli Charmet, Andrea; Stoppa, Paolo; Tasinato, Nicola; Giorgianni, Santi

    2017-05-01

    This work presents a benchmark study on the calculation of the sextic centrifugal distortion constants employing cubic force fields computed by means of density functional theory (DFT). For a set of semi-rigid halogenated organic compounds, several functionals (B2PLYP, B3LYP, B3PW91, M06, M06-2X, O3LYP, X3LYP, ωB97XD, CAM-B3LYP, LC-ωPBE, PBE0, B97-1 and B97-D) were used for computing the sextic centrifugal distortion constants. The effects related to the size of basis sets and the performances of hybrid approaches, where the harmonic data obtained at a higher level of electronic correlation are coupled with cubic force constants yielded by DFT functionals, are presented and discussed. The predicted values were compared to both the available data published in the literature and those obtained by calculations carried out at increasing levels of electronic correlation: Hartree-Fock Self Consistent Field (HF-SCF), second-order Møller-Plesset perturbation theory (MP2), and coupled-cluster single and double (CCSD) level of theory. Different hybrid approaches, having the cubic force field computed at the DFT level of theory coupled to harmonic data computed at increasing levels of electronic correlation (up to CCSD level of theory augmented by a perturbational estimate of the effects of connected triple excitations, CCSD(T)), were considered. The obtained results demonstrate that they can represent reliable and computationally affordable methods to predict sextic centrifugal terms with an accuracy almost comparable to that yielded by the more expensive anharmonic force fields fully computed at MP2 and CCSD levels of theory. In view of their reduced computational cost, these hybrid approaches pave the way to the study of more complex systems.

  18. Coupling constant for N*(1535)Nρ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin

    2008-05-15

    The value of the N*(1535)Nρ coupling constant g_N*Nρ derived from the N*(1535) → Nρ → Nππ decay is compared with that deduced from the radiative decay N*(1535) → Nγ using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g_N*Nρ extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.

  19. Evaluation of the kinetic oxidation of aqueous volatile organic compounds by permanganate.

    PubMed

    Mahmoodlu, Mojtaba G; Hassanizadeh, S Majid; Hartog, Niels

    2014-07-01

    The use of permanganate solutions for in-situ chemical oxidation (ISCO) is a well-established groundwater remediation technology, particularly for targeting chlorinated ethenes. The kinetics of oxidation reactions is an important ISCO remediation design aspect that affects the efficiency and oxidant persistence. The overall rate of the ISCO reaction between oxidant and contaminant is typically described using a second-order kinetic model, while the second-order rate constant is determined experimentally by means of a pseudo-first-order approach. However, earlier studies of chlorinated hydrocarbons have yielded a wide range of values for the second-order rate constants. Also, there is limited insight into the kinetics of permanganate reactions with fuel-derived groundwater contaminants such as toluene and ethanol. In this study, batch experiments were carried out to investigate and compare the oxidation kinetics of aqueous trichloroethylene (TCE), ethanol, and toluene in an aqueous potassium permanganate solution. The overall second-order rate constants were determined directly by fitting a second-order model to the data, instead of typically using the pseudo-first-order approach. The second-order reaction rate constants (M⁻¹ s⁻¹) for TCE, toluene, and ethanol were 8.0×10⁻¹, 2.5×10⁻⁴, and 6.5×10⁻⁴, respectively. Results showed that the inappropriate use of the pseudo-first-order approach in several previous studies produced biased estimates of the second-order rate constants. In our study, this error was expressed as a function of the extent (P/N) to which the reactant concentrations deviated from the stoichiometric ratio of each oxidation reaction. The error associated with the inappropriate use of the pseudo-first-order approach is negatively correlated with the P/N ratio and reached up to 25% of the estimated second-order rate constant in some previous studies of TCE oxidation. Based on our results, a similar relation is valid for the other volatile organic compounds studied. Copyright © 2013 Elsevier B.V. All rights reserved.
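
    Fitting the overall second-order rate constant directly, as done above instead of the pseudo-first-order shortcut, can be sketched by integrating dC/dt = −k·C·P and fitting k to concentration data. The sketch below uses synthetic concentrations, not the measured data of the study:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import curve_fit

    # Second-order reaction between a contaminant C and permanganate P:
    # dC/dt = -k*C*P, dP/dt = -s*k*C*P (s = stoichiometric ratio).
    # The concentrations below are synthetic, for illustration only.
    C0, P0, s = 1.0e-4, 8.0e-4, 1.0  # mol/L

    def model_C(t, k):
        def rhs(_, y):
            c, p = y
            r = k * c * p
            return [-r, -s * r]
        sol = solve_ivp(rhs, (0.0, float(t[-1])), [C0, P0],
                        t_eval=t, rtol=1e-8, atol=1e-12)
        return sol.y[0]

    t_obs = np.array([0.0, 600.0, 1800.0, 3600.0, 7200.0])        # s
    C_obs = np.array([1.0e-4, 6.4e-5, 2.8e-5, 9.0e-6, 1.1e-6])    # mol/L (synthetic)

    k_fit, cov = curve_fit(model_C, t_obs, C_obs, p0=[1.0])
    print(f"fitted second-order rate constant k = {k_fit[0]:.2e} M^-1 s^-1")
    ```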

  20. An evaluation of complementary approaches to elucidate fundamental interfacial phenomena driving adhesion of energetic materials

    DOE PAGES

    Hoss, Darby J.; Knepper, Robert; Hotchkiss, Peter J.; ...

    2016-03-23

    In this study, cohesive Hamaker constants of solid materials are measured via optical and dielectric properties (i.e., Lifshitz theory), inverse gas chromatography (IGC), and contact angle measurements. To date, however, a comparison across these measurement techniques for common energetic materials has not been reported. This has been due to the inability of the community to produce samples of energetic materials that are readily compatible with contact angle measurements. Here we overcome this limitation by using physical vapor deposition to produce thin films of five common energetic materials, and the contact angle measurement approach is applied to estimate the cohesive Hamaker constants and surface energy components of the materials. The cohesive Hamaker constants range from 85 zJ to 135 zJ across the different films. When these Hamaker constants are compared to prior work using Lifshitz theory and nonpolar probe IGC, the relative magnitudes can be ordered as follows: contact angle > Lifshitz > IGC. Furthermore, the dispersive surface energy components estimated here are in good agreement with those estimated by IGC. Due to these results, researchers and technologists will now have access to a comprehensive database of adhesion constants which describe the behavior of these energetic materials over a range of settings.

  1. Random anisotropy model approach on ion beam sputtered Co20Cu80 granular alloy

    NASA Astrophysics Data System (ADS)

    Errahmani, H.; Hassanaïn, N.; Berrada, A.; Abid, M.; Lassri, H.; Schmerber, G.; Dinia, A.

    2002-03-01

    The Co20Cu80 granular film was prepared using the ion beam sputtering technique. The magnetic properties of the sample were studied in the temperature range 5-300 K at H⩽50 kOe. From the thermomagnetisation curve, which is found to obey the Bloch law, we have extracted the spin wave stiffness constant D and the exchange constant A. The magnetic experimental results have been interpreted in the framework of the random anisotropy model. We have determined the local anisotropy constant KL and the local correlation length of the anisotropy axis Ra, which is compared to the experimental grain size obtained by transmission electron microscopy.

  2. The Changing World of Breast Cancer

    PubMed Central

    Kuhl, Christiane K.

    2015-01-01

    Compared with other fields of medicine, there is hardly an area that has seen such fast development as the world of breast cancer. Indeed, the way we treat breast cancer has changed fundamentally over the past decades. Breast imaging has always been an integral part of this change, and it undergoes constant adjustment to new ways of thinking. This relates not only to the technical tools we use for diagnosing breast cancer but also to the way diagnostic information is used to guide treatment. There is a constant change of concepts for and attitudes toward breast cancer, and a constant flux of new ideas, new treatment approaches, and new insights into the molecular and biological behavior of this disease. Clinical breast radiologists and, even more so, clinician scientists interested in breast imaging need to keep abreast of this rapidly changing world. Diagnostic or treatment approaches that are considered useful today may be abandoned tomorrow. Approaches that seem irrelevant or far too extravagant today may prove clinically useful and adequate next year. Radiologists must constantly question what they do, and align their clinical aims and research objectives with the changing needs of contemporary breast oncology. Moreover, knowledge about the past helps better understand present debates and controversies. Accordingly, in this article, we provide an overview on the evolution of breast imaging and breast cancer treatment, describe current areas of research, and offer an outlook regarding the years to come. PMID:26083829

  3. Theoretical calculations of structural, electronic, and elastic properties of CdSe1-xTex: A first principles study

    NASA Astrophysics Data System (ADS)

    M, Shakil; Muhammad, Zafar; Shabbir, Ahmed; Muhammad Raza-ur-rehman, Hashmi; M, A. Choudhary; T, Iqbal

    2016-07-01

    The plane wave pseudo-potential method was used to investigate the structural, electronic, and elastic properties of CdSe1-xTex in the zinc blende phase. It is observed that the electronic properties are improved considerably by using LDA+U as compared to the LDA approach. The calculated lattice constants and bulk moduli are also comparable to the experimental results. The cohesive energies for pure CdSe and CdTe binary and their mixed alloys are calculated. The second-order elastic constants are also calculated by the Lagrangian theory of elasticity. The elastic properties show that the studied material has a ductile nature.

  4. The Planck-Balance—using a fixed value of the Planck constant to calibrate E1/E2-weights

    NASA Astrophysics Data System (ADS)

    Rothleitner, C.; Schleichert, J.; Rogge, N.; Günther, L.; Vasilyan, S.; Hilbrunner, F.; Knopf, D.; Fröhlich, T.; Härtig, F.

    2018-07-01

    A balance is proposed, which allows the calibration of weights in a continuous range from 1 mg to 1 kg using a fixed value of the Planck constant, h. This so-called Planck-Balance (PB) uses the physical approach of Kibble balances that allow the Planck constant to be derived from the mass. With the PB, calibrated mass standards are no longer required during weighing, because all measurements are traceable via the electrical quantities to the Planck constant, and to the meter and the second. This enables a new class of balances after the expected redefinition of the SI units by the end of 2018. In contrast to many science-oriented developments, the PB is focused on robust and daily use. Therefore, two balances will be developed, PB2 and PB1, which will allow relative measurement uncertainties comparable to the accuracies of class E2 and E1 weights, respectively, as specified in OIML R 111-1. The balances will be developed in a cooperation between the Physikalisch-Technische Bundesanstalt (PTB) and the Technische Universität Ilmenau in a project funded by the German Federal Ministry of Education and Research.
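
    The Kibble-balance principle the PB relies on combines a weighing phase and a moving phase so that the mass follows from electrical quantities, m = U·I/(g·v). A hedged sketch with illustrative numbers, not PB specifications:

    ```python
    # Kibble-balance relation combining the two measurement phases:
    #   weighing: m * g = B*L * I      moving: U = B*L * v
    # Eliminating B*L gives m = U * I / (g * v), with U and I measured against
    # the Josephson and quantum Hall effects and hence traceable to h.
    # All numbers below are illustrative.

    def kibble_mass(U, I, g, v):
        return U * I / (g * v)

    U = 0.5100        # induced voltage in the moving phase (V)
    I = 9.62e-3       # coil current in the weighing phase (A)
    g = 9.81253       # local gravitational acceleration (m/s^2)
    v = 5.0e-4        # coil velocity (m/s)

    print(f"mass = {kibble_mass(U, I, g, v) * 1e3:.3f} g")
    ```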

  5. A Novel Approach for Constructing One-Way Hash Function Based on a Message Block Controlled 8D Hyperchaotic Map

    NASA Astrophysics Data System (ADS)

    Lin, Zhuosheng; Yu, Simin; Lü, Jinhu

    2017-06-01

    In this paper, a novel approach for constructing one-way hash function based on 8D hyperchaotic map is presented. First, two nominal matrices both with constant and variable parameters are adopted for designing 8D discrete-time hyperchaotic systems, respectively. Then each input plaintext message block is transformed into an 8 × 8 matrix following the order of left to right and top to bottom, which is used as a control matrix for switching between the nominal matrix elements with constant parameters and those with variable parameters. Through this switching control, a new nominal matrix mixed with the constant and variable parameters is obtained for the 8D hyperchaotic map. Finally, the hash function is constructed with the multiple low 8-bit hyperchaotic system iterative outputs after being rounded down, and its security analysis results are also given, validating the feasibility and reliability of the proposed approach. Compared with the existing schemes, the main feature of the proposed method is that it has a large number of key parameters with avalanche effect, resulting in the difficulty for estimating or predicting key parameters via various attacks.

  6. Note on the initial conditions within the effective field theory approach of cosmic acceleration

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Wen; Hu, Bin; Zhang, Yi

    2017-12-01

    By using the effective field theory approach, we investigate the role of initial conditions for the dark energy or modified gravity models. In detail, we consider the constant and linear parametrizations of the effective Newton constant models. First, under the adiabatic assumption, the correction from the extra scalar degree of freedom in the beyond-ΛCDM model is found to be negligible. The dominant ingredient in this setup is the primordial curvature perturbation originating from the inflation mechanism, and the energy budget of the matter components is not very crucial. Second, the isocurvature perturbation sourced by the extra scalar field is studied. For the constant and linear models of the effective Newton constant, no such kind of scalar mode exists. For the quadratic model, there is a nontrivial one. However, the amplitude of the scalar field is damped away very fast on all scales. Consequently, it could not support a reasonable structure formation. Finally, we study the importance of the setup of the scalar field starting time. By setting different turn-on times, namely, a = 10⁻² and a = 10⁻⁷, we compare the cosmic microwave background radiation temperature, lensing deflection angle autocorrelation function, and the matter power spectrum in the constant and linear models. We find a difference of order O(1%) in the observable spectra for the constant model, while for the linear model, it is smaller than O(0.1%).

  7. A one-dimensional model for gas-solid heat transfer in pneumatic conveying

    NASA Astrophysics Data System (ADS)

    Smajstrla, Kody Wayne

    A one-dimensional ODE model, reduced from a higher-dimensional two-fluid model, is developed to study dilute, two-phase (air and solid particles) flows with heat transfer in a horizontal pneumatic conveying pipe. Instead of using constant air properties (e.g., density, viscosity, thermal conductivity) evaluated at the initial flow temperature and pressure, this model uses an iteration approach to couple the air properties with flow pressure and temperature. Multiple studies comparing the use of constant or variable air density, viscosity, and thermal conductivity are conducted to study the impact of the changing properties on system performance. The results show that the fully constant property calculation overestimates the results of the fully variable calculation by 11.4%, while the constant density with variable viscosity and thermal conductivity calculation results in an 8.7% overestimation, the constant viscosity with variable density and thermal conductivity overestimates by 2.7%, and the constant thermal conductivity with variable density and viscosity calculation results in a 1.2% underestimation. These results demonstrate that gas properties varying with gas temperature can have a significant impact on a conveying system and that the varying density accounts for the majority of that impact. The accuracy of the model is also validated by comparing the simulation results to the experimental values found in the literature.
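
    The iteration idea described above (re-evaluating air properties at the current pressure and temperature rather than at inlet conditions) can be sketched with the ideal-gas law and Sutherland-type correlations; the correlations are standard textbook forms and the model step below is a hypothetical placeholder, not the thesis model:

    ```python
    # Sketch of coupling gas properties to local temperature and pressure instead of
    # holding them constant: update density (ideal-gas law) and viscosity/thermal
    # conductivity (Sutherland-type correlations) and iterate until they settle.

    R_AIR = 287.05  # specific gas constant of air, J/(kg K)

    def air_density(p, T):
        return p / (R_AIR * T)

    def air_viscosity(T):
        # Sutherland's law for air (mu_ref at 273.15 K, S = 110.4 K).
        mu_ref, T_ref, S = 1.716e-5, 273.15, 110.4
        return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

    def air_conductivity(T):
        # Sutherland-type form for the thermal conductivity of air.
        k_ref, T_ref, S = 0.0241, 273.15, 194.0
        return k_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

    def iterate_properties(p, T, update_state, tol=1e-6, max_iter=50):
        """update_state(rho, mu, k) returns a new (p, T) estimate for the pipe
        section; here it is a user-supplied placeholder for the flow model."""
        for _ in range(max_iter):
            rho, mu, k = air_density(p, T), air_viscosity(T), air_conductivity(T)
            p_new, T_new = update_state(rho, mu, k)
            if abs(T_new - T) < tol and abs(p_new - p) < tol:
                return p_new, T_new, rho, mu, k
            p, T = p_new, T_new
        return p, T, rho, mu, k

    # Trivial illustrative "model step" that cools the gas slightly each pass.
    print(iterate_properties(101325.0, 400.0, lambda rho, mu, k: (101300.0, 398.5)))
    ```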

  8. Batteries for Electric Vehicles

    NASA Technical Reports Server (NTRS)

    Conover, R. A.

    1985-01-01

    Report summarizes results of tests on "near-term" electrochemical batteries (batteries approaching commercial production). Nickel/iron, nickel/zinc, and advanced lead/acid batteries were included in the tests and compared with conventional lead/acid batteries. Batteries were operated in electric vehicles at constant speed and on a repetitive schedule of accelerating, coasting, and braking.

  9. Vertical navigation displays : pilot performance and workload during simulated constant-angle-of-descent GPS approaches

    DOT National Transportation Integrated Search

    2000-03-26

    This study compared the effect of alternative graphic or numeric cockpit display formats on the tactical aspects of vertical navigation (VNAV). Display formats included: a) a moving map with altitude range arc, b) the same format, supplemente...

  10. The 'E' factor -- evolving endodontics.

    PubMed

    Hunter, M J

    2013-03-01

    Endodontics is a constantly developing field, with new instruments, preparation techniques and sealants competing with trusted and traditional approaches to tooth restoration. Thus general dental practitioners must question and understand the significance of these developments before adopting new practices. In view of this, the aim of this article, and the associated presentation at the 2013 British Dental Conference & Exhibition, is to provide an overview of endodontic methods and constantly evolving best practice. The presentation will review current preparation techniques, comparing rotary versus reciprocation, and question current trends in restoration of the endodontically treated tooth.

  11. Estimation of regionalized compositions: A comparison of three methods

    USGS Publications Warehouse

    Pawlowsky, V.; Olea, R.A.; Davis, J.C.

    1995-01-01

    A regionalized composition is a random vector function whose components are positive and sum to a constant at every point of the sampling region. Consequently, the components of a regionalized composition are necessarily spatially correlated. This spatial dependence, induced by the constant-sum constraint, is a spurious spatial correlation and may lead to misinterpretations of statistical analyses. Furthermore, the cross-covariance matrices of the regionalized composition are singular, as is the coefficient matrix of the cokriging system of equations. Three methods of performing estimation or prediction of a regionalized composition at unsampled points are discussed: (1) the direct approach of estimating each variable separately; (2) the basis method, which is applicable only when a random function is available that can be regarded as the size of the regionalized composition under study; (3) the logratio approach, using the additive-log-ratio transformation proposed by J. Aitchison, which allows statistical analysis of compositional data. We present a brief theoretical review of these three methods and compare them using compositional data from the Lyons West Oil Field in Kansas (USA). It is shown that, although there are no important numerical differences, the direct approach leads to invalid results, whereas the basis method and the additive-log-ratio approach are comparable. © 1995 International Association for Mathematical Geology.
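
    For the logratio approach mentioned above, Aitchison's additive-log-ratio transform and its inverse are straightforward to write down; a minimal sketch with an illustrative composition, not the Lyons West data:

    ```python
    import numpy as np

    def alr(composition, denom_index=-1):
        """Additive log-ratio transform of a composition (parts summing to a
        constant): alr_i = ln(x_i / x_D), dropping the denominator part.
        Sketch following Aitchison's definition summarized in the abstract."""
        x = np.asarray(composition, dtype=float)
        x = x / x.sum()                       # close the composition to 1
        denom = x[denom_index]
        return np.log(np.delete(x, denom_index) / denom)

    def alr_inverse(y):
        """Map alr coordinates back to a composition summing to 1
        (denominator part assumed to be the last component)."""
        parts = np.append(np.exp(y), 1.0)
        return parts / parts.sum()

    # Illustrative 3-part composition (e.g., sand/silt/clay fractions).
    comp = [0.60, 0.25, 0.15]
    y = alr(comp)
    print(y, alr_inverse(y))
    ```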

  12. Comparing simple and complex approaches to simulate the impacts of soil water repellency on runoff and erosion in burnt Mediterranean forest slopes

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Catarina Simões Vieira, Diana; Keizer, Jan Jacob

    2017-04-01

    Fires impact soil hydrological properties, enhancing soil water repellency and therefore increasing the potential for surface runoff generation and soil erosion. In consequence, the successful application of hydrological models to post-fire conditions requires the appropriate simulation of the effects of soil water repellency on soil hydrology. This work compared three approaches to model soil water repellency impacts on soil hydrology in burnt eucalypt and pine forest slopes in central Portugal: 1) Daily approach, simulating repellency as a function of soil moisture, and influencing the maximum soil available water holding capacity. It is based on the Thornthwaite-Mather soil water modelling approach, and is parameterized with the soil's wilting point and field capacity, and a parameter relating soil water repellency with water holding capacity. It was tested with soil moisture data from burnt and unburnt hillslopes. This approach was able to simulate post-fire soil moisture patterns, which the model without repellency was unable to do. However, model parameters were different between the burnt and unburnt slopes, indicating that more research is needed to derive standardized parameters from commonly measured soil and vegetation properties. 2) Seasonal approach, pre-determining repellency at the seasonal scale (3 months) in four classes (from none to extreme). It is based on the Morgan-Morgan-Finney (MMF) runoff and erosion model, applied at the seasonal scale, and is parameterized with a parameter relating repellency class with field capacity. It was tested with runoff and erosion data from several experimental plots, and led to important improvements in runoff prediction over an approach with constant field capacity for all seasons (calibrated for repellency effects), but only slight improvements in erosion predictions. In contrast with the daily approach, the parameters could be reproduced between different sites. 3) Constant approach, specifying values for soil water repellency for the three years after the fire, and keeping them constant throughout the year. It is based on a daily Curve Number (CN) approach, and was incorporated directly in the Soil and Water Assessment Tool (SWAT) model and tested with erosion data from a burnt hillslope. This approach was able to successfully reproduce soil erosion. The results indicate that simplified approaches can be used to adapt existing models for post-fire simulation, taking repellency into account. Taking into account the seasonality of repellency seems more important to simulate surface runoff than erosion, possibly since simulating the larger runoff rates correctly is sufficient for erosion simulation. The constant approach can be applied directly in the parameterization of existing runoff and erosion models for soil loss and sediment yield prediction, while the seasonal approach can readily be developed as a next step, with further work being needed to assess if the approach and associated parameters can be applied in multiple post-fire environments.

  13. Tachyon constant-roll inflation

    NASA Astrophysics Data System (ADS)

    Mohammadi, A.; Saaidi, Kh.; Golanbari, T.

    2018-04-01

    The constant-roll inflation is studied where the inflaton is taken as a tachyon field. Based on this approach, the second slow-roll parameter is taken as a constant, which leads to a differential equation for the Hubble parameter. Finding an exact solution for the Hubble parameter is difficult, which leads us to a numerical solution. On the other hand, since in this formalism the slow-roll parameter η is constant and cannot be assumed to be necessarily small, the perturbation parameters must be reconsidered, which, in turn, results in new terms appearing in the amplitude of scalar perturbations and the scalar spectral index. Utilizing the numerical solution for the Hubble parameter, we estimate the perturbation parameter at the horizon exit time and compare it with observational data. The results show that, for specific values of the constant parameter η, we could have an almost scale-invariant amplitude of scalar perturbations. Finally, the attractor behavior for the solution of the model is presented, and we show that this feature is properly satisfied.

  14. Constant diurnal temperature regime alters the impact of simulated climate warming on a tropical pseudoscorpion

    NASA Astrophysics Data System (ADS)

    Zeh, Jeanne A.; Bonilla, Melvin M.; Su, Eleanor J.; Padua, Michael V.; Anderson, Rachel V.; Zeh, David W.

    2014-01-01

    Recent theory suggests that global warming may be catastrophic for tropical ectotherms. Although most studies addressing temperature effects in ectotherms utilize constant temperatures, Jensen's inequality and thermal stress considerations predict that this approach will underestimate warming effects on species experiencing daily temperature fluctuations in nature. Here, we tested this prediction in a neotropical pseudoscorpion. Nymphs were reared in control and high-temperature treatments under a constant daily temperature regime, and results compared to a companion fluctuating-temperature study. At constant temperature, pseudoscorpions outperformed their fluctuating-temperature counterparts. Individuals were larger, developed faster, and males produced more sperm, and females more embryos. The greatest impact of temperature regime involved short-term, adult exposure, with constant temperature mitigating high-temperature effects on reproductive traits. Our findings demonstrate the importance of realistic temperature regimes in climate warming studies, and suggest that exploitation of microhabitats that dampen temperature oscillations may be critical in avoiding extinction as tropical climates warm.

  15. Fining of Red Wine Monitored by Multiple Light Scattering.

    PubMed

    Ferrentino, Giovanna; Ramezani, Mohsen; Morozova, Ksenia; Hafner, Daniela; Pedri, Ulrich; Pixner, Konrad; Scampicchio, Matteo

    2017-07-12

    This work describes a new approach based on multiple light scattering to study red wine clarification processes. The whole spectral signal (1933 backscattering points along the length of each sample vial) was fitted by a multivariate kinetic model that was built on a three-step mechanism comprising (1) adsorption of wine colloids to fining agents, (2) aggregation into larger particles, and (3) sedimentation. Each step is characterized by a reaction rate constant. According to the first reaction, the results showed that gelatin was the most efficient fining agent with respect to the main objective, the clarification of the wine and the consequent increase in its limpidity. Such a trend was also discussed in relation to the results achieved by nephelometry, total phenols, ζ-potential, color, sensory, and electronic nose analyses. Also, higher concentrations of the fining agent (from 5 to 30 g/100 L) or higher temperatures (from 10 to 20 °C) sped up the process. Finally, the advantage of using the whole spectral signal vs classical univariate approaches was demonstrated by comparing the uncertainty associated with the rate constants of the proposed kinetic model. Overall, the multiple light scattering technique showed great potential for studying fining processes compared to classical univariate approaches.

  16. Microwave characterization of slotline on high resistivity silicon for antenna feed network

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.; Taub, Susan R.; Lee, Richard Q.; Young, Paul G.

    1993-01-01

    Conventional silicon wafers have low resistivity and consequently an unacceptably high dielectric attenuation constant. Microwave circuits for phased array antenna systems fabricated on these wafers therefore have low efficiency. By choosing a silicon substrate with sufficiently high resistivity, it is possible to make the dielectric attenuation constant of the interconnecting microwave transmission lines approach that of GaAs or InP. In order for this to be possible, the transmission lines must be characterized. In this presentation, the effective dielectric constant (ε_eff) and attenuation constant (α) of a slotline on a high-resistivity (5000 to 10 000 ohm-cm) silicon wafer will be discussed. The ε_eff and α are determined from the measured resonant frequencies and the corresponding insertion loss of a slotline ring resonator. The results for slotline will be compared with microstrip line and coplanar waveguide.
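
    For a ring resonator of the kind used above, ε_eff follows from the condition that the mean circumference equals an integer number of guided wavelengths at resonance. A hedged sketch with illustrative numbers, not the measured slotline data:

    ```python
    import math

    def eps_eff_from_ring(f_res_hz, mean_circumference_m, n):
        """Effective dielectric constant from the n-th resonance of a ring
        resonator: resonance occurs when the mean circumference equals n guided
        wavelengths, lambda_g = c / (f * sqrt(eps_eff)), so
        eps_eff = (n * c / (f * circumference))**2."""
        c = 299_792_458.0
        return (n * c / (f_res_hz * mean_circumference_m)) ** 2

    # Illustrative numbers only.
    print(eps_eff_from_ring(f_res_hz=10.2e9, mean_circumference_m=0.012, n=1))
    ```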

  17. Radiation induced dissolution of UO2 based nuclear fuel - A critical review of predictive modelling approaches

    NASA Astrophysics Data System (ADS)

    Eriksen, Trygve E.; Shoesmith, David W.; Jonsson, Mats

    2012-01-01

    Radiation induced dissolution of uranium dioxide (UO2) nuclear fuel and the consequent release of radionuclides to intruding groundwater are key processes in the safety analysis of future deep geological repositories for spent nuclear fuel. For several decades, these processes have been studied experimentally using both spent fuel and various types of simulated spent fuels. The latter have been employed since it is difficult to draw mechanistic conclusions from real spent nuclear fuel experiments. Several predictive modelling approaches have been developed over the last two decades. These models are largely based on experimental observations. In this work we have performed a critical review of the modelling approaches developed based on the large body of chemical and electrochemical experimental data. The main conclusions are: (1) the use of measured interfacial rate constants gives results in generally good agreement with experiment, compared to simulations where homogeneous rate constants are used; (2) the use of spatial dose rate distributions is particularly important when simulating the behaviour over short time periods; and (3) the steady-state approach (the rate of oxidant consumption is equal to the rate of oxidant production) provides a simple but fairly accurate alternative, but errors in the reaction mechanism and in the kinetic parameters used may not be revealed by simple benchmarking. It is essential to use experimentally determined rate constants and verified reaction mechanisms, irrespective of whether the approach is chemical or electrochemical.

  18. Metal–organic complexation in the marine environment

    PubMed Central

    Luther, George W; Rozan, Timothy F; Witter, Amy; Lewis, Brent

    2001-01-01

    We discuss the voltammetric methods that are used to assess metal–organic complexation in seawater. These consist of titration methods using anodic stripping voltammetry (ASV) and cathodic stripping voltammetry competitive ligand experiments (CSV-CLE). These approaches and a kinetic approach using CSV-CLE give similar information on the amount of excess ligand to metal in a sample and the conditional metal ligand stability constant for the excess ligand bound to the metal. CSV-CLE data using different ligands to measure Fe(III) organic complexes are similar. All these methods give conditional stability constants for which the side reaction coefficient for the metal can be corrected but not that for the ligand. Another approach, pseudovoltammetry, provides information on the actual metal–ligand complex(es) in a sample by doing ASV experiments where the deposition potential is varied more negatively in order to destroy the metal–ligand complex. This latter approach gives concentration information on each actual ligand bound to the metal as well as the thermodynamic stability constant of each complex in solution when compared to known metal–ligand complexes. In this case the side reaction coefficients for the metal and ligand are corrected. Thus, this method may not give identical information to the titration methods because the excess ligand in the sample may not be identical to some of the actual ligands binding the metal in the sample. PMID:16759421

  19. Mormon Clients' Experiences of Conversion Therapy: The Need for a New Treatment Approach

    ERIC Educational Resources Information Center

    Beckstead, A. Lee; Morrow, Susan L.

    2004-01-01

    Perspectives were gathered of 50 Mormon individuals who had undergone counseling to change their sexual orientation. The data were analyzed using the constant comparative method and participant verification, thereby developing a grounded theory. A model emerged that depicted participants' intrapersonal and interpersonal motivations for seeking…

  20. Intergenerational Challenges in Australian Jewish School Education

    ERIC Educational Resources Information Center

    Gross, Zehavit; Rutland, Suzanne D.

    2014-01-01

    The aim of this research is to investigate the intergenerational changes that have occurred in Australian Jewish day schools and the challenges these pose for religious and Jewish education. Using a grounded theory approach according to the constant comparative method (Strauss 1987), data from three sources (interviews [296], observations [27],…

  1. What Physicians Reason about during Admission Case Review

    ERIC Educational Resources Information Center

    Juma, Salina; Goldszmidt, Mark

    2017-01-01

    Research suggests that physicians perform multiple reasoning tasks beyond diagnosis during patient review. However, these remain largely theoretical. The purpose of this study was to explore reasoning tasks in clinical practice during patient admission review. The authors used a constant comparative approach--an iterative and inductive process of…

  2. Downsizings, Mergers, and Acquisitions: Perspectives of Human Resource Development Practitioners

    ERIC Educational Resources Information Center

    Shook, LaVerne; Roth, Gene

    2011-01-01

    Purpose: This paper seeks to provide perspectives of HR practitioners based on their experiences with mergers, acquisitions, and/or downsizings. Design/methodology/approach: This qualitative study utilized interviews with 13 HR practitioners. Data were analyzed using a constant comparative method. Findings: HR practitioners were not involved in…

  3. Riding the Wave of BYOD: Developing a Framework for Creative Pedagogies

    ERIC Educational Resources Information Center

    Cochrane, Thomas; Antonczak, Laurent; Keegan, Helen; Narayan, Vickel

    2014-01-01

    Moving innovation in teaching and learning beyond isolated short-term projects is one of the holy grails of educational technology research, which is littered with the debris of a constant stream of comparative studies demonstrating no significant difference between innovative technologies and traditional pedagogical approaches. Meanwhile, the…

  4. Mainstream Early Childhood Education Teacher Preparation for Inclusion in Zimbabwe

    ERIC Educational Resources Information Center

    Majoko, Tawanda

    2017-01-01

    This study examined mainstream teachers' preparation for inclusion in Early Childhood Education (ECE). Embedded within the "core expertise" of inclusive pedagogy, this descriptive study drew on a sample of 23 mainstream teachers purposively drawn from the Midlands educational province of Zimbabwe. A constant comparative approach of…

  5. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
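
    As a rough illustration of the statistics involved (not the authors' code or data), the sketch below computes a Bland-Altman bias with limits of agreement, an ordinary least-squares fit, and a Deming regression for a pair of hypothetical assay outputs; the simulated "constant + proportional error" is only there to give the functions something to detect.

```python
import numpy as np

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two paired measurement sets."""
    diff = y - x
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def deming(x, y, lam=1.0):
    """Deming regression slope/intercept; lam is the assumed ratio of error variances."""
    mx, my = x.mean(), y.mean()
    sxx = np.sum((x - mx) ** 2)
    syy = np.sum((y - my) ** 2)
    sxy = np.sum((x - mx) * (y - my))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical paired variant allele fractions (reference assay vs. assay in validation).
rng = np.random.default_rng(0)
ref = rng.uniform(0.05, 0.5, 40)
new = 0.03 + 0.95 * ref + rng.normal(0, 0.01, 40)   # constant + proportional error

bias, lo, hi = bland_altman(ref, new)
slope_ols, intercept_ols = np.polyfit(ref, new, 1)
slope_dem, intercept_dem = deming(ref, new)
print(f"Bland-Altman bias={bias:.4f}, LoA=({lo:.4f}, {hi:.4f})")
print(f"OLS: slope={slope_ols:.3f}, intercept={intercept_ols:.3f}")
print(f"Deming: slope={slope_dem:.3f}, intercept={intercept_dem:.3f}")
```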

  6. Metal adsorption onto bacterial surfaces: development of a predictive approach

    NASA Astrophysics Data System (ADS)

    Fein, Jeremy B.; Martin, Aaron M.; Wightman, Peter G.

    2001-12-01

    Aqueous metal cation adsorption onto bacterial surfaces can be successfully modeled by means of a surface complexation approach. However, relatively few stability constants for metal-bacterial surface complexes have been measured. In order to determine the bacterial adsorption behavior of cations that have not been studied in the laboratory, predictive techniques are required that enable estimation of the stability constants of bacterial surface complexes. In this study, we use a linear free-energy approach to compare previously measured stability constants for Bacillus subtilis metal-carboxyl surface complexes with aqueous metal-organic acid anion stability constants. The organic acids that we consider are acetic, oxalic, citric, and tiron. We add to this limited data set by conducting metal adsorption experiments onto Bacillus subtilis, determining bacterial surface stability constants for Co, Nd, Ni, Sr, and Zn. The adsorption behavior of each of the metals studied here was described well by considering metal-carboxyl bacterial surface complexation only, except for the Zn adsorption behavior, which required carboxyl and phosphoryl complexation to obtain a suitable fit to the data. The best correlation between bacterial carboxyl surface complexes and aqueous organic acid anion stability constants was obtained by means of metal-acetate aqueous complexes, with a linear correlation coefficient of 0.97. This correlation applies only to unhydrolyzed aqueous cations and only to carboxyl binding of those cations, and it does not predict the binding behavior under conditions where metal binding to other bacterial surface site types occurs. However, the relationship derived in this study permits estimation of the carboxyl site adsorption behavior of a wide range of aqueous metal cations for which there is an absence of experimental data. This technique, coupled with the observation of similar adsorption behaviors across bacterial species (Yee and Fein, 2001), enables estimation of the effects of bacterial adsorption on metal mobilities for a large number of environmental and geologic applications.

  7. A Rayleighian approach for modeling kinetics of ionic transport in polymeric media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Rajeev

    2017-02-14

    Here, we report a theoretical approach for analyzing the impedance of ionic liquids (ILs) and charged polymers such as polymerized ionic liquids (PolyILs) within linear response. The approach is based on the Rayleigh dissipation function formalism, which provides a computational framework for a systematic study of various factors, including polymer dynamics, that affect the impedance. We present an analytical expression for the impedance within linear response by constructing a one-dimensional model for ionic transport in ILs/PolyILs. This expression is used to extract mutual diffusion constants, the length scale of mutual diffusion, and thicknesses of a low-dielectric layer on the electrodes from the broadband dielectric spectroscopy (BDS) measurements done for an IL and three PolyILs. Also, static dielectric permittivities of the IL and the PolyILs are determined. The extracted mutual diffusion constants are compared with the self-diffusion constants of ions measured using pulsed field gradient (PFG) fluorine nuclear magnetic resonance (NMR). For the first time, excellent agreement between the diffusivities extracted from the Electrode Polarization spectra (EPS) of the IL/PolyILs and those measured using PFG-NMR is found, which allows the use of the EPS and PFG-NMR techniques in a complementary manner for a general understanding of the ionic transport.

  8. Exploratory studies of new avenues to achieve high electromechanical response and high dielectric constant in polymeric materials

    NASA Astrophysics Data System (ADS)

    Huang, Cheng

    High-performance soft electronic materials are key elements in advanced electronic devices for a broad range of applications, including capacitors, actuators, artificial muscles and organs, smart materials and structures, microelectromechanical systems (MEMS) and microfluidic devices, acoustic devices, and sensors. This thesis explores new approaches to improve the electromechanical and dielectric response of these materials. By making use of novel material phenomena, such as the large anisotropy of the dipolar response in liquid crystals (LCs) and all-organic composites in which high dielectric constant organic solids and conductive polymers are either physically blended into or chemically grafted to a polymer matrix, we demonstrate that a high dielectric constant and an electromechanical conversion efficiency comparable to that of ceramic materials can be achieved. A nanocomposite approach can also be utilized to improve the performance of electronic electroactive polymers (EAPs) and composites; for example, exchange coupling between fillers and a matrix with very large dielectric contrast can significantly enhance the dielectric response as well as the electromechanical response when the heterogeneity size of the composite is comparable to the exchange length. In addition to dielectric composites, in which high dielectric constant fillers raise the dielectric constant of the composite, conductive percolation can also lead to a high dielectric constant in polymeric materials. An all-polymer percolative composite is introduced which exhibits a very high dielectric constant (>7,000). The flexible all-polymer composites with a high dielectric constant make it possible to induce a high electromechanical response under a much reduced electric field in field-effect electroactive polymer (EAP) actuators (a strain of 2.65% with an elastic energy density of 0.18 J/cm3 can be achieved under a field of 16 V/μm). Agglomeration of the particles can also be effectively prevented by in situ preparation. High dielectric constant copper phthalocyanine oligomer and conductive polyaniline oligomer were successfully bonded to a polyurethane backbone to form fully functionalized nano-phase polymers. The improved dispersibility of the oligomers in the polymer matrix allows the system to self-organize into nanocomposites possessing an oligomer nanophase (below 30 nm) within the fully functionalized polymers. The resulting nanophase polymers significantly enhance the interface effect, which, through exchange coupling, raises the dielectric response markedly above that expected from simple mixing rules for dielectric composites. Consequently, these nano-phase polymers offer a high dielectric constant (near 1,000 at 20 Hz), improve the breakdown field and mechanical properties, and exhibit a high electromechanical response. A longitudinal strain of more than -14% can be induced under a much reduced field, 23 V/μm, with an elastic energy density higher than 1 J/cm3. The elastic modulus is as high as 100 MPa, and the transverse strain is 7% under the same field. (Abstract shortened by UMI.)

  9. Acceleration techniques in the univariate Lipschitz global optimization

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Powerful novel local tuning and local improvement techniques are described, along with traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
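
    For readers unfamiliar with geometric Lipschitz methods, a minimal sketch of the classical Piyavskii–Shubert algorithm is given below, with a fixed user-supplied Lipschitz constant rather than the local tuning techniques studied in the paper; the test function and the constant L are illustrative only.

```python
import math

def piyavskii_shubert(f, a, b, L, n_iter=200):
    """Minimize f on [a, b] assuming |f(x) - f(y)| <= L |x - y| (saw-tooth lower bounds)."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_iter):
        # Lower bound of f on each subinterval, from the two bounding lines of slope +/- L.
        best_i, best_lb = 0, float("inf")
        for i in range(len(xs) - 1):
            lb = 0.5 * (ys[i] + ys[i + 1]) - 0.5 * L * (xs[i + 1] - xs[i])
            if lb < best_lb:
                best_i, best_lb = i, lb
        # New trial point: where the two bounding lines of the chosen interval intersect.
        i = best_i
        x_new = 0.5 * (xs[i] + xs[i + 1]) - (ys[i + 1] - ys[i]) / (2.0 * L)
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    k = min(range(len(xs)), key=lambda j: ys[j])
    return xs[k], ys[k]

# Illustrative test function; its Lipschitz constant on [0, 10] is below 5.
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
x_star, f_star = piyavskii_shubert(f, 0.0, 10.0, L=5.0)
print(f"x* ≈ {x_star:.4f}, f(x*) ≈ {f_star:.4f}")
```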

  10. A Song to Remember: Emerging Adults Recall Memorable Music

    ERIC Educational Resources Information Center

    Lippman, Julia R.; Greenwood, Dara N.

    2012-01-01

    The present study employs a mixed methods approach to understanding the psychological functions and contexts of music use. Seventy-six emerging adults selected a single piece of music that they considered personally significant and elaborated on the reasons for this significance in response to written prompts. A constant comparative analysis of…

  11. Exploring How Secondary Pre-Service Teachers' Use Online Social Bookmarking to Envision Literacy in the Disciplines

    ERIC Educational Resources Information Center

    Colwell, Jamie; Gregory, Kristen

    2016-01-01

    This study considers how pre-service teachers envision disciplinary literacy through an online social bookmarking project. Thirty secondary pre-service teachers participated in the project through an undergraduate literacy course. Online bookmarks and post-project reflections were collected and analyzed using a constant comparative approach to…

  12. Determining the critical relative humidity at which the glassy to rubbery transition occurs in polydextrose using an automatic water vapor sorption instrument.

    PubMed

    Yuan, Xiaoda; Carter, Brady P; Schmidt, Shelly J

    2011-01-01

    Similar to an increase in temperature at constant moisture content, water vapor sorption by an amorphous glassy material at constant temperature causes the material to transition into the rubbery state. However, comparatively little research has investigated the measurement of the critical relative humidity (RHc) at which the glass transition occurs at constant temperature. Thus, the central objective of this study was to investigate the relationship between the glass transition temperature (Tg), determined using thermal methods, and the RHc obtained using an automatic water vapor sorption instrument. Dynamic dewpoint isotherms were obtained for amorphous polydextrose from 15 to 40 °C. RHc was determined using an optimized 2nd-derivative method; however, 2 simpler RHc determination methods were also tested as a secondary objective. No statistical difference was found between the 3 RHc methods. Differential scanning calorimetry (DSC) Tg values were determined using polydextrose equilibrated from 11.3% to 57.6% RH. Both standard DSC and modulated DSC (MDSC) methods were employed, since some of the polydextrose thermograms exhibited a physical aging peak. Thus, a tertiary objective was to compare Tg values obtained using 3 different methods (DSC first scan, DSC rescan, and MDSC), to determine which method(s) yielded the most accurate Tg values. In general, onset and midpoint DSC first scan and MDSC Tg values were similar, whereas onset and midpoint DSC rescan values were different. State diagrams of RHc and experimental temperature and Tg and %RH were compared. These state diagrams, though obtained via very different methods, showed relatively good agreement, confirming our hypothesis that water vapor sorption isotherms can be used to directly detect the glassy to rubbery transition. Practical Application: The food polymer science (FPS) approach, pioneered by Slade and Levine, is being successfully applied in the food industry for understanding, improving, and developing food processes and products. However, despite its extreme usefulness, the Tg, a key element of the FPS approach, remains a challenging parameter to routinely measure in amorphous food materials, especially complex materials. This research demonstrates that RHc values, obtained at constant temperature using an automatic water vapor sorption instrument, can be used to detect the glassy to rubbery transition and are similar to the Tg values obtained at constant %RH, especially considering the very different approaches of these 2 methods--a transition from surface adsorption to bulk absorption (water vapor sorption) versus a step change in the heat capacity (DSC thermal method).
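
    A minimal numeric illustration of the second-derivative idea (not the optimized method of the study): given sorption data as mass gain versus relative humidity, RHc is taken near the point of maximum curvature of the isotherm. The isotherm below is synthetic.

```python
import numpy as np

# Synthetic dynamic dewpoint isotherm: slow surface adsorption at low RH,
# sharp upturn into bulk absorption above the critical RH (illustrative shape only).
rh = np.linspace(5, 90, 200)                      # % relative humidity
mass = 0.02 * rh + 4.0 / (1.0 + np.exp(-(rh - 60.0) / 3.0))

d1 = np.gradient(mass, rh)                        # first derivative of mass gain
d2 = np.gradient(d1, rh)                          # second derivative
rhc = rh[np.argmax(d2)]                           # maximum curvature ~ onset of the transition
print(f"Estimated critical relative humidity RHc ≈ {rhc:.1f}%")
```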

  13. An embedded mesh method using piecewise constant multipliers with stabilization: mathematical and numerical aspects

    DOE PAGES

    Puso, M. A.; Kokko, E.; Settgast, R.; ...

    2014-10-22

    An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for the stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.

  14. Is this the right normalization? A diagnostic tool for ChIP-seq normalization.

    PubMed

    Angelini, Claudia; Heller, Ruth; Volkinshtein, Rita; Yekutieli, Daniel

    2015-05-09

    ChIP-seq experiments are becoming a standard approach for genome-wide profiling of protein-DNA interactions, such as detecting transcription factor binding sites, histone modification marks and RNA Polymerase II occupancy. However, when comparing a ChIP sample against a control sample, such as Input DNA, normalization procedures have to be applied in order to remove experimental sources of bias. Despite the substantial impact that the choice of normalization method can have on the results of a ChIP-seq data analysis, the assessment of these methods is not fully explored in the literature. In particular, there are no diagnostic tools that show whether the applied normalization is indeed appropriate for the data being analyzed. In this work we propose a novel diagnostic tool to examine the appropriateness of the estimated normalization procedure. By plotting the empirical densities of log relative risks in bins of equal read count, along with the estimated normalization constant after logarithmic transformation, the researcher is able to assess the appropriateness of the estimated normalization constant. We use the diagnostic plot to evaluate the appropriateness of the estimates obtained by CisGenome, NCIS and CCAT on several real data examples. Moreover, we show the impact that the choice of the normalization constant can have on standard peak-calling tools such as MACS or SICER. Finally, we propose a novel procedure for controlling the FDR using sample swapping. This procedure makes use of the estimated normalization constant in order to gain power over the naive choice of constant (used in MACS and SICER), which is the ratio of the total number of reads in the ChIP and Input samples. Linear normalization approaches aim to estimate a scale factor, r, to adjust for different sequencing depths when comparing ChIP versus Input samples. The estimated scaling factor can easily be incorporated in many peak-calling algorithms to improve the accuracy of peak identification. The diagnostic plot proposed in this paper can be used to assess how adequate ChIP/Input normalization constants are, and thus allows the user to choose the most adequate estimate for the analysis.
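
    A rough sketch of the binning-and-comparison idea behind such a diagnostic (not the authors' implementation) is given below: genomic windows are grouped into count-ordered bins and the median log relative risk in each bin is compared with the log of a candidate scale factor; all counts are simulated.

```python
import numpy as np

def diagnostic_summary(chip, input_, r, n_bins=5):
    """Compare per-bin medians of log relative risk (ChIP vs Input) with log(r)."""
    chip = chip.astype(float) + 0.5          # small offset avoids log(0)
    input_ = input_.astype(float) + 0.5
    total = chip + input_
    log_rr = np.log(chip / input_)
    # Group windows into count-ordered bins of (roughly) equal size.
    order = np.argsort(total)
    for b, idx in enumerate(np.array_split(order, n_bins)):
        med = np.median(log_rr[idx])
        print(f"bin {b}: median log-RR = {med:.3f} (log r = {np.log(r):.3f})")

# Hypothetical per-window read counts and the naive scale factor (total-read ratio).
rng = np.random.default_rng(1)
input_counts = rng.poisson(20, 5000)
chip_counts = rng.poisson(26, 5000)
r_naive = chip_counts.sum() / input_counts.sum()
diagnostic_summary(chip_counts, input_counts, r_naive)
```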

  15. A preliminary study of the benefits of flying by ground speed during final approach

    NASA Technical Reports Server (NTRS)

    Hastings, E. C., Jr.

    1978-01-01

    A study was conducted to evaluate the benefits of an approach technique that uses a constant ground speed during final approach. It was determined that the technique reduced the capacity losses in headwinds experienced with the currently used constant airspeed technique. The benefits of the technique were found to increase as headwinds increased and as the wake-avoidance separation intervals were reduced. An additional benefit noted for the constant ground speed technique was a reduction in stopping distance variance due to the approach wind environment.

  16. Methane steam reforming rates over Pt, Rh and Ni(111) accounting for H tunneling and for metal lattice vibrations

    NASA Astrophysics Data System (ADS)

    German, Ernst D.; Sheintuch, Moshe

    2017-02-01

    Microkinetic models of methane steam reforming (MSR) over bare platinum and rhodium (111) surfaces are analyzed in the present work using calculated rate constants. The individual rate constants are classified into three different sets: (i) rate constants of adsorption and desorption steps of CH4, H2O, CO and H2; (ii) rate constants of dissociation and formation of A-H bonds (A = C, O, and H); and (iii) rate constants of dissociation and formation of the C-O bond. The rate constants of sets (i) and (iii) are calculated using transition state theory and published thermochemical data. The rate constants of the H-dissociation reactions (set (ii)) are calculated in terms of a previously developed approach that accounts for thermal metal lattice vibrations and for H tunneling through a potential barrier whose height depends on the distance of A-H from the surface. Pre-exponential factors of several group (ii) steps were calculated to be lower than the traditional kBT/h due to the tunneling effect. Surface composition and overall MSR rates over the platinum and rhodium surfaces are compared with those over a nickel surface, showing that operating conditions strongly affect the activity order of the catalysts.
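
    As a generic illustration of how a transition-state-theory rate constant with a simple tunneling correction can be evaluated (this is not the distance-dependent tunneling model used for set (ii), and the barrier parameters are invented), consider the sketch below.

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R = 8.314462618        # gas constant, J/(mol*K)

def eyring_rate(T, dG_act_kJmol, imag_freq_cm1=None):
    """k = (kB*T/h) * exp(-ΔG‡/RT), optionally scaled by a Wigner tunneling factor."""
    k = (KB * T / H) * math.exp(-dG_act_kJmol * 1e3 / (R * T))
    if imag_freq_cm1 is not None:
        # Wigner correction: kappa = 1 + (h*nu‡ / (kB*T))^2 / 24
        nu = imag_freq_cm1 * 2.99792458e10          # cm^-1 -> Hz
        kappa = 1.0 + (H * nu / (KB * T)) ** 2 / 24.0
        k *= kappa
    return k

# Hypothetical barrier of 90 kJ/mol with a 1000i cm^-1 imaginary frequency, at 800 K.
print(f"k(800 K) ≈ {eyring_rate(800.0, 90.0, 1000.0):.3e} s^-1")
```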

  17. Nuclear shielding constants by density functional theory with gauge including atomic orbitals

    NASA Astrophysics Data System (ADS)

    Helgaker, Trygve; Wilson, Philip J.; Amos, Roger D.; Handy, Nicholas C.

    2000-08-01

    Recently, we introduced a new density-functional theory (DFT) approach for the calculation of NMR shielding constants. First, a hybrid DFT calculation (using 5% exact exchange) is performed on the molecule to determine Kohn-Sham orbitals and their energies; second, the constants are determined as in nonhybrid DFT theory, that is, the paramagnetic contribution to the constants is calculated from a noniterative, uncoupled sum-over-states expression. The initial results suggested that this semiempirical DFT approach gives shielding constants in good agreement with the best ab initio and experimental data; in this paper, we further validate this procedure, using London orbitals in the theory, having implemented DFT into the ab initio code DALTON. Calculations on a number of small and medium-sized molecules confirm that our approach produces shieldings in excellent agreement with experiment and the best ab initio results available, demonstrating its potential for the study of shielding constants of large systems.

  18. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory.

  19. Utilizing Calibrated GPS Reflected Signals to Estimate Soil Reflectivity and Dielectric Constant: Results from SMEX02

    NASA Technical Reports Server (NTRS)

    Katzberg, Stephen J.; Torres, Omar; Grant, Michael S.; Masters, Dallas

    2006-01-01

    Extensive reflected GPS data were collected using a GPS reflectometer installed on an HC130 aircraft during the Soil Moisture Experiment 2002 (SMEX02) near Ames, Iowa. At the same time, widespread surface truth data were acquired in the form of point soil moisture profiles, areal sampling of near-surface soil moisture, total green biomass, and precipitation history, among others. Previously, there had been no reported efforts to calibrate reflected GPS data sets acquired over land. This paper reports the results of two approaches to calibration of the data that yield consistent results. It is shown that estimating the strength of the reflected signals by either (1) assuming an approximately specular surface reflection or (2) inferring the surface slope probability density and associated normalization constants gives essentially the same results for the conditions encountered in SMEX02. The corrected data are converted to surface reflectivity and then to dielectric constant as a test of the calibration approaches. Utilizing the extensive in situ soil-moisture-related data, this paper also presents the results of comparing the GPS-inferred relative dielectric constant with the Wang-Schmugge model frequently used to relate volume moisture content to dielectric constant. It is shown that the calibrated GPS reflectivity estimates follow the expected dependence of permittivity on volume moisture, but with the following qualification: the soil moisture value governing the reflectivity appears to come from only the top 1-2 centimeters of soil, a result consistent with results found for other microwave techniques operating at L-band. Nevertheless, the experimentally derived dielectric constant is generally lower than predicted. Possible explanations for this result are presented.
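
    The reflectivity-to-permittivity step can be illustrated with the normal-incidence Fresnel relation; the sketch below is a simplification (the actual GPS geometry involves oblique incidence and circular polarization) and the reflectivity value is hypothetical.

```python
import math

def reflectivity_from_eps(eps):
    """Power reflectivity of a smooth surface at normal incidence (Fresnel)."""
    g = (1.0 - math.sqrt(eps)) / (1.0 + math.sqrt(eps))
    return g * g

def eps_from_reflectivity(refl):
    """Invert the normal-incidence Fresnel relation for the relative permittivity."""
    r = math.sqrt(refl)
    return ((1.0 + r) / (1.0 - r)) ** 2

# Hypothetical calibrated power reflectivity of 0.15.
eps = eps_from_reflectivity(0.15)
print(f"reflectivity 0.15 -> relative dielectric constant ≈ {eps:.2f}")
print(f"round-trip check: {reflectivity_from_eps(eps):.3f}")
```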

  20. Fast Shepard interpolation on graphics processing units: potential energy surfaces and dynamics for H + CH4 → H2 + CH3.

    PubMed

    Welsch, Ralph; Manthe, Uwe

    2013-04-28

    A strategy for the fast evaluation of Shepard interpolated potential energy surfaces (PESs) utilizing graphics processing units (GPUs) is presented. Speed ups of several orders of magnitude are gained for the title reaction on the ZFWCZ PES [Y. Zhou, B. Fu, C. Wang, M. A. Collins, and D. H. Zhang, J. Chem. Phys. 134, 064323 (2011)]. Thermal rate constants are calculated employing the quantum transition state concept and the multi-layer multi-configurational time-dependent Hartree approach. Results for the ZFWCZ PES are compared to rate constants obtained for other ab initio PESs and problems are discussed. A revised PES is presented. Thermal rate constants obtained for the revised PES indicate that an accurate description of the anharmonicity around the transition state is crucial.
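
    The unmodified Shepard scheme underlying such PESs is simply inverse-distance weighting over reference geometries; a toy NumPy version is sketched below. The modified Shepard interpolation used for the ZFWCZ PES additionally attaches second-order Taylor expansions to each data point, which is omitted here, and the "potential" sampled below is purely illustrative.

```python
import numpy as np

def shepard_interpolate(x, data_points, data_values, p=4, eps=1e-12):
    """Inverse-distance-weighted (Shepard) interpolation of scattered data.
    x: query point, data_points: (N, d) array, data_values: (N,) array."""
    d = np.linalg.norm(data_points - x, axis=1)
    if np.any(d < eps):                       # query coincides with a data point
        return data_values[np.argmin(d)]
    w = 1.0 / d ** p
    return np.dot(w, data_values) / w.sum()

# Toy 2D "potential": sin(x)*cos(y) sampled at random reference geometries.
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(200, 2))
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
q = np.array([0.3, -0.7])
print(f"interpolated: {shepard_interpolate(q, pts, vals):.4f}, "
      f"exact: {np.sin(q[0]) * np.cos(q[1]):.4f}")
```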

  1. Electromagnetic scattering from two-dimensional thick material junctions

    NASA Technical Reports Server (NTRS)

    Ricoy, M. A.; Volakis, John L.

    1990-01-01

    The problem of plane wave diffraction by an arbitrary symmetric two-dimensional junction is examined, where Generalized Impedance Boundary Conditions (GIBCs) and Generalized Sheet Transition Conditions (GSTCs) are employed to simulate the slabs. GIBCs and GSTCs are constructed for multilayer planar slabs of arbitrary thickness, and the resulting GIBC/GSTC reflection coefficients are compared with exact counterparts to evaluate the GIBCs/GSTCs. The plane wave diffraction by a multilayer material slab recessed in a perfectly conducting ground plane is formulated and solved via the Generalized Scattering Matrix Formulation (GSMF) in conjunction with the dual integral equation approach. Various scattering patterns are computed and validated with exact results where possible. The diffraction by a material discontinuity in a thick dielectric/ferrite slab is considered by modelling the constituent slabs with GSTCs. A non-unique solution in terms of unknown constants is obtained, and these constants are evaluated for the recessed slab geometry by comparison with the solution obtained therein. Several other simplified cases are also presented and discussed. An eigenfunction expansion method is introduced to determine the unknown solution constants in the general case. This procedure is applied to the non-unique solution in terms of unknown constants, and scattering patterns are presented for various slab junctions and compared with alternative results where possible.

  2. Evolution in totally constrained models: Schrödinger vs. Heisenberg pictures

    NASA Astrophysics Data System (ADS)

    Olmedo, Javier

    2016-06-01

    We study the relation between two evolution pictures that are currently considered for totally constrained theories. Both descriptions are based on Rovelli’s evolving constants approach, where one identifies a (possibly local) degree of freedom of the system as an internal time. This method is well understood classically in several situations. The purpose of this paper is to further analyze this approach at the quantum level. Concretely, we will compare the (Schrödinger-like) picture where the physical states evolve in time with the (Heisenberg-like) picture in which one defines parametrized observables (or evolving constants of the motion). We will show that in the particular situations considered in this paper (the parametrized relativistic particle and a spatially flat homogeneous and isotropic spacetime coupled to a massless scalar field) both descriptions are equivalent. We will finally comment on possible issues and on the genericness of the equivalence between both pictures.

  3. Core-core and core-valence correlation

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1988-01-01

    The effect of (1s) core correlation on properties and energy separations was analyzed using full configuration-interaction (FCI) calculations. The Be 1S-1P, C 3P-5S, and CH+ 1Σ+-1Π separations, as well as the CH+ spectroscopic constants, dipole moment, and 1Σ+-1Π transition dipole moment, were studied. The results of the FCI calculations are compared to those obtained using approximate methods. In addition, the generation of atomic natural orbital (ANO) basis sets, as a method for contracting a primitive basis set for both valence and core correlation, is discussed. When both core-core and core-valence correlation are included in the calculation, no suitable truncated CI approach consistently reproduces the FCI, and contraction of the basis set is very difficult. If the (nearly constant) core-core correlation is eliminated, and only the core-valence correlation is included, CASSCF/MRCI approaches reproduce the FCI results and basis set contraction is significantly easier.

  4. Physically founded phonon dispersions of few-layer materials and the case of borophene

    DOE PAGES

    Carrete, Jesús; Li, Wu; Lindsay, Lucas; ...

    2016-04-21

    By building physically sound interatomic force constants, we offer evidence of the universal presence of a quadratic phonon branch in all unstrained 2D materials, thus contradicting much of the existing literature. Through a reformulation of the interatomic force constants (IFCs) in terms of internal coordinates, we find that a delicate balance between the IFCs is responsible for this quadraticity. We use this approach to predict the thermal conductivity of Pmmn borophene, which is comparable to that of MoS2, and displays a remarkable in-plane anisotropy. Ultimately, these qualities may enable the efficient heat management of borophene devices in potential nanoelectronic applications.

  5. A Cross-National Comparison of Art Curricula for Kindergarten-Aged Children

    ERIC Educational Resources Information Center

    Kim, Heejin; Kim, Hajin

    2017-01-01

    The aim of this research is to make a cross-national comparison of art curricula for kindergarten-aged children across five countries--Korea, Norway, New Zealand, Slovakia and Singapore. A document analysis was conducted on the five curricula using a constant comparative approach for selected qualitative statements to analyse two major constructs:…

  6. A transmission line model for propagation in elliptical core optical fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georgantzos, E.; Boucouvalas, A. C.; Papageorgiou, C.

    The calculation of mode propagation constants of elliptical core fibers has been the purpose of extended research leading to many notable methods, with the classic step index solution based on Mathieu functions. This paper seeks to derive a new innovative method for the determination of mode propagation constants in single mode fibers with an elliptic core by modeling the elliptical fiber as a series of connected coupled transmission line elements. We develop a matrix formulation of the transmission line, and the resonance of the circuits is used to calculate the mode propagation constants. The technique, used with success in the case of cylindrical fibers, is now being extended to the case of fibers with an elliptical cross section. The advantage of this approach is that it is very well suited to calculating the mode dispersion of arbitrary refractive index profile elliptical waveguides. The analysis begins with the deployment of Maxwell's equations adjusted for elliptical coordinates. Further algebraic analysis leads to a set of equations where we are faced with the appearance of harmonics. Taking into consideration a predefined fixed number of harmonics simplifies the problem and enables the use of the resonant circuits approach. For each case, programs have been created in Matlab, providing a series of results (mode propagation constants) that are further compared with corresponding results from the well-known Mathieu functions method.

  7. Linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2011-06-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
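
    The factorization idea can be checked with a few lines of computer algebra (not part of the paper, which is deliberately pencil-and-paper): writing p(D) = (D - r1)(D - r2) turns a second-order constant-coefficient equation into two first-order problems solved in sequence. The equation and forcing term below are chosen only for illustration.

```python
import sympy as sp

x, t = sp.symbols('x t')

def solve_first_order(r, g, x0=0):
    """Particular solution of y' - r*y = g with y(x0) = 0:
    y(x) = exp(r*x) * Integral(exp(-r*t) * g(t), (t, x0, x))."""
    return sp.exp(r * x) * sp.integrate(sp.exp(-r * t) * g.subs(x, t), (t, x0, x))

# Solve y'' - 3 y' + 2 y = exp(3x) by factoring p(D) = (D - 1)(D - 2).
f = sp.exp(3 * x)
u = solve_first_order(2, f)        # first stage: (D - 2) u = f
y = solve_first_order(1, u)        # second stage: (D - 1) y = u

# Verify that the residual of the original second-order equation vanishes.
residual = sp.simplify(sp.diff(y, x, 2) - 3 * sp.diff(y, x) + 2 * y - f)
print("particular solution:", sp.simplify(y))
print("residual:", residual)
```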

  8. Absolute Calibration of Si iRMs used for Si Paleo-nutrient proxies

    NASA Astrophysics Data System (ADS)

    Vocke, Robert; Rabb, Savelas

    2016-04-01

    The Avogadro Project is an ongoing international effort, coordinated by the International Bureau of Weights and Measures (BIPM) and the International Avogadro Coordination (IAC), to redefine the SI unit mole in terms of the Avogadro constant and the SI unit kg in terms of the Planck constant. One of the outgrowths of this effort has been the development of a novel, precise and highly accurate method to measure calibrated (absolute) isotopic ratios that are traceable to the SI (Vocke et al., 2014 Metrologia 51, 361; Azuma et al., 2015 Metrologia 52, 360). This approach has also been able to produce absolute Si isotope ratio data with lower levels of uncertainty when compared to the traditional "Atomic Weights" method of absolute isotope ratio measurement. Silicon isotope variations (reported as delta(30Si) and delta(29Si)) in silicic acid dissolved in ocean waters, in biogenic silica and in diatoms are extremely informative paleo-nutrient proxies. The utility and comparability of such measurements, however, depend on calibration with artifact isotopic Reference Materials (iRMs). We will be reporting new measurements on the iRMs NBS-28 (RM 8546 - Silica Sand), Diatomite, Big Batch and SRM 990 using the Avogadro measurement approach, comparing them with prior assessments of these iRMs.

  9. Communication: Prediction of the rate constant of bimolecular hydrogen exchange in the water dimer using an ab initio potential energy surface.

    PubMed

    Wang, Yimin; Bowman, Joel M; Huang, Xinchuan

    2010-09-21

    We report the properties of two novel transition states of the bimolecular hydrogen exchange reaction in the water dimer, based on an ab initio water dimer potential [A. Shank et al., J. Chem. Phys. 130, 144314 (2009)]. The realism of the two transition states is assessed by comparing structures, energies, and harmonic frequencies obtained from the potential energy surface and new high-level ab initio calculations. The rate constant for the exchange is obtained using conventional transition state theory with a tunneling correction. We employ a one-dimensional approach for the tunneling calculations using a relaxed potential from the full-dimensional potential in the imaginary-frequency normal mode of the saddle point, Q(im). The accuracy of this one-dimensional approach has been shown for the ground-state tunneling splittings for H and D-transfer in malonaldehyde and for the D+H(2) reaction [Y. Wang and J. M. Bowman, J. Chem. Phys. 129, 121103 (2008)]. This approach is applied to calculate the rate constant for the H(2)O+H(2)O exchange and also for H(2)O+D(2)O→2HOD. The local zero-point energy is also obtained using diffusion Monte Carlo calculations in the space of real-frequency-saddle-point normal modes, as a function of Q(im).

  10. Theoretical and Numerical Approaches for Determining the Reflection and Transmission Coefficients of OPEFB-PCL Composites at X-Band Frequencies

    PubMed Central

    Ahmad, Ahmad F.; Abbas, Zulkifly; Obaiys, Suzan J.; Ibrahim, Norazowa; Hashim, Mansor; Khaleel, Haider

    2015-01-01

    Bio-composites of oil palm empty fruit bunch (OPEFB) fibres and polycaprolactones (PCL) with a thickness of 1 mm were prepared and characterized. The composites produced from these materials are low in density, inexpensive, environmentally friendly, and possess good dielectric characteristics. The magnitudes of the reflection and transmission coefficients of OPEFB fibre-reinforced PCL composites with different percentages of filler were measured using a rectangular waveguide in conjunction with a microwave vector network analyzer (VNA) in the X-band frequency range. In contrast to the effective medium theory, which states that polymer-based composites with a high dielectric constant can be obtained by doping a filler with a high dielectric constant into a host material with a low dielectric constant, this paper demonstrates that the use of a low filler percentage (12.2% OPEFB) and a high matrix percentage (87.8% PCL) provides excellent results for the dielectric constant and loss factor, whereas 63.8% filler material with 36.2% host material results in lower values for both the dielectric constant and loss factor. The open-ended probe technique (OEC), connected with the Agilent vector network analyzer (VNA), is used to determine the dielectric properties of the materials under investigation. The comparative approach indicates that the mean relative error of FEM is smaller than that of NRW in terms of the corresponding S21 magnitude. The present calculation of the matrix/filler percentages endorses the exact amounts of substrate utilized in various physics applications. PMID:26474301

  11. Nonassociative plasticity model for cohesionless materials and its implementation in soil-structure interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashmi, Q.S.E.

    A constitutive model based on rate-independent elastoplasticity concepts is developed and used to simulate the behavior of geologic materials under arbitrary three-dimensional stress paths. The model accounts for various factors such as friction, stress path, and stress history that influence the behavior of geologic materials. A hierarchical approach is adopted whereby models of progressively increasing sophistication are developed from a basic isotropic-hardening associative model. Nonassociativeness is introduced as a correction or perturbation to the basic model. Deviation from normality of the plastic-strain increments to the yield surface F is captured through nonassociativeness. The plastic potential Q is obtained by applying a correction to F. This simplified approach restricts the number of extra parameters required to define the plastic potential Q. The material constants associated with the model are identified, and they are evaluated for three different sands (Leighton Buzzard, Munich, and McCormick Ranch). The model is then verified by comparing predictions with laboratory tests from which the constants were found, and with typical tests not used for finding the constants. Based on the above findings, a soil-footing system is analyzed using finite-element techniques.

  12. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The new Cimel technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model also has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the limitations of the lunar irradiance model, which are the largest error source of traditional calibration methods. In addition, this new transfer calibration approach is easy to use in the field, since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  13. Scene-based nonuniformity correction using local constant statistics.

    PubMed

    Zhang, Chao; Zhao, Wenyi

    2008-06-01

    In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since the existing spatial variations are treated as noise. We introduce a new statistical method to reduce these ghosting artifacts. Our method uses local-constant statistics: rather than assuming that the temporal signal distribution is the same at every pixel, it assumes the distribution is constant only within a local region around each pixel and may vary on larger scales. Under the assumption that the fixed pattern noise is concentrated in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and an LWIR sequence to show how effective it is in reducing noise and ghosting artifacts.

  14. Constant pressure mode extended simple gradient liquid chromatography system for micro and nanocolumns.

    PubMed

    Šesták, Jozef; Kahle, Vladislav

    2014-07-11

    Performing gradient liquid chromatography at constant pressure instead of constant flow rate has serious potential for shortening the analysis time and increasing the productivity of HPLC instruments that use gradient methods. However, in the constant pressure mode the decrease in column permeability over long periods of time negatively affects the repeatability of retention times. Thus a volume-based approach, in which the detector signal is plotted as a function of retention volume, must be taken into consideration. Traditional HPLC equipment, however, requires quite complex hardware and software modifications in order to work at constant pressure and in the volume-based mode. In this short communication, a low-cost and easily feasible pressure-controlled extension of the previously described simple gradient liquid chromatography platform is proposed. A test mixture of four nitro esters was separated by a 10-60% (v/v) acetone/water gradient, and high repeatability of retention volumes at 20 MPa (RSD less than 0.45%) was realized. Separations were also performed at different pressures (20, 25, and 31 MPa), and only small variations of the retention volumes (up to 0.8%) were observed. In this particular case, a gain in analysis speed of 7% compared to the constant flow mode was realized at constant pressure.

  15. Different Approaches to the Professional Development of Principals: A Comparative Study of New Zealand and Singapore

    ERIC Educational Resources Information Center

    Retna, Kala S.

    2015-01-01

    One of the key factors that contribute to effective management of schools is the professional development of principals. The role of the principal has become more complex with the dynamic and constant reforms in the educational environment. The present study focuses on the professional development of principals in New Zealand and Singapore. The…

  16. Uncertainty modelling of real-time observation of a moving object: photogrammetric measurements

    NASA Astrophysics Data System (ADS)

    Ulrich, Thomas

    2015-04-01

    Photogrammetric systems are widely used in the field of industrial metrology to measure kinematic tasks such as tracking robot movements. In order to assess spatiotemporal deviations of a kinematic movement, it is crucial to have a reliable uncertainty for the kinematic measurements. Common methods to evaluate the uncertainty in kinematic measurements include approximations specified by the manufacturers, various analytical adjustment methods, and Kalman filters. Here a hybrid system estimator is applied in conjunction with a kinematic measurement model. This method can be applied to processes that include various types of kinematic behaviour: constant velocity, variable acceleration, or variable turn rates. Additionally, it has been shown that the approach is in accordance with the GUM (Guide to the Expression of Uncertainty in Measurement). The approach is compared to the Kalman filter using simulated data to achieve an overall error calculation. Furthermore, the new approach is used for the analysis of a rotating system, as this system has both a constant and a variable turn rate. As the new approach reduces overshoots, it is more appropriate for analysing kinematic processes than the Kalman filter. In comparison with the manufacturer's approximations, the new approach takes account of kinematic behaviour, with an improved description of the real measurement process. Therefore, this approach is well suited to the analysis of kinematic processes with unknown changes in kinematic behaviour.
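
    For context on the Kalman-filter baseline mentioned above, a bare-bones one-dimensional constant-velocity Kalman filter might look like the sketch below; the noise covariances, time step and simulated trajectory are all invented, and this is not the hybrid estimator of the paper.

```python
import numpy as np

def kalman_cv(z, dt, q=0.05, r=0.5):
    """1-D constant-velocity Kalman filter over position-only measurements z."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition for (pos, vel)
    H = np.array([[1.0, 0.0]])                     # only position is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],      # process noise (white-acceleration model)
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                            # measurement noise variance
    xhat = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        # predict
        xhat = F @ xhat
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        xhat = xhat + K @ (np.array([zk]) - H @ xhat)
        P = (np.eye(2) - K @ H) @ P
        out.append(xhat.copy())
    return np.array(out)

# Hypothetical noisy positions of a target moving at 0.2 units per step.
rng = np.random.default_rng(2)
true_pos = 0.2 * np.arange(100)
meas = true_pos + rng.normal(0, 0.5, 100)
est = kalman_cv(meas, dt=1.0)
print(f"final estimate: pos={est[-1, 0]:.2f}, vel={est[-1, 1]:.3f}")
```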

  17. Fundamental Physics from Observations of White Dwarf Stars

    NASA Astrophysics Data System (ADS)

    Bainbridge, M. B.; Barstow, M. A.; Reindl, N.; Barrow, J. D.; Webb, J. K.; Hu, J.; Preval, S. P.; Holberg, J. B.; Nave, G.; Tchang-Brillet, L.; Ayres, T. R.

    2017-03-01

    Variations in fundamental constants provide an important test of theories of grand unification. Potentially, white dwarf spectra allow us to directly observe variation in fundamental constants at locations of high gravitational potential. We study hot, metal-polluted white dwarf stars, combining far-UV spectroscopic observations, atomic physics, atmospheric modelling and fundamental physics, in the search for variation in the fine structure constant. This registers as small but measurable shifts in the observed wavelengths of highly ionized Fe and Ni lines when compared to laboratory wavelengths. Measurements of these shifts were performed by Berengut et al (2013) using high-resolution STIS spectra of G191-B2B, demonstrating the validity of the method. We have extended this work by (a) using new (high precision) laboratory wavelengths, (b) refining the analysis methodology (incorporating robust techniques from previous studies of quasars), and (c) enlarging the sample of white dwarf spectra. A successful detection would be the first direct measurement of a gravitational field effect on a bare constant of nature. We describe our approach and present preliminary results.

  18. Full four-component relativistic calculations of the one-bond 77Se-13C spin-spin coupling constants in the series of selenium heterocycles and their parent open-chain selenides.

    PubMed

    Rusakov, Yury Yu; Rusakova, Irina L; Krivdin, Leonid B

    2014-05-01

    Four-component relativistic calculations of (77)Se-(13)C spin-spin coupling constants have been performed in the series of selenium heterocycles and their parent open-chain selenides. It has been found that relativistic effects play an essential role in the selenium-carbon coupling mechanism and could result in a contribution of as much as 15-25% of the total values of the one-bond selenium-carbon spin-spin coupling constants. In the overall contribution of the relativistic effects to the total values of (1)J(Se,C), the scalar relativistic corrections (negative in sign) by far dominate over the spin-orbit ones (positive in sign), the latter being less than 5%, as compared to the former (ca 20%). A combination of the nonrelativistic second-order polarization propagator approach (CC2) with the four-component relativistic density functional theory scheme is recommended as a versatile tool for the calculation of (1)J(Se,C). Solvent effects in the values of (1)J(Se,C) calculated within the polarizable continuum model for solvents with different dielectric constants (ε 2.2-78.4) are next to negligible, decreasing negative (1)J(Se,C) in absolute value by only about 1 Hz. The use of the locally dense basis set approach applied here for the calculation of (77)Se-(13)C spin-spin coupling constants is fully justified, resulting in a dramatic decrease in computational cost with only a 0.1-0.2-Hz loss of accuracy.

  19. Brillouin light scattering investigation of interfacial Dzyaloshinskii–Moriya interaction in ultrathin Co/Pt nanostripe arrays

    NASA Astrophysics Data System (ADS)

    Bouloussa, H.; Yu, J.; Roussigné, Y.; Belmeguenai, M.; Stashkevitch, A.; Yang, H.; Chérif, S. M.

    2018-06-01

    Interface Dzyaloshinskii–Moriya interaction (iDMI) is known to induce spin-wave non-reciprocity in ultrathin films. Brillouin light scattering has been used here to investigate how lateral size reduction can affect the iDMI constant in Pt (6 nm)/Co (3 nm)-based nanostripe arrays. For this, 100 and 300 nm-wide nanostripes were fabricated using e-beam lithography and ion etching, and their behaviour was then compared to that of the reference continuous film. The experimental data showed that the measured iDMI-induced non-reciprocity is slightly different for the 100 nm-wide nanostripes with respect to the other samples. This suggests that the width of the nanostripes can influence the strength of the apparent iDMI if this dimension is comparable to the attenuation length of the spin waves propagating within the nanostripes. Indeed, in contrast to the other samples, the linear behaviour of the frequency difference (non-reciprocity) versus wavenumber for the 100 nm-wide nanostripes has been analysed and discussed through two approaches: either a different iDMI constant, or an iDMI constant similar to that of the continuous film with a non-zero intercept at zero wavenumber.

  20. Ultraslow myosin molecular motors of placental contractile stem villi in humans.

    PubMed

    Lecarpentier, Yves; Claes, Victor; Lecarpentier, Edouard; Guerin, Catherine; Hébert, Jean-Louis; Arsalane, Abdelilah; Moumen, Abdelouahab; Krokidis, Xénophon; Michel, Francine; Timbely, Oumar

    2014-01-01

    Human placental stem villi (PSV) present contractile properties. In vitro mechanics were investigated in 40 human PSV. Contraction of PSV was induced by both KCl exposure (n = 20) and electrical tetanic stimulation (n = 20). Isotonic contractions were registered at several load levels ranging from zero load up to the isometric load. The tension-velocity relationship was found to be hyperbolic. This made it possible to apply the A. Huxley formalism to determine the rate constants for myosin cross-bridge (CB) attachment and detachment, the CB single force, the catalytic constant, the myosin content, and the maximum myosin ATPase activity. These molecular characteristics of myosin CBs did not differ under either KCl exposure or tetanus. A comparative approach was established from studies previously published in the literature and carried out by means of a similar method. Compared with values described in mammalian striated muscles, the myosin CB rate constants for attachment and detachment in human PSV were about 10^3 times lower, whereas the myosin ATPase activity was 10^5 times lower. Up to now, the CB kinetics of contractile cells arranged along the long axis of the placental sheath appear to be the slowest ever observed in any mammalian contractile tissue.
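
    The Huxley cross-bridge formalism itself involves more machinery than fits here, but its starting point, fitting the hyperbolic tension-velocity (Hill) relation to isotonic load-velocity data, can be sketched as follows; the data and parameter values are invented and this is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_velocity(P, a, b, P0):
    """Hill's hyperbolic force-velocity relation solved for shortening velocity:
    (P + a)(V + b) = (P0 + a) b  ->  V = b (P0 - P) / (P + a)."""
    return b * (P0 - P) / (P + a)

# Hypothetical isotonic load-velocity data (arbitrary units), with added noise.
rng = np.random.default_rng(3)
P = np.linspace(0.0, 0.95, 12)                    # relative load
V_obs = hill_velocity(P, a=0.25, b=0.30, P0=1.0) + rng.normal(0, 0.01, P.size)

popt, _ = curve_fit(hill_velocity, P, V_obs, p0=[0.3, 0.3, 1.0])
a_fit, b_fit, P0_fit = popt
print(f"a={a_fit:.3f}, b={b_fit:.3f}, P0={P0_fit:.3f}, Vmax={hill_velocity(0, *popt):.3f}")
```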

  1. Field-theoretic simulations of block copolymer nanocomposites in a constant interfacial tension ensemble.

    PubMed

    Koski, Jason P; Riggleman, Robert A

    2017-04-28

    Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.

  2. Targeted proteomics coming of age - SRM, PRM and DIA performance evaluated from a core facility perspective.

    PubMed

    Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph

    2016-08-01

    Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems, while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support researchers in finding and applying the best-suited analytical approach. In order to implement a solid foundation for this decision-making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches, single reaction monitoring (SRM), parallel reaction monitoring (PRM), and data-independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing low-concentrated analytes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Quantifying stream nutrient uptake from ambient to saturation with instantaneous tracer additions

    NASA Astrophysics Data System (ADS)

    Covino, T. P.; McGlynn, B. L.; McNamara, R.

    2009-12-01

    Stream nutrient tracer additions and spiraling metrics are frequently used to quantify stream ecosystem behavior. However, standard approaches limit our understanding of aquatic biogeochemistry. Specifically, the relationship between in-stream nutrient concentration and stream nutrient spiraling has not been characterized. The standard constant-rate (steady-state) approach to stream spiraling parameter estimation, either through elevating nutrient concentration or adding isotopically labeled tracers (e.g., 15N), provides little information regarding the stream kinetic curve that represents the uptake-concentration relationship analogous to the Michaelis-Menten curve. These standard approaches provide single or a few data points and often focus on estimating ambient uptake under the conditions at the time of the experiment. Here we outline and demonstrate a new method using instantaneous nutrient additions and dynamic analyses of breakthrough curve (BTC) data to characterize the full relationship between spiraling metrics and nutrient concentration. We compare the results from these dynamic analyses to BTC-integrated and standard steady-state approaches. Our results indicate good agreement between these three approaches, but we highlight the advantages of our dynamic method. Specifically, our new dynamic method provides a cost-effective and efficient approach to: 1) characterize full concentration-spiraling metric curves; 2) estimate ambient spiraling metrics; 3) estimate the Michaelis-Menten parameters maximum uptake (Umax) and the half-saturation constant (Km) from the developed uptake-concentration kinetic curves; and 4) measure dynamic nutrient spiraling in larger rivers where steady-state approaches are impractical.
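
    As a rough illustration of step 3, the sketch below fits a Michaelis-Menten curve U = Umax·C/(Km + C) to uptake-concentration pairs with SciPy; the concentration and uptake values are made up for illustration and are not data from this study.

        # Minimal sketch: fitting Michaelis-Menten uptake kinetics to
        # hypothetical uptake-vs-concentration points derived from BTC analysis.
        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(c, umax, km):
            """Uptake U as a function of nutrient concentration c."""
            return umax * c / (km + c)

        # hypothetical concentration and areal uptake pairs (illustrative units)
        conc = np.array([5.0, 10.0, 20.0, 50.0, 100.0, 200.0, 400.0])
        uptake = np.array([1.8, 3.2, 5.1, 8.0, 9.6, 10.8, 11.3])

        (umax_fit, km_fit), cov = curve_fit(michaelis_menten, conc, uptake, p0=[10.0, 50.0])
        print(f"Umax ~ {umax_fit:.2f}, Km ~ {km_fit:.1f}")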

  4. The Conceptual Change Approach to Teaching Chemical Equilibrium

    ERIC Educational Resources Information Center

    Canpolat, Nurtac; Pinarbasi, Tacettin; Bayrakceken, Samih; Geban, Omer

    2006-01-01

    This study investigates the effect of a conceptual change approach over traditional instruction on students' understanding of chemical equilibrium concepts (e.g. dynamic nature of equilibrium, definition of equilibrium constant, heterogeneous equilibrium, qualitative interpreting of equilibrium constant, changing the reaction conditions). This…

  5. Does the Addition of Inert Gases at Constant Volume and Temperature Affect Chemical Equilibrium?

    ERIC Educational Resources Information Center

    Paiva, Joao C. M.; Goncalves, Jorge; Fonseca, Susana

    2008-01-01

    In this article we examine three approaches, leading to different conclusions, for answering the question "Does the addition of inert gases at constant volume and temperature modify the state of equilibrium?" In the first approach, the answer is yes as a result of a common students' alternative conception; the second approach, valid only for ideal…

  6. Application of an Artificial Neural Network to the Prediction of OH Radical Reaction Rate Constants for Evaluating Global Warming Potential.

    PubMed

    Allison, Thomas C

    2016-03-03

    Rate constants for reactions of chemical compounds with hydroxyl radical are a key quantity used in evaluating the global warming potential of a substance. Experimental determination of these rate constants is essential, but it can also be difficult and time-consuming to produce. High-level quantum chemistry predictions of the rate constant can suffer from the same issues. Therefore, it is valuable to devise estimation schemes that can give reasonable results on a variety of chemical compounds. In this article, the construction and training of an artificial neural network (ANN) for the prediction of rate constants at 298 K for reactions of hydroxyl radical with a diverse set of molecules is described. Input to the ANN consists of counts of the chemical bonds and bends present in the target molecule. The ANN is trained using 792 •OH reaction rate constants taken from the NIST Chemical Kinetics Database. The mean unsigned percent error (MUPE) for the training set is 12%, and the MUPE of the testing set is 51%. It is shown that the present methodology yields rate constants of reasonable accuracy for a diverse set of inputs. The results are compared to high-quality literature values and to another estimation scheme. This ANN methodology is expected to be of use in a wide range of applications for which •OH reaction rate constants are required. The model uses only information that can be gathered from a 2D representation of the molecule, making the present approach particularly appealing, especially for screening applications.
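
    A hedged sketch of the general workflow described above, using a small scikit-learn network regressed on bond-count descriptors; the descriptor set, synthetic targets, and network size are assumptions for illustration and do not reproduce the published model trained on the NIST database.

        # Illustrative ANN regression from bond-count features to a stand-in
        # for log10(k_OH); data here are synthetic, not NIST values.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # columns: counts of C-H, C-C, C=C, C-O, O-H bonds (hypothetical descriptor set)
        X = rng.integers(0, 10, size=(200, 5)).astype(float)
        # synthetic target playing the role of log10 of the rate constant
        y = -11.5 + 0.05 * X @ np.array([1.0, 0.3, 2.0, 0.5, 1.5]) + rng.normal(0, 0.1, 200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
        print("test R^2:", round(ann.score(X_te, y_te), 3))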

  7. Theoretical Study of the Transverse Dielectric Constant of Superlattices and Their Alloys. Ph.D Thesis

    NASA Technical Reports Server (NTRS)

    Kahen, K. B.

    1986-01-01

    The optical properties of III to V binary and ternary compounds and GaAs-Al(x)Ga(1-x)As superlattices are determined by calculating the real and imaginary parts of the transverse dielectric constant. Emphasis is given to determining the influence of different material and superlattice parameters on the values of the index of refraction and absorption coefficient. In order to calculate the optical properties of a material, it is necessary to compute its electronic band structure. This was accomplished by introducing a partition band structure approach based on a combination of the k·p and nonlocal pseudopotential techniques. The advantages of this approach are that it is accurate, computationally fast, analytical, and flexible. These last two properties enable incorporation of additional effects into the model, such as disorder scattering, which occurs for alloy materials, and excitons. Furthermore, the model is easily extended to more complex structures, for example multiple quantum wells and superlattices. The results for the transverse dielectric constant and absorption coefficient of bulk III to V compounds compare well with other one-electron band structure models, and the calculations show that for small frequencies, the index of refraction is determined mainly by the contribution of the outer regions of the Brillouin zone.

  8. Using solid phase micro extraction to determine salting-out (Setschenow) constants for hydrophobic organic chemicals.

    PubMed

    Jonker, Michiel T O; Muijs, Barry

    2010-06-01

    With increasing ionic strength, the aqueous solubility and activity of organic chemicals are altered. This so-called salting-out effect causes the hydrophobicity of the chemicals to be increased and sorption in the marine environment to be more pronounced than in freshwater systems. The process can be described with empirical salting-out or Setschenow constants, which traditionally are determined by comparing aqueous solubilities in freshwater and saline water. Aqueous solubilities of hydrophobic organic chemicals (HOCs), however, are difficult to determine, which might partly explain the limited size of the existing database of Setschenow constants for these chemicals. In this paper, we propose an alternative approach for determining the constants, which is based on the use of solid phase micro extraction (SPME) fibers. Partitioning of polycyclic aromatic hydrocarbons (PAHs) to SPME fibers increased about 1.7 times when going from de-ionized water to seawater. From the log-linear relationship between SPME fiber-water partition coefficients and ionic strength, Setschenow constants were derived, which measured on average 0.35 L mol⁻¹. These values agreed with literature values existing for some of the investigated PAHs and were independent of solute hydrophobicity or molar volume. Based on the present data, SPME seems to be a convenient and suitable alternative technique to determine Setschenow constants for HOCs. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
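
    Assuming the log-linear form described above, a Setschenow constant can be read off as the slope of log10(fiber-water partition coefficient) against salt molarity, as in this minimal sketch with illustrative numbers.

        # Minimal sketch: Setschenow constant Ks from the slope of
        # log10(K_fiber-water) versus salt molarity; values are illustrative.
        import numpy as np

        salt_molarity = np.array([0.0, 0.1, 0.25, 0.5, 0.7])      # mol/L
        log_kf = np.array([4.00, 4.035, 4.09, 4.17, 4.25])        # log10 of fiber-water partition coefficient

        ks, intercept = np.polyfit(salt_molarity, log_kf, 1)
        print(f"Setschenow constant Ks ~ {ks:.2f} L/mol")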

  9. Comparing Results from Constant Comparative and Computer Software Methods: A Reflection about Qualitative Data Analysis

    ERIC Educational Resources Information Center

    Putten, Jim Vander; Nolen, Amanda L.

    2010-01-01

    This study compared qualitative research results obtained by manual constant comparative analysis with results obtained by computer software analysis of the same data. An investigation of issues of trustworthiness and accuracy ensued. Results indicated that the inductive constant comparative data analysis generated 51 codes and two coding levels…

  10. Dynamical approach to the cosmological constant.

    PubMed

    Mukohyama, Shinji; Randall, Lisa

    2004-05-28

    We consider a dynamical approach to the cosmological constant. There is a scalar field with a potential whose minimum occurs at a generic, but negative, value for the vacuum energy, and it has a nonstandard kinetic term whose coefficient diverges at zero curvature as well as the standard kinetic term. Because of the divergent coefficient of the kinetic term, the lowest energy state is never achieved. Instead, the cosmological constant automatically stalls at or near zero. The merit of this model is that it is stable under radiative corrections and leads to stable dynamics, despite the singular kinetic term. The model is not complete, however, in that some reheating is required. Nonetheless, our approach can at the very least reduce fine-tuning by 60 orders of magnitude or provide a new mechanism for sampling possible cosmological constants and implementing the anthropic principle.

  11. Repressing the effects of variable speed harmonic orders in operational modal analysis

    NASA Astrophysics Data System (ADS)

    Randall, R. B.; Coats, M. D.; Smith, W. A.

    2016-10-01

    Discrete frequency components such as machine shaft orders can disrupt the operation of normal Operational Modal Analysis (OMA) algorithms. With constant speed machines, they have been removed using time synchronous averaging (TSA). This paper compares two approaches for varying speed machines. In one method, signals are transformed into the order domain and, after the removal of shaft speed related components by a cepstral notching method, are transformed back to the time domain to allow normal OMA. In the other, simpler approach an exponential short-pass lifter is applied directly to the time domain cepstrum to enhance the modal information at the expense of other disturbances. For simulated gear signals with speed variations of both ±5% and ±15%, the simpler approach was found to give better results. The TSA method is shown not to work in either case. The paper compares the results with those obtained using a stationary random excitation.

  12. Simulating the Refractive Index Structure Constant (Cn2) in the Surface Layer at Antarctica with a Mesoscale Model

    NASA Astrophysics Data System (ADS)

    Qing, Chun; Wu, Xiaoqing; Li, Xuebin; Tian, Qiguo; Liu, Dong; Rao, Ruizhong; Zhu, Wenyue

    2018-01-01

    In this paper, we introduce an approach wherein the Weather Research and Forecasting (WRF) model is coupled with the bulk aerodynamic method to estimate the surface layer refractive index structure constant (Cn2) above Taishan Station in Antarctica. First, we use the measured meteorological parameters to estimate Cn2 using the bulk aerodynamic method, and second, we use the WRF model output parameters to estimate Cn2 using the bulk aerodynamic method. Finally, the corresponding Cn2 values from the micro-thermometer are compared with the Cn2 values estimated using the WRF model coupled with the bulk aerodynamic method. We analyzed the statistical operators (the bias, root mean square error (RMSE), bias-corrected RMSE (σ), and correlation coefficient (Rxy)) in a 20-day data set to assess how this approach performs. In addition, we employ contingency tables to investigate the estimation quality of this approach, which provides complementary key information with respect to the bias, RMSE, σ, and Rxy. The quantitative results are encouraging and confirm the good performance of this approach. The main conclusion of this study is that this approach can help optimize observing time in astronomical applications and provides complementary key information for potential astronomical sites.
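
    The statistical operators named above can be computed as in the short sketch below, applied here to hypothetical measured versus model-estimated log10(Cn2) series; the definitions follow common usage and the data are synthetic, not from this study.

        # Sketch: bias, RMSE, bias-corrected RMSE (sigma), and correlation Rxy
        # for measured vs. estimated log10(Cn2) series (synthetic data).
        import numpy as np

        def scores(measured, estimated):
            diff = estimated - measured
            bias = diff.mean()
            rmse = np.sqrt((diff ** 2).mean())
            sigma = np.sqrt(rmse ** 2 - bias ** 2)       # bias-corrected RMSE
            rxy = np.corrcoef(measured, estimated)[0, 1]
            return bias, rmse, sigma, rxy

        rng = np.random.default_rng(1)
        measured = -15.5 + 0.8 * rng.standard_normal(480)    # hourly values over 20 days
        estimated = measured + 0.2 + 0.4 * rng.standard_normal(480)
        print(scores(measured, estimated))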

  13. Warm "pasta" phase in the Thomas-Fermi approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avancini, Sidney S.; Menezes, Debora P.; Chiacchiera, Silvia

    In the present article, the "pasta" phase is studied at finite temperatures within a Thomas-Fermi (TF) approach. Relativistic mean-field models, both with constant and density-dependent couplings, are used to describe this frustrated system. We compare the present results with previous ones obtained within a phase-coexistence description and conclude that the TF approximation gives rise to a richer inner "pasta" phase structure and the homogeneous matter appears at higher densities. Finally, the transition density calculated within TF is compared with the results for this quantity obtained with other methods.

  14. Artifacts correction for T1rho imaging with constant amplitude spin-lock

    NASA Astrophysics Data System (ADS)

    Chen, Weitian

    2017-01-01

    T1rho imaging with constant amplitude spin-lock is prone to artifacts in the presence of B1 RF and B0 field inhomogeneity. Despite significant technological progress, improvements on the robustness of constant amplitude spin-lock are necessary in order to use it for routine clinical practice. This work proposes methods to simultaneously correct for B1 RF and B0 field inhomogeneity in constant amplitude spin-lock. By setting the maximum B1 amplitude of the excitation adiabatic pulses equal to the expected constant amplitude spin-lock frequency, the spins become aligned along the effective field throughout the spin-lock process. This results in T1rho-weighted images free of artifacts, despite the spatial variation of the effective field caused by B1 RF and B0 field inhomogeneity. When the pulse is long, the relaxation effect during the adiabatic half passage may result in a non-negligible error in the mono-exponential relaxation model. A two-acquisition approach is presented to solve this issue. Simulation, phantom, and in-vivo scans demonstrate the proposed methods achieve superior image quality compared to existing methods, and that the two-acquisition method is effective in resolving the relaxation effect during the adiabatic half passage.

  15. Sensorless Load Torque Estimation and Passivity Based Control of Buck Converter Fed DC Motor

    PubMed Central

    Kumar, S. Ganesh; Thilagar, S. Hosimin

    2015-01-01

    Passivity-based control of a DC motor in a sensorless configuration is proposed in this paper. Exact tracking error dynamics passive output feedback control is used for stabilizing the speed of a Buck converter fed DC motor under various load torques such as constant type, fan type, propeller type, and unknown load torques. Under load conditions, a sensorless online algebraic approach is proposed and compared with a sensorless reduced-order observer approach; the former produces a better response in estimating the load torque. Sensitivity analysis is also performed to select the appropriate control variables. Simulation and experimental results fully confirm the superiority of the proposed approach. PMID:25893208

  16. A log-sinh transformation for data normalization and variance stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.

    2012-05-01

    When quantifying model prediction uncertainty, it is statistically convenient to represent model errors as normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
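
    A minimal sketch of the transform in the form commonly quoted for this work, z = (1/b)·ln(sinh(a + b·y)), together with its inverse; the parameter values are illustrative, and the exact parameterization should be checked against the paper.

        # Sketch of an assumed log-sinh transform and its inverse.
        import numpy as np

        def log_sinh(y, a, b):
            return np.log(np.sinh(a + b * y)) / b

        def inv_log_sinh(z, a, b):
            return (np.arcsinh(np.exp(b * z)) - a) / b

        y = np.array([0.1, 1.0, 5.0, 20.0, 100.0])   # e.g. positively skewed streamflow values
        z = log_sinh(y, a=0.1, b=0.05)
        print(z)
        print(inv_log_sinh(z, a=0.1, b=0.05))        # recovers y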

  17. Computational modeling approaches to quantitative structure-binding kinetics relationships in drug discovery.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2018-03-21

    Simple comparative correlation analyses and quantitative structure-kinetics relationship (QSKR) models highlight the interplay of kinetic rates and binding affinity as an essential feature in drug design and discovery. The choice of the molecular series, and their structural variations, used in QSKR modeling is fundamental to understanding the mechanistic implications of ligand and/or drug-target binding and/or unbinding processes. Here, we discuss the implications of linear correlations between kinetic rates and binding affinity constants and the relevance of the computational approaches to QSKR modeling. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Improving the treatment of coarse-grain electrostatics: CVCEL.

    PubMed

    Ceres, N; Lavery, R

    2015-12-28

    We propose an analytic approach for calculating the electrostatic energy of proteins or protein complexes in aqueous solution. This method, termed CVCEL (Circular Variance Continuum ELectrostatics), is fitted to Poisson calculations and is able to reproduce the corresponding energies for different choices of solute dielectric constant. CVCEL thus treats both solute charge interactions and charge self-energies, and it can also deal with salt solutions. Electrostatic damping notably depends on the degree of solvent exposure of the charges, quantified here in terms of circular variance, a measure that reflects the vectorial distribution of the neighbors around a given center. CVCEL energies can be calculated rapidly and have simple analytical derivatives. This approach avoids the need for calculating effective atomic volumes or Born radii. After describing how the method was developed, we present test results for coarse-grain proteins of different shapes and sizes, using different internal dielectric constants and different salt concentrations and also compare the results with those from simple distance-dependent models. We also show that the CVCEL approach can be used successfully to calculate the changes in electrostatic energy associated with changes in protein conformation or with protein-protein binding.
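
    As a rough illustration of the burial descriptor, the circular variance can be taken as one minus the norm of the mean unit vector pointing from a site to its neighbors (near 0 for an exposed site, approaching 1 for a buried one); the coordinates below are arbitrary, and this is not a CVCEL implementation.

        # Sketch: circular variance of the neighbor directions around a center.
        import numpy as np

        def circular_variance(center, neighbors):
            vectors = neighbors - center
            units = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
            return 1.0 - np.linalg.norm(units.mean(axis=0))

        center = np.zeros(3)
        # neighbors clustered on one side of the center: an "exposed" site
        half_space = np.array([[1, 0, 0], [0.8, 0.2, 0.1], [0.9, -0.3, 0.2], [1.1, 0.1, -0.2]])
        print(circular_variance(center, half_space))   # well below 1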

  19. Universal behavior in ideal slip

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John; Smith, John R.

    1991-01-01

    The slip energies and stresses are computed for defect-free crystals of Ni, Cu, Ag, and Al using the many-atom approach. A simple analytical expression for the slip energies is obtained, leading to a universal form for slip, with the energy scaled by the surface energy and displacement scaled by the lattice constant. Maximum stresses are found to be somewhat larger than but comparable with experimentally determined maximum whisker strengths.

  20. Thermodiffusion in concentrated ferrofluids: A review and current experimental and numerical results on non-magnetic thermodiffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprenger, Lisa, E-mail: Lisa.Sprenger@tu-dresden.de; Lange, Adrian; Odenbach, Stefan

    2013-12-15

    Ferrofluids are colloidal suspensions consisting of magnetic nanoparticles dispersed in a carrier liquid. Their thermodiffusive behaviour is rather strong compared to molecular binary mixtures, leading to a Soret coefficient (S_T) of 0.16 K⁻¹. Former experiments with dilute magnetic fluids have been done with thermogravitational columns or horizontal thermodiffusion cells by different research groups. Considering the horizontal thermodiffusion cell, a former analytical approach has been used to solve the phenomenological diffusion equation in one dimension assuming a constant concentration gradient over the cell's height. The current experimental work is based on the horizontal separation cell and emphasises the comparison of the concentration development in differently concentrated magnetic fluids and at different temperature gradients. The ferrofluid investigated is the kerosene-based EMG905 (Ferrotec), to be compared with the APG513A (Ferrotec), both containing magnetite nanoparticles. The experiments prove that the separation process linearly depends on the temperature gradient and that a constant concentration gradient develops in the setup due to the separation. Analytical one-dimensional and numerical three-dimensional approaches to solve the diffusion equation are derived and compared with the solution used so far for dilute fluids, to see if formerly made assumptions also hold for higher concentrated fluids. Both the analytical and numerical solutions, either in a phenomenological or a thermodynamic description, are able to reproduce the separation signal gained from the experiments. The Soret coefficient can then be determined to be 0.184 K⁻¹ in the analytical case and 0.29 K⁻¹ in the numerical case. Former theoretical approaches for dilute magnetic fluids underestimate the strength of the separation in the case of a concentrated ferrofluid.

  1. Platelet-rich plasma versus steroid injection for subacromial impingement syndrome.

    PubMed

    Say, F; Gurler, D; Bulbul, M

    2016-04-01

    To compare the 6-week and 6-month outcome in 60 patients who received a single-dose injection of platelet-rich plasma (PRP) or steroid for subacromial impingement syndrome (SIS). 22 men and 38 women (mean age, 49.7 years) opted to receive a single-dose injection of PRP (n=30) or steroid (n=30) for SIS that had not responded to conservative treatment for >3 months. The PRP or a mixture of 1 ml 40 mg methylprednisolone and 8 ml prilocaine was administered via a dorsolateral approach through the interval just beneath the dorsal acromial edge. Both groups were instructed to perform standard rotator cuff stretching and strengthening exercises for 6 weeks. The use of non-steroid anti-inflammatory drugs was prohibited. Patients were evaluated before and 6 weeks and 6 months after treatment using the Constant score, visual analogue scale (VAS) for pain, and range of motion (ROM) of the shoulder. No local or systemic complication occurred. Improvement in the Constant score and VAS for pain at week 6 and month 6 was significantly better following steroid than PRP injection. The difference in the Constant score was greater than the mean clinically important difference of 10.4. Nonetheless, the 2 groups were comparable for improvement in ROM of the shoulder. Steroid injection was more effective than PRP injection for treatment of SIS in terms of the Constant score and VAS for pain at 6 weeks and 6 months.

  2. Microstrip Ring Resonator for Soil Moisture Measurements

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Li, Eric S.

    1993-01-01

    Accurate determination of spatial soil moisture distribution and monitoring its temporal variation have a significant impact on the outcomes of hydrologic, ecologic, and climatic models. Development of a successful remote sensing instrument for soil moisture relies on accurate knowledge of the relationship of the soil dielectric constant (ε_soil) to its moisture content. Two existing methods for measurement of the dielectric constant of soil at low and high frequencies are, respectively, time domain reflectometry and the reflection coefficient measurement using an open-ended coaxial probe. The major shortcoming of these methods is the lack of accurate determination of the imaginary part of ε_soil. In this paper a microstrip ring resonator is proposed for the accurate measurement of the soil dielectric constant. In this technique the microstrip ring resonator is placed in contact with the soil medium, and the real and imaginary parts of ε_soil are determined from the changes in the resonant frequency and the quality factor of the resonator, respectively. The solution of the electromagnetic problem is obtained using a hybrid approach based on the method of moments solution of the quasi-static formulation in conjunction with experimental data obtained from reference dielectric samples. Also a simple inversion algorithm for ε_soil = ε′_r + jε″_r based on regression analysis is obtained. It is shown that the wide dynamic range of the measured quantities provides excellent accuracy in the dielectric constant measurement. A prototype microstrip ring resonator at L-band is designed, and measurements of soil with different moisture contents are presented and compared with other approaches.

  3. The variance of the locally measured Hubble parameter explained with different estimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk

    We study the expected variance of measurements of the Hubble constant, H_0, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N-body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variance than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H_0 from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent-level determination of the Hubble constant in the local universe.

  4. Multiverse understanding of cosmological coincidences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bousso, Raphael; Hall, Lawrence J.; Nomura, Yasunori

    2009-09-15

    There is a deep cosmological mystery: although dependent on very different underlying physics, the time scales of structure formation, of galaxy cooling (both radiatively and against the CMB), and of vacuum domination do not differ by many orders of magnitude, but are all comparable to the present age of the universe. By scanning four landscape parameters simultaneously, we show that this quadruple coincidence is resolved. We assume only that the statistical distribution of parameter values in the multiverse grows towards certain catastrophic boundaries we identify, across which there are drastic regime changes. We find order-of-magnitude predictions for the cosmological constant, the primordial density contrast, the temperature at matter-radiation equality, the typical galaxy mass, and the age of the universe, in terms of the fine structure constant and the electron, proton and Planck masses. Our approach permits a systematic evaluation of measure proposals; with the causal patch measure, we find no runaway of the primordial density contrast and the cosmological constant to large values.

  5. Determination of acid ionization constants for weak acids by osmometry and the instrumental analysis self-evaluation feedback approach to student preparation of solutions

    NASA Astrophysics Data System (ADS)

    Kakolesha, Nyanguila

    One focus of this work was to develop an alternative method to conductivity for determining the acid ionization constants. Computer-controlled osmometry is one of the emerging analytical tools in industrial research and clinical laboratories. It is slowly finding its way into chemistry laboratories. The instrument's microprocessor control ensures shortened data collection time, repeatability, accuracy, and automatic calibration. The equilibrium constants of acetic acid, chloroacetic acid, bromoacetic acid, cyanoacetic acid, and iodoacetic acid have been measured using osmometry and their values compared with the existing literature values obtained, usually, from conductometric measurements. Ionization constants determined by osmometry for the moderately strong weak acids were in reasonably good agreement with literature values. The results showed that two factors, the ionic strength and the osmotic coefficient, exert opposite effects in solutions of such weak acids. Another focus of the work was analytical chemistry students' solution preparation skills. The prevailing teacher-structured experiments leave little room for students' ingenuity in quantitative volumetric analysis. The purpose of this part of the study was to improve students' skills in making solutions using instrument feedback in a constructivist-learning model. After making some solutions by weighing and dissolving solutes or by serial dilution, students used the spectrophotometer and the osmometer to compare their solutions with standard solutions. Students perceived the instrument feedback as a nonthreatening approach to monitoring the development of their skill levels and liked to clarify their understanding through interacting with an instructor-observer. An assessment of the instrument feedback and the constructivist model indicated that students would assume responsibility for their own learning if given the opportunity. This study involved 167 students enrolled in Quantitative Chemical Analysis from fall 1998 through spring 2001. Surveys eliciting students' reactions to the instrument feedback approach showed an overwhelmingly positive response. The results of this research demonstrated that self-evaluation with instrumental feedback was a useful tool in helping students apply the knowledge they have acquired in lectures to the practice of chemistry. A demographic survey to determine whether part-time or full-time jobs had a negative impact on their experiment grades showed a small but significant correlation between hours worked and grade earned. However, the study showed that grades students earned on this experiment were predictive of overall semester lab grades.

  6. Tibiofemoral wear in standard and non-standard squat: implication for total knee arthroplasty.

    PubMed

    Fekete, Gusztáv; Sun, Dong; Gu, Yaodong; Neis, Patric Daniel; Ferreira, Ney Francisco; Innocenti, Bernardo; Csizmadia, Béla M

    2017-01-01

    Due to more resilient biomaterials, problems related to wear in total knee replacements (TKRs) have decreased but not disappeared. Among the design-related factors, wear is still the second most important mechanical factor that limits the lifetime of TKRs, and it is also highly influenced by the local kinematics of the knee. During wear experiments, a constant load and slide-roll ratio are frequently applied in tribo-tests besides other important parameters. Nevertheless, numerous studies have demonstrated that a constant slide-roll ratio is not an accurate approach if TKR wear is modelled, while instead of a constant load, a flexion-angle-dependent tibiofemoral force should be included in the wear model to obtain realistic results. A new analytical wear model, based upon Archard's law, is introduced, which can determine the effect of the tibiofemoral force and the varying slide-roll on wear in the tibiofemoral connection under standard and non-standard squat movement. The calculated total wear with constant slide-roll during standard squat was 5.5 times higher compared to the reference value, while if total wear includes varying slide-roll during standard squat, the calculated wear was approximately 6.25 times higher. With regard to non-standard squat, total wear with constant slide-roll was 4.16 times higher than the reference value. If total wear included varying slide-roll, the calculated wear was approximately 4.75 times higher. It was demonstrated that the augmented force parameter alone caused a 65% higher wear volume, while the slide-roll ratio itself increased wear volume by 15% compared to the reference value. These results indicate that the force component has the major effect on wear propagation and that non-standard squat should be proposed for TKR patients as a rehabilitation exercise.
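
    A schematic numerical integration of an Archard-type wear law with a flexion-dependent force and a varying slide-roll ratio is sketched below; the functional forms, constants, and geometry are placeholders for illustration, not the model of this study.

        # Schematic Archard-type wear accumulation over a squat flexion cycle:
        # dV = k * F(theta) * ds_slide(theta), with placeholder inputs.
        import numpy as np

        k_wear = 1.0e-7                                      # wear factor, mm^3 per (N*mm), assumed
        theta = np.linspace(0.0, np.radians(120.0), 500)     # flexion angle

        force = 400.0 + 1800.0 * np.sin(theta / 2) ** 2      # tibiofemoral force, N (placeholder)
        slide_roll = 0.2 + 0.6 * theta / theta[-1]           # varying slide-roll ratio (placeholder)
        contact_arc = 40.0 * np.gradient(theta)              # incremental contact travel, mm (R = 40 mm assumed)

        wear_volume = np.sum(k_wear * force * slide_roll * contact_arc)
        print(f"wear per squat cycle ~ {wear_volume:.2e} mm^3")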

  7. Tibiofemoral wear in standard and non-standard squat: implication for total knee arthroplasty

    PubMed Central

    Sun, Dong; Gu, Yaodong; Neis, Patric Daniel; Ferreira, Ney Francisco; Innocenti, Bernardo; Csizmadia, Béla M.

    2017-01-01

    Summary Introduction Due to the more resilient biomaterials, problems related to wear in total knee replacements (TKRs) have decreased but not disappeared. In the design-related factors, wear is still the second most important mechanical factor that limits the lifetime of TKRs and it is also highly influenced by the local kinematics of the knee. During wear experiments, constant load and slide-roll ratio is frequently applied in tribo-tests beside other important parameters. Nevertheless, numerous studies demonstrated that constant slide-roll ratio is not accurate approach if TKR wear is modelled, while instead of a constant load, a flexion-angle dependent tibiofemoral force should be involved into the wear model to obtain realistic results. Methods A new analytical wear model, based upon Archard’s law, is introduced, which can determine the effect of the tibiofemoral force and the varying slide-roll on wear between the tibiofemoral connection under standard and non-standard squat movement. Results The calculated total wear with constant slide-roll during standard squat was 5.5 times higher compared to the reference value, while if total wear includes varying slide-roll during standard squat, the calculated wear was approximately 6.25 times higher. With regard to non-standard squat, total wear with constant slide-roll during standard squat was 4.16 times higher than the reference value. If total wear included varying slide-roll, the calculated wear was approximately 4.75 times higher. Conclusions It was demonstrated that the augmented force parameter solely caused 65% higher wear volume while the slide-roll ratio itself increased wear volume by 15% higher compared to the reference value. These results state that the force component has the major effect on wear propagation while non-standard squat should be proposed for TKR patients as rehabilitation exercise. PMID:29721453

  8. Evaluation of synthesized voice approach callouts /SYNCALL/

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1981-01-01

    The two basic approaches to the generation of 'synthesized' speech are the utilization of analog recorded human speech and the construction of speech entirely from algorithms applied to constants describing speech sounds. Given the availability of synthesized speech displays for man-machine systems, research is needed to study suggested applications for speech and design principles for speech displays. The present investigation is concerned with a study for which new performance measures were developed. A number of air carrier approach and landing accidents during low or impaired visibility have been associated with the absence of approach callouts. The purpose of the study was to compare a pilot-not-flying (PNF) approach callout system to a system composed of PNF callouts augmented by an automatic synthesized voice callout system (SYNCALL). Pilots were found to favor the use of a SYNCALL system containing certain modifications.

  9. A novel scaling approach for sooting laminar coflow flames at elevated pressures

    NASA Astrophysics Data System (ADS)

    Abdelgadir, Ahmed; Steinmetz, Scott A.; Attili, Antonio; Bisetti, Fabrizio; Roberts, William L.

    2016-11-01

    Laminar coflow diffusion flames are often used to study soot formation at elevated pressures due to their well-characterized configuration. In these experiments, the flames are operated at constant mass flow rate (constant Reynolds number) at increasing pressures. Due to the effect of gravity, the flame shape changes and, as a result, the mixing field changes, which in turn has a great effect on soot formation. In this study, a novel scaling approach for the flame at different pressures is proposed. In this approach, both the Reynolds and Grashof numbers are kept constant so that the effect of gravity is the same at all pressures. In order to keep the Grashof number constant, the diameter of the nozzle is modified as pressure varies. We report both numerical and experimental data proving that this approach guarantees the same nondimensional flow fields over a broad range of pressures. In the range of conditions studied, the Damkoehler number, which varies when both Reynolds and Grashof numbers are kept constant, is shown to play a minor role. Hence, a set of suitable flames for investigating soot formation at elevated pressure is identified. This research made use of the resources of IT Research Computing at King Abdullah University of Science & Technology (KAUST), Saudi Arabia.
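
    Assuming ideal-gas density proportional to pressure and an approximately pressure-independent viscosity, holding both Re and Gr constant implies that the nozzle diameter must shrink roughly as p^(-2/3) while the exit velocity is adjusted to preserve Re; the short sketch below tabulates this assumed scaling for illustrative values, which should be checked against the paper.

        # Back-of-envelope scaling: d ~ p^(-2/3) keeps Gr ~ rho^2 * d^3 constant,
        # and u is then chosen so that Re ~ rho * u * d stays constant.
        p_ratio = [1, 2, 4, 8, 16]       # pressure relative to the baseline
        d0 = 4.0                         # baseline nozzle diameter, mm (assumed)
        for p in p_ratio:
            d = d0 * p ** (-2.0 / 3.0)
            u_ratio = 1.0 / (p * (d / d0))
            print(f"p/p0={p:2d}  d={d:.2f} mm  u/u0={u_ratio:.2f}")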

  10. [Radial Approach for Percutaneous Coronary Interventions in Patients With Ischemic Heart Disease: Advantages and Disadvantages, Complications Rate in Comparison With Femoral Approach].

    PubMed

    Fettser, D V; Batyraliev, T A; Pershukov, I V; Vanyukov, A E; Sidorenko, B A

    2017-05-01

    During the past 10-15 years, percutaneous coronary interventions (PCI) have reached a new level of efficacy and safety. The rate of serious coronary complications has decreased, which increasingly exposes the problem of peripheral complications at the site of arterial access. At the same time, the proportion of patients older than 75 years in the total pool of PCI patients constantly increases. The number of patients with pronounced obesity also grows each year. The radial approach for PCI allows a substantial decrease in the rate of peripheral complications, through a lowered rate of bleedings, and shortens the duration of hospitalization. In this literature review we present results of a number of relevant clinical studies, including those which contained groups of elderly patients and of patients with obesity. We also summarize the main advantages and disadvantages of the radial approach as compared with the femoral approach for coronary angiography and PCI.

  11. In-situ preparation of hierarchical flower-like TiO2/carbon nanostructures as fillers for polymer composites with enhanced dielectric properties

    PubMed Central

    Xu, Nuoxin; Zhang, Qilong; Yang, Hui; Xia, Yuting; Jiang, Yongchang

    2017-01-01

    Novel three-dimensional hierarchical flower-like TiO2/carbon (TiO2/C) nanostructures were in-situ synthesized via a solvothermal method involving calcination of an organic precursor under inert atmosphere. Composite films comprised of P(VDF-HFP) and the as-prepared hierarchical flower-like TiO2/C were fabricated by a solution casting and hot-pressing approach. The results reveal that loading the fillers with a small amount of carbon is an effective way to improve the dielectric constant and suppress the dielectric loss. In addition, TiO2/C particles with higher carbon contents exhibit superiority in promoting the dielectric constants of composites when compared with their noncarbon counterparts. For instance, the highest dielectric constant (330.6) of the TiO2/C composites is 10 times that of noncarbon-TiO2-filled ones at the same filler volume fraction, and 32 times that of pristine P(VDF-HFP). The enhancement in the dielectric constant can be attributed to the formation of a large network, which is composed of local micro-capacitors with carbon particles as electrodes and TiO2 as the dielectric in between. PMID:28262766

  12. Stochastic approach for an unbiased estimation of the probability of a successful separation in conventional chromatography and sequential elution liquid chromatography.

    PubMed

    Ennis, Erin J; Foley, Joe P

    2016-07-15

    A stochastic approach was utilized to estimate the probability of a successful isocratic or gradient separation in conventional chromatography for numbers of sample components, peak capacities, and saturation factors ranging from 2 to 30, 20-300, and 0.017-1, respectively. The stochastic probabilities were obtained under conditions of (i) constant peak width ("gradient" conditions) and (ii) peak width increasing linearly with time ("isocratic/constant N" conditions). The isocratic and gradient probabilities obtained stochastically were compared with the probabilities predicted by Martin et al. [Anal. Chem., 58 (1986) 2200-2207] and Davis and Stoll [J. Chromatogr. A, (2014) 128-142]; for a given number of components and peak capacity the same trend is always observed: probability obtained with the isocratic stochastic approach
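
    A hedged Monte Carlo sketch of the underlying idea for the constant-peak-width case: component retention times are dropped uniformly into the separation window, and a trial counts as a success when every adjacent pair is at least one peak width (window divided by peak capacity) apart. This illustrates the concept only and is not the published procedure.

        # Monte Carlo estimate of the probability that m randomly placed
        # components are all resolved at a given peak capacity (constant width).
        import numpy as np

        def p_success(m, peak_capacity, trials=20000, rng=np.random.default_rng(2)):
            width = 1.0 / peak_capacity
            wins = 0
            for _ in range(trials):
                t = np.sort(rng.random(m))
                if np.all(np.diff(t) >= width):
                    wins += 1
            return wins / trials

        print(p_success(m=10, peak_capacity=100))
        print(p_success(m=30, peak_capacity=300))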

  13. A direct comparison of U.S. Environmental Protection Agency's method 304B and batch tests for determining activated-sludge biodegradation rate constants for volatile organic compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, M.L.; Wilcox, M.E.; Compernolle, R. van

    Biodegradation rate constants for volatile organic compounds (VOCs) in activated-sludge systems are needed to quantify emissions. One current U.S. Environmental Protection Agency method for determining a biodegradation rate constant is Method 304B. In this approach, a specific activated-sludge unit is simulated by a continuous biological treatment system with a sealed headspace. Batch experiments, however, can be alternatives to Method 304B. Two of these batch methods are the batch test that uses oxygen addition (BOX) and the serum bottle test (SBT). In this study, Method 304B was directly compared to BOX and SBT experiments. A pilot-scale laboratory reactor was constructed to serve as the Method 304B unit. Biomass from the unit was also used to conduct BOX and modified SBT experiments (modification involved use of a sealed draft-tube reactor with a headspace recirculation pump instead of a serum bottle) for 1,2-dichloroethane, diisopropyl ether, methyl tertiary butyl ether, and toluene. Three experimental runs--each consisting of one Method 304B experiment, one BOX experiment, and one modified SBT experiment--were completed. The BOX and SBT data for each run were analyzed using a Monod model, and best-fit biodegradation kinetic parameters were determined for each experiment, including a first-order biodegradation rate constant (K1). Experimental results suggest that for readily biodegradable VOCs the two batch techniques can provide improved means of determining biodegradation rate constants compared with Method 304B. In particular, these batch techniques avoid the Method 304B problem associated with steady-state effluent concentrations below analytical detection limits. However, experimental results also suggest that the two batch techniques should not be used to determine biodegradation rate constants for slowly degraded VOCs (i.e., K1 < 0.1 L/g VSS-h).

  14. Synthesis and Study of Optical Properties of Graphene/TiO2 Composites Using UV-VIS Spectroscopy

    NASA Astrophysics Data System (ADS)

    Rathod, P. B.; Waghuley, S. A.

    2016-09-01

    Graphene and TiO2 were synthesized using electrochemical exfoliation and co-precipitation methods, respectively. An ex situ approach was adopted for the graphene/TiO2 composites. The presence of graphene in the TiO2 samples was confirmed through X-ray diffraction. Optical properties of the as-synthesised composites such as optical absorption, extinction coefficient, refractive index, real dielectric constant, imaginary dielectric constant, dissipation factor, and optical conductivity were measured using UV-Vis spectroscopy. The varying concentration of graphene in TiO2 affects the optical properties, which differ for the 10 wt.% composite as compared to the 5 wt.% graphene/TiO2 composite. The composites exhibit an absorption peak at 300 nm with a decrease in band gap for 10 wt.% as compared to 5 wt.% graphene/TiO2 composite. The maximum optical conductivity for the 10 wt.% graphene/TiO2 composite was found to be 1.86·10⁻² Ω⁻¹·m⁻¹ at 300 nm.

  15. Four-Component Relativistic Density-Functional Theory Calculations of Nuclear Spin-Rotation Constants: Relativistic Effects in p-Block Hydrides.

    PubMed

    Komorovsky, Stanislav; Repisky, Michal; Malkin, Elena; Demissie, Taye B; Ruud, Kenneth

    2015-08-11

    We present an implementation of the nuclear spin-rotation (SR) constants based on the relativistic four-component Dirac-Coulomb Hamiltonian. This formalism has been implemented in the framework of the Hartree-Fock and Kohn-Sham theory, allowing assessment of both pure and hybrid exchange-correlation functionals. In the density-functional theory (DFT) implementation of the response equations, a noncollinear generalized gradient approximation (GGA) has been used. The present approach enforces a restricted kinetic balance condition for the small-component basis at the integral level, leading to very efficient calculations of the property. We apply the methodology to study relativistic effects on the spin-rotation constants by performing calculations on XHn (n = 1-4) for all elements X in the p-block of the periodic table and comparing the effects of relativity on the nuclear SR tensors to that observed for the nuclear magnetic shielding tensors. Correlation effects as described by the density-functional theory are shown to be significant for the spin-rotation constants, whereas the differences between the use of GGA and hybrid density functionals are much smaller. Our calculated relativistic spin-rotation constants at the DFT level of theory are only in fair agreement with available experimental data. It is shown that the scaling of the relativistic effects for the spin-rotation constants (varying between Z^3.8 and Z^4.5) is as strong as for the chemical shieldings but with a much smaller prefactor.

  16. Modeling and Control for Microgrids

    NASA Astrophysics Data System (ADS)

    Steenis, Joel

    Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open and closed loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, gain scheduled Glover McFarlane controller, and bumpless transfer controller which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.

  17. Estimation of the Operating Characteristics when the Test Information of the Old Test is not Constant II: Simple Sum Procedure of the Conditional P.D.F. Approach/Normal Approach Method Using Three Subtests of the Old Test. Research Report 80-4.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    The rationale behind the method of estimating the operating characteristics of discrete item responses when the test information of the Old Test is not constant was presented previously. In the present study, two subtests of the Old Test, i.e., Subtests 1 and 2, each of which has a different non-constant test information function, are used in…

  18. Extracting Diffusion Constants from Echo-Time-Dependent PFG NMR Data Using Relaxation-Time Information

    NASA Astrophysics Data System (ADS)

    Vandusschoten, D.; Dejager, P. A.; Vanas, H.

    Heterogeneous (bio)systems are often characterized by several water-containing compartments that differ in relaxation time values and diffusion constants. Because of the relatively small differences among these diffusion constants, nonoptimal measuring conditions easily lead to the conclusion that a single diffusion constant suffices to describe the water mobility in a heterogeneous (bio)system. This paper demonstrates that the combination of a T2 measurement and diffusion measurements at various echo times (TE), based on the PFG MSE sequence, enables the accurate determination of diffusion constants which are less than a factor of 2 apart. This new method gives errors of the diffusion constant below 10% when two fractions are present, while the standard approach of a biexponential fit to the diffusion data in identical circumstances gives larger (>25%) errors. On application of this approach to water in apple parenchyma tissue, the diffusion constant of water in the vacuole of the cells (D = 1.7 × 10⁻⁹ m²/s) can be distinguished from that of the cytoplasm (D = 1.0 × 10⁻⁹ m²/s). Also, for mung bean seedlings, the cell size determined by PFG MSE measurements increased from 65 to 100 μm when the echo time increased from 150 to 900 ms, demonstrating that the interpretation of PFG SE data used to investigate cell sizes is strongly dependent on the T2 values of the fractions within the sample. Because relaxation times are used to discriminate the diffusion constants, we propose to name this approach diffusion analysis by relaxation-time-separated (DARTS) PFG NMR.
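
    The idea can be summarized by the signal model S(b, TE) = Σ_i f_i·exp(-TE/T2_i)·exp(-b·D_i): fixing the T2_i from a separate relaxation measurement changes the effective compartment weights with echo time, which is what makes nearby diffusion constants separable. The sketch below evaluates this model with illustrative numbers, not data from the study.

        # Two-compartment PFG attenuation at different echo times; the long-TE
        # data are dominated by the long-T2 compartment, separating D values.
        import numpy as np

        f = np.array([0.7, 0.3])                    # compartment fractions (assumed)
        T2 = np.array([0.9, 0.15])                  # s, long vs. short T2 (assumed)
        D = np.array([1.7e-9, 1.0e-9])              # m^2/s

        def signal(b, te):
            return np.sum(f * np.exp(-te / T2) * np.exp(-b * D))

        b_values = np.array([0.0, 2e8, 5e8, 1e9])   # s/m^2
        for te in (0.15, 0.9):
            print(te, [round(signal(b, te) / signal(0.0, te), 3) for b in b_values])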

  19. Mental structures and hierarchical brain processing. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    NASA Astrophysics Data System (ADS)

    Petkov, C. I.

    2014-09-01

    Fitch proposes an appealing hypothesis that humans are dendrophiles, who constantly build mental trees supported by analogous hierarchical brain processes [1]. Moreover, it is argued that, by comparison, nonhuman animals build flat or more compact behaviorally-relevant structures. Should we thus expect less impressive hierarchical brain processes in other animals? Not necessarily.

  20. Nonlinear Symplectic Attitude Estimation for Small Satellites

    DTIC Science & Technology

    2006-08-01

    Vol. 45, No. 3, 2000, pp. 477-482. Gelb, A. (editor), Applied Optimal Estimation, The M.I.T. Press, Cambridge, MA, 1974. Brown, R. G. and Hwang, P. Y. ... demonstrate orders of magnitude improvement in state and constants-of-motion estimation when compared to extended and iterative Kalman methods ... satellites have fallen into the former category, including the ubiquitous Extended Kalman Filter (EKF). ... While this approach has been used

  1. The battle against violence in U.S. hospitals: an analysis of the recent IAHSS Foundation's healthcare crime surveys.

    PubMed

    Vellani, Karim H

    2016-10-01

    In this article, the author analyzes the possible reasons for the reported drop in hospital violence in the 2016 IAHSS Crime Survey compared to previous surveys. He also reviews the one statistic that has remained constant in all the recent crime surveys and recommends an approach in violence prevention programs that may prove successful in reducing workplace violence and staff injuries.

  2. CORRELATIONS IN LIGHT FROM A LASER AT THRESHOLD,

    DTIC Science & Technology

    Temporal correlations in the electromagnetic field radiated by a laser in the threshold region of oscillation (from one tenth of threshold intensity to ten times threshold) were measured by photoelectron counting techniques. The experimental results were compared with theoretical predictions based ... shows that the intensity fluctuations at about one tenth threshold are nearly those of a Gaussian field and continuously approach those of a constant amplitude field as the intensity is increased. (Author)

  3. Elastic constants of random solid solutions by SQS and CPA approaches: the case of fcc Ti-Al.

    PubMed

    Tian, Li-Yun; Hu, Qing-Miao; Yang, Rui; Zhao, Jijun; Johansson, Börje; Vitos, Levente

    2015-08-12

    Special quasi-random structure (SQS) and coherent potential approximation (CPA) are techniques widely employed in the first-principles calculations of random alloys. Here we scrutinize these approaches by focusing on the local lattice distortion (LLD) and the crystal symmetry effects. We compare the elastic parameters obtained from SQS and CPA calculations, taking the random face-centered cubic (fcc) Ti(1-x)Al(x) (0 ≤ x ≤ 1) alloy as an example of systems with components showing different electronic structures and bonding characteristics. For the CPA and SQS calculations, we employ the Exact Muffin-Tin Orbitals (EMTO) method and the pseudopotential method as implemented in the Vienna Ab initio Simulation Package (VASP), respectively. We show that the predicted trends of the VASP-SQS and EMTO-CPA parameters against composition are in good agreement with each other. The energy associated with the LLD increases with x up to x = 0.625 ~ 0.750 and drops drastically thereafter. The influence of the LLD on the lattice constants and the C12 elastic constant is negligible. C11 and C44 decrease after atomic relaxation for alloys with large LLD; however, the trends of C11 and C44 are not significantly affected. In general, the uncertainties in the elastic parameters associated with the symmetry lowering turn out to be larger than the differences between the two techniques, including the effect of LLD.

  4. A time-based potential step analysis of electrochemical impedance incorporating a constant phase element: a study of commercially pure titanium in phosphate buffered saline.

    PubMed

    Ehrensberger, Mark T; Gilbert, Jeremy L

    2010-05-01

    The measurement of electrochemical impedance is a valuable tool to assess the electrochemical environment that exists at the surface of metallic biomaterials. This article describes the development and validation of a new technique, potential step impedance analysis (PSIA), to assess the electrochemical impedance of materials whose interface with solution can be modeled as a simplified Randles circuit that is modified with a constant phase element. PSIA is based upon applying a step change in voltage to a working electrode and analyzing the subsequent current transient response in a combined time and frequency domain technique. The solution resistance, polarization resistance, and interfacial capacitance are found directly in the time domain. The experimental current transient is numerically transformed to the frequency domain to determine the constant phase exponent, alpha. This combined time and frequency approach was tested using current transients generated from computer simulations, from resistor-capacitor breadboard circuits, and from commercially pure titanium samples immersed in phosphate buffered saline and polarized at -800 mV or +1000 mV versus Ag/AgCl. It was shown that PSIA calculates equivalent admittance and impedance behavior over this range of potentials when compared to standard electrochemical impedance spectroscopy. This current transient approach characterizes the frequency response of the system without the need for expensive frequency response analyzers or software. Copyright 2009 Wiley Periodicals, Inc.

  5. Dielectric properties of organic solvents from non-polarizable molecular dynamics simulation with electronic continuum model and density functional theory.

    PubMed

    Lee, Sanghun; Park, Sung Soo

    2011-11-03

    Dielectric constants of electrolytic organic solvents are calculated employing the nonpolarizable Molecular Dynamics simulation with Electronic Continuum (MDEC) model and Density Functional Theory. The molecular polarizabilities are obtained at the B3LYP/6-311++G(d,p) level of theory to estimate high-frequency refractive indices, while the densities and dipole moment fluctuations are computed using nonpolarizable MD simulations. The dielectric constants obtained from this procedure are shown to provide a reliable estimate of the experimental data. In addition, two representative solvents with similar molecular weights but different dielectric properties, ethyl methyl carbonate and propylene carbonate, are compared using MD simulations, and distinctly different dielectric behaviors are observed at short times as well as at long times.
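
    For orientation only, the sketch below evaluates the standard dipole-fluctuation estimate of the static dielectric constant from a series of simulation-cell dipole moments, assuming conducting (tinfoil) boundary conditions; it does not include the electronic-continuum scaling specific to the MDEC model, and the input array M is a hypothetical trajectory output in units of C·m.

        import numpy as np

        def dielectric_constant(M, volume_m3, temperature_K):
            """Fluctuation formula eps_r = 1 + (<M^2> - <M>^2) / (3 eps0 V kB T),
            valid for conducting (tinfoil) boundary conditions."""
            eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
            kB = 1.380649e-23         # Boltzmann constant, J/K
            M = np.asarray(M)                          # shape (n_frames, 3), C*m
            mean_M = M.mean(axis=0)
            fluct = (M * M).sum(axis=1).mean() - np.dot(mean_M, mean_M)
            return 1.0 + fluct / (3.0 * eps0 * volume_m3 * kB * temperature_K)

        # Hypothetical usage with a pre-computed dipole-moment trajectory M_traj:
        # eps = dielectric_constant(M_traj, volume_m3=2.7e-26, temperature_K=298.15)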

  6. Incomplete Data in Smart Grid: Treatment of Values in Electric Vehicle Charging Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majipour, Mostafa; Chu, Peter; Gadh, Rajit

    2014-11-03

    In this paper, five imputation methods, namely Constant (zero), Mean, Median, Maximum Likelihood, and Multiple Imputation, have been applied to compensate for missing values in Electric Vehicle (EV) charging data. The outcome of each of these methods has been used as the input to a prediction algorithm to forecast the EV load in the next 24 hours at each individual outlet. The data is real-world data at the outlet level from the UCLA campus parking lots. Given the sparsity of the data, both Median and Constant (=zero) imputations improved the prediction results. Since in most missing-value cases in our database all values of that instance are missing, the multivariate imputation methods did not improve the results significantly compared to univariate approaches.
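
    A minimal sketch of the univariate imputations mentioned above (zero, mean, median) using pandas on a hypothetical outlet-level load series; the maximum-likelihood and multiple-imputation variants require a dedicated package and are not reproduced here.

        import numpy as np
        import pandas as pd

        # Hypothetical 15-minute EV charging load (kW) for one outlet, with gaps (NaN)
        load = pd.Series([0.0, 1.2, np.nan, 3.3, np.nan, np.nan, 0.8, 0.0])

        imputed = pd.DataFrame({
            "zero":   load.fillna(0.0),           # Constant (zero) imputation
            "mean":   load.fillna(load.mean()),   # Mean imputation
            "median": load.fillna(load.median()), # Median imputation
        })
        print(imputed)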

  7. Virial coefficients and demixing in the Asakura-Oosawa model.

    PubMed

    López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés; Yuste, Santos B; Fiumara, Giacomo; Saija, Franz

    2015-01-07

    The problem of demixing in the Asakura-Oosawa colloid-polymer model is considered. The critical constants are computed using truncated virial expansions up to fifth order. While the exact analytical results for the second and third virial coefficients are known for any size ratio, analytical results for the fourth virial coefficient are provided here, and fifth virial coefficients are obtained numerically for particular size ratios using standard Monte Carlo techniques. We have computed the critical constants by successively considering the truncated virial series up to the second, third, fourth, and fifth virial coefficients. The results for the critical colloid and (reservoir) polymer packing fractions are compared with those that follow from available Monte Carlo simulations in the grand canonical ensemble. Limitations and perspectives of this approach are pointed out.
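
    The critical point of a virial series truncated after the third coefficient follows from the usual inflection conditions dP/drho = d2P/drho2 = 0; the sketch below solves them for an entirely hypothetical temperature dependence B2(T) = b0 - a/T with constant B3, purely to show the mechanics (it is not the Asakura-Oosawa calculation).

        import numpy as np
        from scipy.optimize import brentq

        # Hypothetical virial coefficients (reduced units)
        a, b0, B3 = 5.0, 1.0, 0.4
        B2 = lambda T: b0 - a / T

        # For P = kT*(rho + B2*rho^2 + B3*rho^3):
        #   d2P/drho2 = 0  ->  rho_c = -B2 / (3*B3)
        #   dP/drho   = 0  ->  B2(Tc)^2 = 3*B3, with the negative root B2(Tc) = -sqrt(3*B3)
        Tc = brentq(lambda T: B2(T) + np.sqrt(3.0 * B3), 0.1, 10.0)
        rho_c = -B2(Tc) / (3.0 * B3)
        print(f"Tc = {Tc:.4f}, rho_c = {rho_c:.4f}")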

  8. Determination of the molecular complexation constant between alprostadil and alpha-cyclodextrin by conductometry: implications for a freeze-dried formulation.

    PubMed

    Sheehy, Philip M; Ramstad, Tore

    2005-10-04

    The binding constant between alprostadil (PGE1) and alpha-cyclodextrin (alpha-CD) was determined at four temperatures using conductance measurements. Alpha-cyclodextrin is an excipient material in Caverject dual chamber syringe (DCS) that was added to enhance stability. The binding constant was used to calculate the amount of PGE1 free upon reconstitution and injection, since only the free drug is clinically active. The conductivity measurement is based on a decrease in specific conductance as alprostadil is titrated with alpha-CD. The change in conductivity was plotted versus free ligand concentration (alpha-CD) to generate a binding curve. As the value of the binding constant proved to be dependent on substrate concentration, it is really a pseudo binding constant. A value of 742+/-60 M(-1) was obtained for a 0.5 mM solution of alprostadil at 27 degrees C and a value of 550+/-52 M(-1) at 37 degrees C. These results compare favorably to values previously obtained by NMR and capillary electrophoresis. Calculation of the fraction of PGE1 free upon reconstitution and injection shows it to approach the desired outcome of one. Hence, the amount of drug delivered by Caverject DCS is nominally equivalent to that delivered by Caverject S. Po., a predecessor product that contains no alpha-cyclodextrin.
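
    A hedged sketch of how a pseudo 1:1 binding constant could be extracted from such a titration: the change in conductance is fitted to a simple Langmuir-type isotherm against free ligand concentration. The data values and helper name below are hypothetical, not the paper's.

        import numpy as np
        from scipy.optimize import curve_fit

        def binding_curve(L_free, K, dY_max):
            # 1:1 complexation: observed change = dY_max * K*[L] / (1 + K*[L])
            return dY_max * K * L_free / (1.0 + K * L_free)

        # Hypothetical titration data: free alpha-CD concentration (M) versus
        # change in specific conductance (arbitrary units)
        L_free = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3
        dY     = np.array([0.11, 0.24, 0.40, 0.58, 0.73, 0.84])

        (K, dY_max), _ = curve_fit(binding_curve, L_free, dY, p0=[500.0, 1.0])
        print(f"K = {K:.0f} M^-1, dY_max = {dY_max:.2f}")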

  9. Implementing an Equilibrium Law Teaching Sequence for Secondary School Students to Learn Chemical Equilibrium

    ERIC Educational Resources Information Center

    Ghirardi, Marco; Marchetti, Fabio; Pettinari, Claudio; Regis, Alberto; Roletto, Ezio

    2015-01-01

    A didactic sequence is proposed for the teaching of chemical equilibrium law. In this approach, we have avoided the kinetic derivation and the thermodynamic justification of the equilibrium constant. The equilibrium constant expression is established empirically by a trial-and-error approach. Additionally, students learn to use the criterion of…
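
    The trial-and-error idea can be mimicked numerically: for a hypothetical reaction A + B <=> C, several candidate "equilibrium expressions" are evaluated over a set of equilibrium mixtures, and the one whose value stays (nearly) constant is retained. All concentrations below are invented for illustration.

        import numpy as np

        # Hypothetical equilibrium concentrations (mol/L) from several experiments
        A = np.array([0.10, 0.20, 0.05, 0.30])
        B = np.array([0.10, 0.05, 0.20, 0.10])
        C = 4.0 * A * B          # data consistent with K = [C]/([A][B]) = 4

        candidates = {
            "[C]/([A][B])":   C / (A * B),
            "[C]/([A]+[B])":  C / (A + B),
            "[C]^2/([A][B])": C**2 / (A * B),
        }
        for label, values in candidates.items():
            spread = values.std() / values.mean()     # relative spread
            print(f"{label:16s} mean = {values.mean():6.3f}  rel. spread = {spread:.2f}")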

  10. Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2011-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…

  11. A variable mixing-length ratio for convection theory

    NASA Technical Reports Server (NTRS)

    Chan, K. L.; Wolff, C. L.; Sofia, S.

    1981-01-01

    It is argued that a natural choice for the local mixing length in the mixing-length theory of convection has a value proportional to the local density scale height of the convective bubbles. The resultant variable mixing-length ratio (the ratio between the mixing length and the pressure scale height) of this theory is enhanced in the superadiabatic region and approaches a constant in deeper layers. Numerical tests show that the new mixing length successfully eliminates most of the density inversion that typically plagues conventional results. The new approach also seems to indicate the existence of granular motion at the top of the convection zone.

  12. Communication — Modeling polymer-electrolyte fuel-cell agglomerates with double-trap kinetics

    DOE PAGES

    Pant, Lalit M.; Weber, Adam Z.

    2017-04-14

    A new semi-analytical agglomerate model is presented for polymer-electrolyte fuel-cell cathodes. The model uses double-trap kinetics for the oxygen-reduction reaction, which can capture the observed potential-dependent coverage and Tafel-slope changes. An iterative semi-analytical approach is used to obtain reaction rate constants from the double-trap kinetics, oxygen concentration at the agglomerate surface, and overall agglomerate reaction rate. The analytical method can predict reaction rates within 2% of the numerically simulated values for a wide range of oxygen concentrations, overpotentials, and agglomerate sizes, while saving simulation time compared to a fully numerical approach.

  13. Comparison of Low-Energy Lunar Transfer Trajectories to Invariant Manifolds

    NASA Technical Reports Server (NTRS)

    Anderson, Rodney L.; Parker, Jeffrey S.

    2011-01-01

    In this study, transfer trajectories from the Earth to the Moon that encounter the Moon at various flight path angles are examined, and lunar approach trajectories are compared to the invariant manifolds of selected unstable orbits in the circular restricted three-body problem. Previous work focused on lunar impact and landing trajectories encountering the Moon normal to the surface, and this research extends the problem to different flight path angles in three dimensions. The lunar landing geometry for a range of Jacobi constants is computed, and approaches to the Moon via invariant manifolds from unstable orbits are analyzed for different energy levels.

  14. ENTRAPMENT OF PROTEINS IN GLYCOGEN-CAPPED AND HYDRAZIDE-ACTIVATED SUPPORTS

    PubMed Central

    Jackson, Abby J.; Xuan, Hai; Hage, David S.

    2010-01-01

    A method is described for the entrapment of proteins in hydrazide-activated supports using oxidized glycogen as a capping agent. This approach is demonstrated using human serum albumin (HSA) as a model binding agent. After optimization of this method, a protein content of 43 (± 1) mg HSA/g support was obtained for porous silica. The entrapped HSA supports could retain a low mass drug (S-warfarin) and had activities and equilibrium constants comparable to those for soluble HSA. It was also found that this approach could be used with other proteins and binding agents that had masses between 5.8 and 150 kDa. PMID:20470745

  15. An efficient and accurate framework for calculating lattice thermal conductivity of solids: AFLOW—AAPL Automatic Anharmonic Phonon Library

    NASA Astrophysics Data System (ADS)

    Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Carrete, Jesús; Toher, Cormac; de Jong, Maarten; Asta, Mark; Fornari, Marco; Nardelli, Marco Buongiorno; Curtarolo, Stefano

    2017-10-01

    One of the most accurate approaches for calculating the lattice thermal conductivity, κ, is solving the Boltzmann transport equation starting from third-order anharmonic force constants. In addition to the underlying approximations of ab-initio parameterization, two main challenges are associated with this path: high computational costs and lack of automation in the frameworks using this methodology, which affect the discovery rate of novel materials with ad-hoc properties. Here, the Automatic Anharmonic Phonon Library (AAPL) is presented. It efficiently computes interatomic force constants by making effective use of crystal symmetry analysis, it solves the Boltzmann transport equation to obtain κ, and it allows a fully integrated operation with minimum user intervention, a rational addition to the current high-throughput accelerated materials development framework AFLOW. An "experiment vs. theory" study of the approach is shown, comparing accuracy and speed with respect to other available packages, and for materials characterized by strong electron localization and correlation. Combining AAPL with the pseudo-hybrid functional ACBN0, it is possible to improve accuracy without increasing computational requirements.

  16. Higher success rate with transcranial electrical stimulation of motor-evoked potentials using constant-voltage stimulation compared with constant-current stimulation in patients undergoing spinal surgery.

    PubMed

    Shigematsu, Hideki; Kawaguchi, Masahiko; Hayashi, Hironobu; Takatani, Tsunenori; Iwata, Eiichiro; Tanaka, Masato; Okuda, Akinori; Morimoto, Yasuhiko; Masuda, Keisuke; Tanaka, Yuu; Tanaka, Yasuhito

    2017-10-01

    During spine surgery, the spinal cord is electrophysiologically monitored via transcranial electrical stimulation of motor-evoked potentials (TES-MEPs) to prevent injury. Transcranial electrical stimulation of motor-evoked potential involves the use of either constant-current or constant-voltage stimulation; however, there are few comparative data available regarding their ability to adequately elicit compound motor action potentials. We hypothesized that the success rates of TES-MEP recordings would be similar between constant-current and constant-voltage stimulations in patients undergoing spine surgery. The objective of this study was to compare the success rates of TES-MEP recordings between constant-current and constant-voltage stimulation. This is a prospective, within-subject study. Data from 100 patients undergoing spinal surgery at the cervical, thoracic, or lumbar level were analyzed. The success rates of the TES-MEP recordings from each muscle were examined. Transcranial electrical stimulation with constant-current and constant-voltage stimulations at the C3 and C4 electrode positions (international "10-20" system) was applied to each patient. Compound muscle action potentials were bilaterally recorded from the abductor pollicis brevis (APB), deltoid (Del), abductor hallucis (AH), tibialis anterior (TA), gastrocnemius (GC), and quadriceps (Quad) muscles. The success rates of the TES-MEP recordings from the right Del, right APB, bilateral Quad, right TA, right GC, and bilateral AH muscles were significantly higher using constant-voltage stimulation than those using constant-current stimulation. The overall success rates with constant-voltage and constant-current stimulations were 86.3% and 68.8%, respectively (risk ratio 1.25 [95% confidence interval: 1.20-1.31]). The success rates of TES-MEP recordings were higher using constant-voltage stimulation compared with constant-current stimulation in patients undergoing spinal surgery. Copyright © 2017 Elsevier Inc. All rights reserved.
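
    For readers unfamiliar with the risk-ratio calculation quoted above, the snippet below reproduces the standard point estimate and log-scale confidence interval from success counts; the counts used are hypothetical stand-ins, since the per-recording totals are not given in the abstract.

        import math

        def risk_ratio(successes1, n1, successes2, n2, z=1.96):
            """Risk ratio of group 1 vs. group 2 with a Wald log-scale 95% CI."""
            p1, p2 = successes1 / n1, successes2 / n2
            rr = p1 / p2
            se = math.sqrt((1 - p1) / successes1 + (1 - p2) / successes2)
            lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
            return rr, lo, hi

        # Hypothetical counts (e.g. 1200 muscle recordings per stimulation mode)
        print(risk_ratio(1036, 1200, 826, 1200))   # constant-voltage vs. constant-current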

  17. Nonlinear elastic response of strong solids: First-principles calculations of the third-order elastic constants of diamond

    DOE PAGES

    Hmiel, A.; Winey, J. M.; Gupta, Y. M.; ...

    2016-05-23

    Accurate theoretical calculations of the nonlinear elastic response of strong solids (e.g., diamond) constitute a fundamental and important scientific need for understanding the response of such materials and for exploring the potential synthesis and design of novel solids. However, without corresponding experimental data, it is difficult to select between predictions from different theoretical methods. Recently the complete set of third-order elastic constants (TOECs) for diamond was determined experimentally, and the validity of various theoretical approaches to calculate the same may now be assessed. We report on the use of density functional theory (DFT) methods to calculate the six third-order elastic constants of diamond. Two different approaches based on homogeneous deformations were used: (1) an energy-strain fitting approach using a prescribed set of deformations, and (2) a longitudinal stress-strain fitting approach using uniaxial compressive strains along the [100], [110], and [111] directions, together with calculated pressure derivatives of the second-order elastic constants. The latter approach provides a direct comparison to the experimental results. The TOECs calculated using the energy-strain approach differ significantly from the measured TOECs. In contrast, calculations using the longitudinal stress-uniaxial strain approach show good agreement with the measured TOECs and match the experimental values significantly better than the TOECs reported in previous theoretical studies. Lastly, our results on diamond have demonstrated that, with proper analysis procedures, first-principles calculations can indeed be used to accurately calculate the TOECs of strong solids.

  18. Desorption kinetics of hydrophobic organic chemicals from sediment to water: a review of data and models.

    PubMed

    Birdwell, Justin; Cook, Robert L; Thibodeaux, Louis J

    2007-03-01

    Resuspension of contaminated sediment can lead to the release of toxic compounds to surface waters where they are more bioavailable and mobile. Because the timeframe of particle resettling during such events is shorter than that needed to reach equilibrium, a kinetic approach is required for modeling the release process. Due to the current inability of common theoretical approaches to predict site-specific release rates, empirical algorithms incorporating the phenomenological assumption of biphasic, or fast and slow, release dominate the descriptions of nonpolar organic chemical release in the literature. Two first-order rate constants and one fraction are sufficient to characterize practically all of the data sets studied. These rate constants were compared to theoretical model parameters and functionalities, including chemical properties of the contaminants and physical properties of the sorbents, to determine if the trends incorporated into the hindered diffusion model are consistent with the parameters used in curve fitting. The results did not correspond to the parameter dependence of the hindered diffusion model. No trend in desorption rate constants, for either fast or slow release, was observed to be dependent on K(OC) or aqueous solubility for six and seven orders of magnitude, respectively. The same was observed for aqueous diffusivity and sediment fraction organic carbon. The distribution of kinetic rate constant values was approximately log-normal, ranging from 0.1 to 50 d(-1) for the fast release (average approximately 5 d(-1)) and 0.0001 to 0.1 d(-1) for the slow release (average approximately 0.03 d(-1)). The implications of these findings with regard to laboratory studies, theoretical desorption process mechanisms, and water quality modeling needs are presented and discussed.
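
    A sketch of the biphasic (fast/slow) empirical model described above, fitted to a hypothetical desorption time series: the fraction remaining on the sediment is modelled as F*exp(-k_fast*t) + (1 - F)*exp(-k_slow*t).

        import numpy as np
        from scipy.optimize import curve_fit

        def biphasic(t, F, k_fast, k_slow):
            # Fraction of chemical remaining sorbed at time t (days)
            return F * np.exp(-k_fast * t) + (1.0 - F) * np.exp(-k_slow * t)

        # Hypothetical observations: time (d) and fraction remaining
        t_obs = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        f_obs = np.array([1.00, 0.72, 0.55, 0.48, 0.46, 0.44, 0.41, 0.35])

        popt, _ = curve_fit(biphasic, t_obs, f_obs, p0=[0.5, 5.0, 0.01],
                            bounds=([0, 0, 0], [1, 50, 0.1]))
        F, k_fast, k_slow = popt
        print(f"F = {F:.2f}, k_fast = {k_fast:.2f} 1/d, k_slow = {k_slow:.4f} 1/d")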

  19. Theoretical and Experimental Investigations on Droplet Evaporation and Droplet Ignition at High Pressures

    NASA Technical Reports Server (NTRS)

    Ristau, R.; Nagel, U.; Iglseder, H.; Koenig, J.; Rath, H. J.; Normura, H.; Kono, M.; Tanabe, M.; Sato, J.

    1993-01-01

    The evaporation of fuel droplets under high ambient pressure and temperature in normal gravity and microgravity has been investigated experimentally. For subcritical ambient conditions, droplet evaporation after a heat-up period follows the d(exp 2)-law. For all data the evaporation constant increases as the ambient temperature increases. At identical ambient conditions the evaporation constant under microgravity is smaller compared to normal gravity. This effect can first be observed at 1 bar and increases with ambient pressure. Preliminary experiments on ignition delay for self-igniting fuel droplets have been performed. Above a 1 s delay time, at identical ambient conditions, significant differences in the results of the normal and microgravity data are observed. Self-ignition occurs within different temperature ranges due to the influence of gravity. The time dependent behavior of the droplet is examined theoretically. In the calculations two different approaches for the gas phase are applied. In the first approach the conditions at the interface are given using a quasi steady theory approximation. The second approach uses a set of time dependent governing equations for the gas phase which are then evaluated. In comparison, the second model shows a better agreement with the drop tower experiments. In both cases a time dependent gasification rate is observed.
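
    As a small worked example of the d(exp 2)-law mentioned above, the evaporation constant K can be estimated as the (negative) slope of a straight-line fit of droplet diameter squared against time once the heat-up period is excluded; all numbers are hypothetical.

        import numpy as np

        # Hypothetical droplet diameters (mm) after the heat-up period
        t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])        # s
        d = np.array([1.00, 0.95, 0.89, 0.83, 0.76, 0.69])  # mm

        slope, intercept = np.polyfit(t, d**2, 1)   # d^2 = d0^2 - K * t
        K = -slope
        print(f"evaporation constant K = {K:.3f} mm^2/s")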

  20. Molecular dynamics for near melting temperatures simulations of metals using modified embedded-atom method

    NASA Astrophysics Data System (ADS)

    Etesami, S. Alireza; Asadi, Ebrahim

    2018-01-01

    Availability of a reliable interatomic potential is one of the major challenges in utilizing molecular dynamics (MD) for simulations of metals at near the melting temperatures and melting point (MP). Here, we propose a novel approach to address this challenge in the concept of modified-embedded-atom (MEAM) interatomic potential; also, we apply the approach on iron, nickel, copper, and aluminum as case studies. We propose adding experimentally available high temperature elastic constants and MP of the element to the list of typical low temperature properties used for the development of MD interatomic potential parameters. We show that the proposed approach results in a reasonable agreement between the MD calculations of melting properties such as latent heat, expansion in melting, liquid structure factor, and solid-liquid interface stiffness and their experimental/computational counterparts. Then, we present the physical properties of mentioned elements near melting temperatures using the new MEAM parameters. We observe that the behavior of elastic constants, heat capacity and thermal linear expansion coefficient at room temperature compared to MP follows an empirical linear relation (α±β × MP) for transition metals. Furthermore, a linear relation between the tetragonal shear modulus and the enthalpy change from room temperature to MP is observed for face-centered cubic materials.

  1. Analysis of Tube Free Hydroforming using an Inverse Approach with FLD-based Adjustment of Process Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.

    2003-04-01

    This paper employs an inverse approach (IA) formulation for the analysis of tubes under free hydroforming conditions. The IA formulation is derived from that of Guo et al. established for flat sheet hydroforming analysis using constant strain triangular membrane elements. At first, an incremental analysis of free hydroforming for a hot-dip galvanized (HG/Z140) DP600 tube is performed using the finite element Marc code. The deformed geometry obtained at the last converged increment is then used as the final configuration in the inverse analysis. This comparative study allows us to assess the predicting capability of the inverse analysis. The results will be compared with the experimental values determined by Asnafi and Skogsgardh. After that, a procedure based on a forming limit diagram (FLD) is proposed to adjust the process parameters such as the axial feed and internal pressure. Finally, the adjustment process is illustrated through a re-analysis of the same tube using the inverse approach.

  2. A Fresh Look at Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    ERIC Educational Resources Information Center

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…

  3. Precisely and Accurately Inferring Single-Molecule Rate Constants

    PubMed Central

    Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.

    2017-01-01

    The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state versus time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal versus time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restrict the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories. PMID:27793280
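
    A minimal, hedged sketch of one ingredient discussed above: given dwell times in a state idealized from a signal trajectory, the rate constant for leaving that state can be estimated by maximum likelihood (k = 1/mean dwell time), and a conjugate Gamma prior gives a Bayesian credible interval that makes the finite-sample precision explicit. This is a textbook construction, not the authors' specific estimator.

        import numpy as np
        from scipy.stats import gamma

        rng = np.random.default_rng(1)
        k_true = 2.0                                       # s^-1 (hypothetical)
        dwells = rng.exponential(1.0 / k_true, size=40)    # 40 observed dwell times

        # Maximum-likelihood estimate
        k_ml = 1.0 / dwells.mean()

        # Bayesian posterior for k with a Gamma(shape=a0, rate=b0) prior:
        # posterior is Gamma(a0 + n, b0 + sum of dwells)
        a0, b0 = 1.0, 0.1
        a_post, b_post = a0 + dwells.size, b0 + dwells.sum()
        ci = gamma.ppf([0.025, 0.975], a=a_post, scale=1.0 / b_post)

        print(f"k_ML = {k_ml:.2f} s^-1, 95% credible interval = [{ci[0]:.2f}, {ci[1]:.2f}] s^-1")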

  4. Free energy perturbation method for measuring elastic constants of liquid crystals

    NASA Astrophysics Data System (ADS)

    Joshi, Abhijeet

    There is considerable interest in designing liquid crystals capable of yielding specific morphological responses in confined environments, including capillaries and droplets. The morphology of a liquid crystal is largely dictated by the elastic constants, which are difficult to measure and are only available for a handful of substances. In this work, a first-principles based method is proposed to calculate the Frank elastic constants of nematic liquid crystals directly from atomistic models. These include the standard splay, twist and bend deformations, and the often-ignored but important saddle-splay constant. The proposed method is validated using a well-studied Gay-Berne(3,5,2,1) model; we examine the effects of temperature and system size on the elastic constants in the nematic and smectic phases. We find that our measurements of the splay, twist, and bend elastic constants are consistent with previous estimates for the nematic phase. We further outline the implementation of our approach for the saddle-splay elastic constant, and find it to have a value at the limits of the Ericksen inequalities. We then proceed to report results for the elastic constants of the commonly known liquid crystal 4-pentyl-4'-cyanobiphenyl (5CB) using an atomistic model, and show that the values predicted by our approach are consistent with a subset of the available but limited experimental literature.

  5. A drift line bias estimator: ARMA-based filter or calibration method, and its application in BDS/GPS-based attitude determination

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Yanqing, Hou; Jie, Wu

    2016-12-01

    The multi-antenna synchronized receiver (using a common clock) is widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve the positioning or AD accuracy. Thus, the line bias (LB) parameter (fractional bias isolating) should be calibrated in the single-differenced phase equations. In the past decades, all researchers estimated the LB as a constant parameter in advance and compensated for it in real time. However, the constant-LB assumption is inappropriate in practical applications because of the physical length and permittivity changes of the cables, caused by environmental temperature variation and the instability of the receiver's inner circuit transmitting delay. Considering the LB drift (or colored LB) in practical circumstances, this paper proposes a real-time estimator using an auto-regressive moving average (ARMA)-based prediction/whitening filter model or a moving average (MA)-based constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2) and ARMA(2, 1), are applied for the LB prediction. The real-time relative positioning model using the ARMA-based predicted LB is derived, and it is theoretically proved that the positioning accuracy is better than that of the traditional double-difference carrier phase (DDCP) model. The drifting LB is defined with a phase temperature changing-rate integral function, which is a random walk process if the phase temperature changing rate is white noise; this is validated by the analysis of the AR model coefficient. The auto-covariance function shows that the LB is indeed varying in time and that estimating it as a constant is not safe, which is also demonstrated by the analysis of the LB variation of each visible satellite during a zero and short baseline BDS/GPS experiment. Compared to the DDCP approach, in the zero-baseline experiment, the LB constant calibration (LBCC) and MA approaches improved the positioning accuracy of the vertical component, while slightly degrading the accuracy of the horizontal components. The ARMA(1, 0) model, however, improved the positioning accuracy of all three components, with 40 and 50 % improvement of the vertical component for BDS and GPS, respectively. In the short-baseline experiment, compared to the DDCP approach, the LBCC approach yielded poor positioning solutions and degraded the AD accuracy; both the MA and ARMA-based filter approaches improved the AD accuracy. Moreover, the ARMA(1, 0) and ARMA(1, 1) models have relatively better performance, with the elevation-angle accuracy improving by 55 % and 48 % for the ARMA(1, 1) and MA models for GPS, respectively. Furthermore, the drifting LB variation is found to be continuous and slowly cumulative; the variation magnitudes in the unit of length are almost identical on different frequency carrier phases, so the LB variation does not show an obvious correlation between different frequencies. Consequently, the wide-lane LB in the unit of cycles is very stable, while the narrow-lane LB varies largely in time. This reasoning probably also explains the phenomenon that the wide-lane LB originating in the satellites is stable, while the narrow-lane LB varies. The results of the ARMA-based filters are better than those of the MA model, which probably implies that the modeling of the drifting LB can further improve the precise point positioning accuracy.
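
    To make the filtering idea concrete, here is a hedged sketch (not the authors' implementation) of one-step-ahead prediction of a slowly drifting line-bias series with an ARMA(1, 1) model using statsmodels; the synthetic series is an AR(1) drift plus noise standing in for real single-differenced residuals.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(2)
        n, phi = 500, 0.98
        eps = rng.normal(0.0, 0.002, n)
        drift = np.zeros(n)
        for i in range(1, n):                        # slowly drifting LB (cycles)
            drift[i] = phi * drift[i - 1] + eps[i]
        lb = drift + rng.normal(0.0, 0.005, n)       # observed LB with measurement noise

        model = ARIMA(lb, order=(1, 0, 1))           # ARMA(1, 1) with a constant term
        res = model.fit()
        print("one-step-ahead LB prediction:", res.forecast(steps=1)[0])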

  6. A New Method of Comparing Forcing Agents in Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravitz, Benjamin S.; MacMartin, Douglas; Rasch, Philip J.

    We describe a new method of comparing different climate forcing agents (e.g., CO2, CH4, and solar irradiance) that avoids many of the ambiguities introduced by temperature-related climate feedbacks. This is achieved by introducing an explicit feedback loop external to the climate model that adjusts one forcing agent to balance another while keeping global mean surface temperature constant. Compared to current approaches, this method has two main advantages: (i) the need to define radiative forcing is bypassed and (ii) by maintaining roughly constant global mean temperature, the effects of state dependence on internal feedback strengths are minimized. We demonstrate this approach for several different forcing agents and derive the relationships between these forcing agents in two climate models; comparisons between forcing agents are highly linear in concordance with predicted functional forms. Transitivity of the relationships between the forcing agents appears to hold within a wide range of forcing. The relationships between the forcing agents obtained from this method are consistent across both models but differ from relationships that would be obtained from calculations of radiative forcing, highlighting the importance of controlling for surface temperature feedback effects when separating radiative forcing and climate response.

  7. Elastic constants of stressed and unstressed materials in the phase-field crystal model

    NASA Astrophysics Data System (ADS)

    Wang, Zi-Le; Huang, Zhi-Feng; Liu, Zhirong

    2018-04-01

    A general procedure is developed to investigate the elastic response and calculate the elastic constants of stressed and unstressed materials through continuum field modeling, particularly the phase-field crystal (PFC) models. It is found that for a complete description of the system response to elastic deformation, the variations of all the quantities of lattice wave vectors, their density amplitudes (including the corresponding anisotropic variation and degeneracy breaking), the average atomic density, and the system volume should be incorporated. The quantitative and qualitative results of elastic constant calculations highly depend on the physical interpretation of the density field used in the model and, importantly, on the intrinsic pressure that usually pre-exists in the model system. A formulation based on thermodynamics is constructed to account for the effects caused by constant pre-existing stress during homogeneous elastic deformation, through the introduction of a generalized Gibbs free energy and an effective finite strain tensor used for determining the elastic constants. The elastic properties of both solid and liquid states can be well reproduced by this unified approach, as demonstrated by an analysis for the liquid state and numerical evaluations for the bcc solid phase. The numerical calculations of bcc elastic constants and Poisson's ratio through this method generate results that are consistent with experimental conditions, and better match the data of bcc Fe given by molecular dynamics simulations as compared to previous work. The general theory developed here is applicable to the study of different types of stressed or unstressed material systems under elastic deformation.

  8. A comparative modeling study of a dual tracer experiment in a large lysimeter under atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.

    2009-09-01

    In this paper, five model approaches with different physical and mathematical concepts, varying in their model complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube model), three lumped parameter (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes are also describable assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variable-saturation flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to high fitting accuracy and parameter similarity, all model approaches indicated reliable results.

  9. Effect of Sodium Chloride on α-Dicarbonyl Compound and 5-Hydroxymethyl-2-furfural Formations from Glucose under Caramelization Conditions: A Multiresponse Kinetic Modeling Approach.

    PubMed

    Kocadağlı, Tolgahan; Gökmen, Vural

    2016-08-17

    This study aimed to investigate the kinetics of α-dicarbonyl compound formation in glucose and a glucose-sodium chloride mixture during heating under caramelization conditions. Changes in the concentrations of glucose, fructose, glucosone, 1-deoxyglucosone, 3-deoxyglucosone, 3,4-dideoxyglucosone, 5-hydroxymethyl-2-furfural (HMF), glyoxal, methylglyoxal, and diacetyl were determined. A comprehensive reaction network was built, and the multiresponse model was compared to the experimentally observed data. Interconversion between glucose and fructose became 2.5 times faster in the presence of NaCl at 180 and 200 °C. The effect of NaCl on the rate constants of α-dicarbonyl compound formation varied with the precursor, the compound itself, and temperature. A decrease in the rate constants of 3-deoxyglucosone and 1-deoxyglucosone formation in the presence of NaCl was observed. HMF formation was revealed to proceed mainly via isomerization to fructose and dehydration over cyclic intermediates, and the rate constants increase 4-fold in the presence of NaCl.

  10. Evaluation of constant-Weber-number scaling for icing tests

    NASA Technical Reports Server (NTRS)

    Anderson, David N.

    1996-01-01

    Previous studies showed that for conditions simulating an aircraft encountering super-cooled water droplets the droplets may splash before freezing. Other surface effects dependent on the water surface tension may also influence the ice accretion process. Consequently, the Weber number appears to be important in accurately scaling ice accretion. A scaling method which uses a constant-Weber-number approach has been described previously; this study provides an evaluation of this scaling method. Tests are reported on cylinders of 2.5 to 15-cm diameter and NACA 0012 airfoils with chords of 18 to 53 cm in the NASA Lewis Icing Research Tunnel (IRT). The larger models were used to establish reference ice shapes, the scaling method was applied to determine appropriate scaled test conditions using the smaller models, and the ice shapes were compared. Icing conditions included warm glaze, horn glaze and mixed. The smallest size scaling attempted was 1/3, and scale and reference ice shapes for both cylinders and airfoils indicated that the constant-Weber-number scaling method was effective for the conditions tested.
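
    As a back-of-the-envelope illustration of constant-Weber-number scaling (with the Weber number taken in the common form We = rho*V^2*L/sigma and the characteristic length taken as the model size, which may differ in detail from the report's definition), the sketch below computes the scale-test airspeed needed to preserve We when the model size is reduced; numbers are hypothetical.

        # Match We = rho * V^2 * L / sigma between reference and scale tests
        rho = 1000.0      # water density, kg/m^3
        sigma = 0.076     # water surface tension near 0 C, N/m (approximate)

        L_ref, V_ref = 0.53, 67.0        # reference chord (m) and airspeed (m/s), hypothetical
        L_scl = L_ref / 3.0              # 1/3-scale model

        We_ref = rho * V_ref**2 * L_ref / sigma
        V_scl = (We_ref * sigma / (rho * L_scl)) ** 0.5   # velocity preserving We
        print(f"We_ref = {We_ref:.3g}, scale velocity = {V_scl:.1f} m/s")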

  11. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.

  12. Tunneling in quantum cosmology and holographic SYM theory

    NASA Astrophysics Data System (ADS)

    Ghoroku, Kazuo; Nakano, Yoshimasa; Tachibana, Motoi; Toyoda, Fumihiko

    2018-03-01

    We study the time evolution of the early Universe, which is developed by a cosmological constant Λ4 and supersymmetric Yang-Mills (SYM) fields in the Friedmann-Robertson-Walker space-time. The renormalized vacuum expectation value of the energy-momentum tensor of the SYM theory is obtained in a holographic way. It includes a radiation of the SYM field, parametrized as C. The evolution is controlled by this radiation C and the cosmological constant Λ4. For positive Λ4, an inflationary solution is obtained at late time. When C is added, the quantum mechanical situation at early time changes considerably. Here we perform the early-time analysis in terms of two different approaches, (i) the Wheeler-DeWitt equation and (ii) a Lorentzian path integral with the Picard-Lefschetz method by introducing an effective action. The results of the two methods are compared.

  13. Characterization of Forest Opacity Using Multi-Angular Emission and Backscatter Data

    NASA Technical Reports Server (NTRS)

    Kurum, Mehmet; O'Neill, Peggy; Lang, Roger H.; Joseph, Alicia T.; Cosh, Michael H.; Jackson, Thomas J.

    2010-01-01

    This paper discusses the results from a series of field experiments using ground-based L-band microwave active/passive sensors. Three independent approaches are applied to the microwave data to determine the vegetation opacity of coniferous trees. First, a zero-order radiative transfer model is fitted to multi-angular microwave emissivity data in a least-squares sense to provide "effective" vegetation optical depth. Second, a ratio between radar backscatter measurements with the corner reflector under trees and in an open area is calculated to obtain "measured" tree propagation characteristics. Finally, the "theoretical" propagation constant is determined by the forward scattering theorem using detailed measurements of size/angle distributions and dielectric constants of the tree constituents (trunk, branches, and needles). The results indicate that the "effective" values underestimate attenuation compared to both "theoretical" and "measured" values.

  14. Spectral emissivities and optical constants of electromagnetically levitated liquid metals as functions of temperature and wavelength

    NASA Technical Reports Server (NTRS)

    Krishnan, S.; Hauge, R. H.; Margrave, J. L.

    1989-01-01

    The development of a noncontact temperature measurement device utilizing rotating analyzer ellipsometry is described. The technique circumvents the necessity of spectral emissivity estimation by direct measurement concomitant with radiance brightness. Using this approach, the optical properties of the electromagnetically levitated liquid metals Cu, Ag, Au, Ni, Pd, Pt, and Zr were measured in situ at four wavelengths and up to 600 K superheat in the liquid. The data suggest an increase in the emissivity of the liquid compared with the incandescent solid. The data also show moderate temperature dependence of the spectral emissivity. A few measurements of the optical properties of undercooled liquid metals were also conducted. The data for both solids and liquids show excellent agreement with available values in the literature for the spectral emissivities as well as the optical constants.

  15. Fast backprojection-based reconstruction of spectral-spatial EPR images from projections with the constant sweep of a magnetic field.

    PubMed

    Komarov, Denis A; Hirata, Hiroshi

    2017-08-01

    In this paper, we introduce a procedure for the reconstruction of spectral-spatial EPR images using projections acquired with the constant sweep of a magnetic field. The application of a constant field-sweep and a predetermined data sampling rate simplifies the requirements for EPR imaging instrumentation and facilitates the backprojection-based reconstruction of spectral-spatial images. The proposed approach was applied to the reconstruction of a four-dimensional numerical phantom and to actual spectral-spatial EPR measurements. Image reconstruction using projections with a constant field-sweep was three times faster than the conventional approach with the application of a pseudo-angle and a scan range that depends on the applied field gradient. Spectral-spatial EPR imaging with a constant field-sweep for data acquisition only slightly reduces the signal-to-noise ratio or functional resolution of the resultant images and can be applied together with any common backprojection-based reconstruction algorithm. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?

    ERIC Educational Resources Information Center

    Ravinder, Handanhal V.

    2013-01-01

    A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
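
    The same optimization can be carried out outside a spreadsheet; below is a minimal sketch (with hypothetical demand data) that chooses the smoothing constant alpha by minimizing the sum of squared one-step-ahead errors, i.e. what Solver would be asked to do.

        import numpy as np
        from scipy.optimize import minimize_scalar

        demand = np.array([120, 132, 128, 141, 150, 147, 158, 162, 155, 170], float)

        def sse(alpha):
            # Simple exponential smoothing, accumulating one-step-ahead squared errors
            forecast = demand[0]
            total = 0.0
            for y in demand[1:]:
                total += (y - forecast) ** 2
                forecast = alpha * y + (1.0 - alpha) * forecast
            return total

        best = minimize_scalar(sse, bounds=(0.0, 1.0), method="bounded")
        print(f"optimal alpha = {best.x:.3f}, SSE = {best.fun:.1f}")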

  17. A Simple Method to Calculate the Temperature Dependence of the Gibbs Energy and Chemical Equilibrium Constants

    ERIC Educational Resources Information Center

    Vargas, Francisco M.

    2014-01-01

    The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although, this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
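
    For concreteness, integrating the Gibbs-Helmholtz relation with a temperature-independent reaction enthalpy gives the familiar van 't Hoff form, which the short sketch below evaluates for hypothetical inputs.

        import math

        def K_at_T2(K1, T1, T2, dH):
            """van 't Hoff: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1), with dH in J/mol."""
            R = 8.314462618  # gas constant, J/(mol K)
            return K1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

        # Hypothetical exothermic reaction: K = 50 at 298.15 K, dH = -40 kJ/mol
        print(K_at_T2(50.0, 298.15, 350.0, -40.0e3))   # K decreases on heating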

  18. Estimation of variance in Cox's regression model with shared gamma frailties.

    PubMed

    Andersen, P K; Klein, J P; Knudsen, K M; Tabanera y Palacios, R

    1997-12-01

    The Cox regression model with a shared frailty factor allows for unobserved heterogeneity or for statistical dependence between the observed survival times. Estimation in this model when the frailties are assumed to follow a gamma distribution is reviewed, and we address the problem of obtaining variance estimates for regression coefficients, frailty parameter, and cumulative baseline hazards using the observed nonparametric information matrix. A number of examples are given comparing this approach with fully parametric inference in models with piecewise constant baseline hazards.

  19. On Adaptive Cell-Averaging CFAR (Constant False-Alarm Rate) Radar Signal Detection

    DTIC Science & Technology

    1987-10-01

    Final technical report (Syracuse University). One approach to adaptive detection in a nonstationary noise and clutter background is to compare the processed target signal to an adaptive …

  20. Removal of Surface-Reflected Light for the Measurement of Remote-Sensing Reflectance from an Above-Surface Platform

    DTIC Science & Technology

    2010-12-01

    … remote-sensing reflectance can be highly inaccurate if a spectrally constant value is applied (although errors can be reduced by carefully filtering measured raw data). To remove surface-reflected light in field measurements of remote-sensing reflectance, a spectral optimization approach was applied, with results compared with those from remote-sensing models and from direct measurements. The agreement from different determinations suggests that reasonable results for remote-sensing reflectance of clear …

  1. Removal of Surface-Reflected Light for the Measurement of Remote-Sensing Reflectance from an Above-Surface Platform

    DTIC Science & Technology

    2010-12-06

    … remote-sensing reflectance can be highly inaccurate if a spectrally constant value is applied (although errors can be reduced by carefully filtering measured raw data). To remove surface-reflected light in field measurements of remote-sensing reflectance, a spectral optimization approach was applied, with results compared with those from remote-sensing models and from direct measurements. The agreement from different determinations suggests that reasonable results for remote-sensing reflectance of clear …

  2. Plasma-based wakefield accelerators as sources of axion-like particles

    NASA Astrophysics Data System (ADS)

    Burton, David A.; Noble, Adam

    2018-03-01

    We estimate the average flux density of minimally-coupled axion-like particles (ALPs) generated by a laser-driven plasma wakefield propagating along a constant strong magnetic field. Our calculations suggest that a terrestrial source based on this approach could generate a pulse of ALPs whose flux density is comparable to that of solar ALPs at Earth. This mechanism is optimal for ALPs with mass in the range of interest of contemporary experiments designed to detect dark matter using microwave cavities.

  3. Verification of a Viscous Computational Aeroacoustics Code using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.

  4. Verification of a Viscous Computational Aeroacoustics Code Using External Verification Analysis

    NASA Technical Reports Server (NTRS)

    Ingraham, Daniel; Hixon, Ray

    2015-01-01

    The External Verification Analysis approach to code verification is extended to solve the three-dimensional Navier-Stokes equations with constant properties, and is used to verify a high-order computational aeroacoustics (CAA) code. After a brief review of the relevant literature, the details of the EVA approach are presented and compared to the similar Method of Manufactured Solutions (MMS). Pseudocode representations of EVA's algorithms are included, along with the recurrence relations needed to construct the EVA solution. The code verification results show that EVA was able to convincingly verify a high-order, viscous CAA code without the addition of MMS-style source terms, or any other modifications to the code.

  5. Emotional loneliness in sexual murderers: a qualitative analysis.

    PubMed

    Milsom, Jacci; Beech, Anthony R; Webster, Stephen D

    2003-10-01

    This study compared levels of emotional loneliness between sexual murderers and rapists who had not gone on to kill their victim/s. All participants were life-sentenced prisoners in the United Kingdom. Assessment consisted of a semistructured interview and was subjected to grounded theory analysis. This approach is defined as the breaking down, naming, comparing, and categorizing of data. As such, it is distinguished from other qualitative methods by the process of constant comparison. This continual sifting and comparing of elements assists in promoting conceptual and theoretical development. The results of this process showed that sexual murderers, compared to rapists, reported significantly higher levels of grievance towards females in childhood, significantly higher levels of peer group loneliness in adolescence, and significantly higher levels of self as victim in adulthood.

  6. A fresh look at linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    NASA Astrophysics Data System (ADS)

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
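
    To illustrate the idea in the simplest non-trivial case (second order, distinct roots), the factorization yields the impulsive response and a particular solution by a single convolution; this is a standard computation consistent with, but not copied from, the paper:

        \[
          y'' + a_1 y' + a_0 y = f(t), \qquad
          p(D) = D^2 + a_1 D + a_0 = (D - r_1)(D - r_2), \quad r_1 \neq r_2,
        \]
        \[
          g(t) = \frac{e^{r_1 t} - e^{r_2 t}}{r_1 - r_2}
          \quad \text{(impulsive response: } g(0) = 0,\ g'(0) = 1\text{)},
        \]
        \[
          y_p(t) = \int_0^t g(t - s)\, f(s)\, ds,
        \]

    and the general solution is y_p plus the homogeneous solutions spanned by e^{r_1 t} and e^{r_2 t}.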

  7. Predicting dynamics and rheology of blood flow: A comparative study of multiscale and low-dimensional models of red blood cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Wenxiao; Fedosov, Dmitry A.; Caswell, Bruce

    In this work we compare the predictive capability of two mathematical models for red blood cells (RBCs), focusing on blood flow in capillaries and arterioles. Both RBC models, as well as their corresponding blood flows, are based on the dissipative particle dynamics (DPD) method, a coarse-grained molecular dynamics approach. The first model employs a multiscale description of the RBC (MS-RBC), with its membrane represented by hundreds or even thousands of DPD particles connected by springs into a triangular network, in combination with out-of-plane elastic bending resistance. Extra dissipation within the network accounts for membrane viscosity, while the characteristic biconcave RBC shape is achieved by imposition of constraints for constant membrane area and constant cell volume. The second model is based on a low-dimensional description (LD-RBC) constructed as a closed torus-like ring of only 10 large DPD colloidal particles. They are connected into a ring by worm-like chain (WLC) springs combined with bending resistance. The LD-RBC model can be fitted to represent the entire range of nonlinear elastic deformations as measured by optical tweezers for healthy and for malaria-infected RBCs. MS-RBC suspensions model the dynamics and rheology of blood flow accurately for any size vessel, but this approach is computationally expensive above 100 microns. Surprisingly, the much more economical suspensions of LD-RBCs also capture the blood flow dynamics and rheology accurately, except for vessels with sizes comparable to the RBC diameter. In particular, the LD-RBC suspensions are shown to properly capture the experimental data for the apparent viscosity of blood and its cell-free layer (CFL) in tube flow. Taken together, these findings suggest a hierarchical approach to modeling blood flow in the arterial tree, whereby the MS-RBC model should be employed for capillaries and arterioles below 100 microns, the LD-RBC model for arterioles, and the continuum description for arteries.

  8. Evaluation and comparison of diffusion MR methods for measuring apparent transcytolemmal water exchange rate constant

    NASA Astrophysics Data System (ADS)

    Tian, Xin; Li, Hua; Jiang, Xiaoyu; Xie, Jingping; Gore, John C.; Xu, Junzhong

    2017-02-01

    Two diffusion-based approaches, CG (constant gradient) and FEXI (filtered exchange imaging) methods, have been previously proposed for measuring transcytolemmal water exchange rate constant kin, but their accuracy and feasibility have not been comprehensively evaluated and compared. In this work, both computer simulations and cell experiments in vitro were performed to evaluate these two methods. Simulations were done with different cell diameters (5, 10, 20 μm), a broad range of kin values (0.02-30 s-1) and different SNR's, and simulated kin's were directly compared with the ground truth values. Human leukemia K562 cells were cultured and treated with saponin to selectively change cell transmembrane permeability. The agreement between measured kin's of both methods was also evaluated. The results suggest that, without noise, the CG method provides reasonably accurate estimation of kin especially when it is smaller than 10 s-1, which is in the typical physiological range of many biological tissues. However, although the FEXI method overestimates kin even with corrections for the effects of extracellular water fraction, it provides reasonable estimates with practical SNR's and more importantly, the fitted apparent exchange rate AXR showed approximately linear dependence on the ground truth kin. In conclusion, either CG or FEXI method provides a sensitive means to characterize the variations in transcytolemmal water exchange rate constant kin, although the accuracy and specificity is usually compromised. The non-imaging CG method provides more accurate estimation of kin, but limited to large volume-of-interest. Although the accuracy of FEXI is compromised with extracellular volume fraction, it is capable of spatially mapping kin in practice.

  9. Quantitative determination of band distortions in diamond attenuated total reflectance infrared spectra.

    PubMed

    Boulet-Audet, Maxime; Buffeteau, Thierry; Boudreault, Simon; Daugey, Nicolas; Pézolet, Michel

    2010-06-24

    Due to its unmatched hardness and chemical inertia, diamond offers many advantages over other materials for extreme conditions and routine analysis by attenuated total reflection (ATR) infrared spectroscopy. Its low refractive index can offer up to a 6-fold absorbance increase compared to germanium. Unfortunately, it also results in spectral distortions for strong bands compared to transmission experiments. The aim of this paper is to present a methodological approach to determine quantitatively the degree of the spectral distortions in ATR spectra. This approach requires the determination of the optical constants (refractive index and extinction coefficient) of the investigated sample. As a typical example, the optical constants of the fibroin protein of the silkworm Bombyx mori have been determined from the polarized ATR spectra obtained using both diamond and germanium internal reflection elements. The positions found for the amide I band by germanium and diamond ATR are respectively 6 and 17 cm(-1) lower than the true value determined from the k(nu) spectrum, which is calculated to be 1659 cm(-1). To determine quantitatively the effect of relevant parameters such as the film thickness and the protein concentration, various spectral simulations have also been performed. The use of a thinner film probed by light polarized in the plane of incidence and dilution of the protein sample can help in obtaining ATR spectra that are closer to their transmittance counterparts. To extend this study to any system, the ATR distortion amplitude has been evaluated using spectral simulations performed for bands of various intensities and widths. From these simulations, a simple empirical relationship has been found to estimate the band shift from the experimental band height and width, which could be of practical use for ATR users. This paper shows that the determination of optical constants provides an efficient way to recover the true spectrum shape and band frequencies of distorted ATR spectra.

  10. Universality of P - V criticality in horizon thermodynamics

    NASA Astrophysics Data System (ADS)

    Hansen, Devin; Kubizňák, David; Mann, Robert B.

    2017-01-01

    We study P - V criticality of black holes in Lovelock gravities in the context of horizon thermodynamics. The corresponding first law of horizon thermodynamics emerges as one of the Einstein-Lovelock equations and assumes the universal (independent of matter content) form δ E = T δ S - P δ V , where P is identified with the total pressure of all matter in the spacetime (including a cosmological constant Λ if present). We compare this approach to recent advances in extended phase space thermodynamics of asymptotically AdS black holes where the 'standard' first law of black hole thermodynamics is extended to include a pressure-volume term, where the pressure is entirely due to the (variable) cosmological constant. We show that both approaches are quite different in interpretation. Provided there is sufficient non-linearity in the gravitational sector, we find that horizon thermodynamics admits the same interesting black hole phase behaviour seen in the extended case, such as a Hawking-Page transition, Van der Waals-like behaviour, and the presence of a triple point. We also formulate the Smarr formula in horizon thermodynamics and discuss the interpretation of the quantity E appearing in the horizon first law.

  11. GPS vertical axis performance enhancement for helicopter precision landing approach

    NASA Technical Reports Server (NTRS)

    Denaro, Robert P.; Beser, Jacques

    1986-01-01

    Several areas were investigated for improving vertical accuracy for a rotorcraft using the differential Global Positioning System (GPS) during a landing approach. Continuous deltaranging was studied, and the potential improvement from estimating acceleration was assessed by comparing the performance of several filters on a constant-acceleration turn and a rough landing profile: a position-velocity (PV) filter, a position-velocity-constant acceleration (PVAC) filter, and a position-velocity-turning acceleration (PVAT) filter. In overall statistics, the PVAC filter was found to be the most efficient, with the more complex PVAT performing equally well. Vertical performance was not significantly different among the filters. Satellite selection algorithms based on vertical errors only (vertical dilution of precision, or VDOP) and on evenly weighted cross-track and vertical errors (XVDOP) were tested. The inclusion of an altimeter was studied by modifying the PVAC filter to include a baro bias estimate; improved vertical accuracy during degraded DOP conditions resulted. Flight test results for raw differential positioning, excluding filter effects, indicated that differential operation significantly improved overall navigation accuracy. A landing glidepath steering algorithm was devised which exploits the flexibility of GPS in determining precise relative position. A method for propagating the steering command over the GPS update interval was implemented.

  12. Model based estimation of sediment erosion in groyne fields along the River Elbe

    NASA Astrophysics Data System (ADS)

    Prohaska, Sandra; Jancke, Thomas; Westrich, Bernhard

    2008-11-01

    River water quality is still a vital environmental issue, even though ongoing emissions of contaminants are being reduced in several European rivers. The mobility of historically contaminated deposits is a key issue in sediment management strategy and remediation planning. Resuspension of contaminated sediments impacts water quality and is thus important for river engineering and ecological rehabilitation. The erodibility of the sediments and associated contaminants is difficult to predict due to complex time-dependent physical, chemical, and biological processes, as well as the lack of information. Therefore, in engineering practice the values of erosion parameters are usually assumed to be constant despite their high spatial and temporal variability, which leads to a large uncertainty in the erosion parameters. The goal of the presented study is to compare a deterministic approach assuming a constant critical erosion shear stress with an innovative approach that treats the critical erosion shear stress as a random variable. Furthermore, the effective value of the critical erosion shear stress, its applicability in numerical models, and the erosion probability will be estimated. The results presented here are based on field measurements and numerical modelling of the River Elbe groyne fields.

  13. Ring rotational speed trend analysis by FEM approach in a Ring Rolling process

    NASA Astrophysics Data System (ADS)

    Allegri, G.; Giorleo, L.; Ceretti, E.

    2018-05-01

    Ring Rolling is an advanced local incremental forming technology used to directly fabricate precise seamless ring-shaped parts with various dimensions and materials. In this process two different deformations occur in order to reduce the width and the height of a preform hollow ring; as a result, a diameter expansion is obtained. In order to guarantee a uniform deformation, the preform is forced toward the Driver Roll, whose aim is to transmit the rotation to the ring. The selection of the ring rotational speed is fundamental because the higher the speed, the higher the axial symmetry of the deformation process. However, it is important to underline that the rotational speed affects not only the final ring geometry but also the loads and energy needed to produce it. Despite this importance, in industrial environments a constant value of the Driver Roll angular velocity is usually set, which results in a decreasing trend for the ring rotational speed. The main risks of this approach are failing to fulfil the axial symmetry constraint (due to the diameter expansion) and generating a highly localized deformation of the ring section. In order to improve the knowledge on this topic, in the present paper three different ring rotational speed trends (constant, linearly increasing, and linearly decreasing) were investigated by an FEM approach. Results were compared in terms of geometrical and dimensional analysis, loads, and energies required.

  14. Unscaled Bayes factors for multiple hypothesis testing in microarray experiments.

    PubMed

    Bertolino, Francesco; Cabras, Stefano; Castellanos, Maria Eugenia; Racugno, Walter

    2015-12-01

    Multiple hypothesis testing collects a series of techniques usually based on p-values as a summary of the available evidence from many statistical tests. In hypothesis testing, under a Bayesian perspective, the evidence for a specified hypothesis against an alternative, conditionally on data, is given by the Bayes factor. In this study, we approach multiple hypothesis testing based on both Bayes factors and p-values, regarding multiple hypothesis testing as a multiple model selection problem. To obtain the Bayes factors we assume default priors that are typically improper. In this case, the Bayes factor is usually undetermined due to the ratio of prior pseudo-constants. We show that ignoring prior pseudo-constants leads to unscaled Bayes factors, which do not invalidate the inferential procedure in multiple hypothesis testing because they are used within a comparative scheme. In fact, using partial information from the p-values, we are able to approximate the sampling null distribution of the unscaled Bayes factor and use it within Efron's multiple testing procedure. The simulation study suggests that, under a normal sampling model and even with small sample sizes, our approach provides false positive and false negative proportions that are lower than those of other common multiple hypothesis testing approaches based only on p-values. The proposed procedure is illustrated in two simulation studies, and the advantages of its use are shown in the analysis of two microarray experiments. © The Author(s) 2011.

  15. Cosmic acceleration in the nonlocal approach to the cosmological constant problem

    NASA Astrophysics Data System (ADS)

    Oda, Ichiro

    2018-04-01

    We have recently constructed a manifestly local formulation of a nonlocal approach to the cosmological constant problem which can treat quantum effects from both matter and gravitational fields. In this formulation, it has been explicitly shown that the effective cosmological constant is radiatively stable even in the presence of gravitational loop effects. Since we are naturally led to add the R^2 term and the corresponding topological action to the original action, we make use of this formulation to account for the late-time acceleration of the expansion of the universe in the case of open universes with infinite space-time volume. We show that when the "scalaron", which exists in R^2 gravity as an extra scalar field, has a tiny mass of order O(1 meV), we can explain the current value of the cosmological constant in a consistent manner.

  16. Preliminary Planck constant measurements via UME oscillating magnet Kibble balance

    NASA Astrophysics Data System (ADS)

    Ahmedov, H.; Babayiğit Aşkın, N.; Korutlu, B.; Orhan, R.

    2018-06-01

    The UME Kibble balance project was initiated in the second half of 2014. During this period we have studied the theoretical aspects of Kibble balances in which an oscillating magnet generates an AC Faraday voltage in a stationary coil, and constructed a trial version to implement this idea. The remarkable feature of this approach is that it can establish the link between the Planck constant and a macroscopic mass in a single experiment in the most natural way. Weak dependence on variations of environmental and experimental conditions, small size, and other useful features offered by this novel approach reduce the complexity of the experimental set-up. This paper describes the principles of the oscillating magnet Kibble balance and gives details of the preliminary Planck constant measurements. The value of the Planck constant determined with our apparatus is h/h90 = 1.000 004, with a relative standard uncertainty of 6 ppm.
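    The ratio reported above ties the measured value to the conventional constant h90 = 4/(K_J-90² R_K-90), built from the 1990 conventional electrical constants. The short sketch below, under that standard relation, recovers the corresponding numerical value of h; the ratio and uncertainty are taken from the abstract, everything else is textbook convention.

```python
# Sketch: recovering h from the reported ratio h/h90.
# h90 = 4 / (K_J-90^2 * R_K-90) uses the 1990 conventional electrical constants (exact by convention).
K_J90 = 483597.9e9      # Hz/V, conventional Josephson constant
R_K90 = 25812.807       # ohm, conventional von Klitzing constant

h90 = 4.0 / (K_J90**2 * R_K90)   # J s, conventional Planck constant
ratio = 1.000004                 # h/h90 reported in the abstract
u_rel = 6e-6                     # relative standard uncertainty (6 ppm)

h = ratio * h90
print(f"h90 = {h90:.9e} J s")
print(f"h   = {h:.9e} J s  +/- {h * u_rel:.1e} J s")
```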

  17. Molecular basis of LFER. Modeling of the electronic substituent effect using fragment quantum self-similarity measures.

    PubMed

    Gironés, Xavier; Carbó-Dorca, Ramon; Ponec, Robert

    2003-01-01

    A new approach allowing the theoretical modeling of the electronic substituent effect is proposed. The approach is based on the use of fragment Quantum Self-Similarity Measures (MQS-SM), calculated from domain-averaged Fermi holes, as new theoretical descriptors allowing the replacement of Hammett sigma constants in QSAR models. To demonstrate the applicability of this new approach, its formalism was applied to the description of the substituent effect on the dissociation of a broad series of meta- and para-substituted benzoic acids. The accuracy and predictive power of the new approach were tested by comparison with a recent exhaustive study by Sullivan et al. It has been shown that the accuracy and the predictive power of both procedures are comparable, but, in contrast to the five-parameter correlation equation necessary to describe the data in that study, our approach is simpler and, in fact, requires only a one-parameter correlation equation.

  18. A nuclear data approach for the Hubble constant measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B.

    2015-06-09

    An extraordinary number of Hubble constant measurements challenges physicists with selection of the best numerical value. The standard U.S. Nuclear Data Program (USNDP) codes and procedures have been applied to resolve this issue. The nuclear data approach has produced the most probable or recommended Hubble constant value of 67.00(770) (km/sec)/Mpc. This recommended value is based on the last 25 years of experimental research and includes contributions from different types of measurements. The present result implies (14.6 ± 1.7) × 10⁹ years as a rough estimate for the age of the Universe. The complete list of recommended results is given and possible implications are discussed.

  19. Description of bipolar charge transport in polyethylene using a fluid model with a constant mobility: model prediction

    NASA Astrophysics Data System (ADS)

    LeRoy, S.; Segur, P.; Teyssedre, G.; Laurent, C.

    2004-01-01

    We present a conduction model aimed at describing bipolar transport and space charge phenomena in low density polyethylene under dc stress. In the first part we recall the basic requirements for the description of charge transport and charge storage in disordered media with emphasis on the case of polyethylene. A quick review of available conduction models is presented and our approach is compared with these models. Then, the bases of the model are described and related assumptions are discussed. Finally, results on external current, trapped and free space charge distributions, field distribution and recombination rate are presented and discussed, considering a constant dc voltage, a step-increase of the voltage, and a polarization-depolarization protocol for the applied voltage. It is shown that the model is able to describe the general features reported for external current, electroluminescence and charge distribution in polyethylene.

  20. Thrust Force Analysis of Tripod Constant Velocity Joint Using Multibody Model

    NASA Astrophysics Data System (ADS)

    Sugiura, Hideki; Matsunaga, Tsugiharu; Mizutani, Yoshiteru; Ando, Yosei; Kashiwagi, Isashi

    A tripod constant velocity joint is used in the driveshaft of front wheel drive vehicles. Thrust force generated by this joint causes lateral vibration in these vehicles. To analyze the thrust force, a detailed model is constructed based on a multibody dynamics approach. This model includes all principal parts of the joint defined as rigid bodies and all force elements of contact and friction acting among these parts. This model utilizes a new contact modeling method of needle roller bearings for more precise and faster computation. By comparing computational and experimental results, the appropriateness of this model is verified and the principal factors inducing the second and third rotating order components of the thrust force are clarified. This paper also describes the influence of skewed needle rollers on the thrust force and evaluates the contribution of friction forces at each contact region to the thrust force.

  1. An original approach to elastic constants determination using a self-developed EMAT system

    NASA Astrophysics Data System (ADS)

    Jenot, Frédéric; Rivart, Frédéric; Camus, Liévin

    2018-04-01

    Electromagnetic Acoustic Transducers (EMATs) allow non-contact ultrasonic measurements in order to characterize structures for a wide range of applications. For non-ferromagnetic metallic materials, the excitation of elastic waves is due to Lorentz forces that result from an applied magnetic field and induced eddy currents in a near-surface region of the sample. The EMAT design is based on a magnet structure associated with a coil, leading to multiple configurations able to excite bulk and guided acoustic waves. In this work, we first present a self-developed EMAT system composed of multiple emission and reception channels. In a second part, we propose an original method to determine the elastic constants of an isotropic material. To achieve this goal, Rayleigh and shear waves are used and the advantages of this method are clearly highlighted. The results obtained are then compared with conventional measurements performed with piezoelectric transducers.

  2. Distribution of tsunami interevent times

    NASA Astrophysics Data System (ADS)

    Geist, Eric L.; Parsons, Tom

    2008-01-01

    The distribution of tsunami interevent times is analyzed using global and site-specific (Hilo, Hawaii) tsunami catalogs. An empirical probability density distribution is determined by binning the observed interevent times during a period in which the observation rate is approximately constant. The empirical distributions for both catalogs exhibit non-Poissonian behavior in which there is an abundance of short interevent times compared to an exponential distribution. Two types of statistical distributions are used to model this clustering behavior: (1) long-term clustering described by a universal scaling law, and (2) Omori law decay of aftershocks and triggered sources. The empirical and theoretical distributions all imply an increased hazard rate after a tsunami, followed by a gradual decrease with time approaching a constant hazard rate. Examination of tsunami sources suggests that many of the short interevent times are caused by triggered earthquakes, though the triggered events are not necessarily on the same fault.
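    A minimal illustration of the clustering diagnostic described above: compare the empirical fraction of short interevent times with the exponential (Poisson) prediction. The catalog below is synthetic and the threshold arbitrary; it mirrors the idea, not the authors' exact binning procedure.

```python
# Sketch: testing interevent times against the Poisson (exponential) expectation.
# Synthetic event times stand in for a tsunami catalog.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical catalog: a Poisson background plus a few clustered (triggered) bursts
background = np.cumsum(rng.exponential(scale=365.0, size=200))       # days
bursts = np.concatenate([t + np.sort(rng.exponential(10.0, 5)) for t in background[::40]])
events = np.sort(np.concatenate([background, bursts]))

dt = np.diff(events)
mean_dt = dt.mean()
# Fraction of short interevent times vs the exponential prediction P(dt < x) = 1 - exp(-x/mean)
x = 0.1 * mean_dt
empirical = np.mean(dt < x)
poisson = 1.0 - np.exp(-x / mean_dt)
print(f"P(dt < 0.1*mean): empirical {empirical:.3f} vs exponential {poisson:.3f}")
# An empirical excess at short times indicates the clustering (elevated post-event hazard) noted above.
```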

  3. Ab initio study of MF2 (M=Mn, Fe, Co, Ni) rutile-type compounds using the periodic unrestricted Hartree-Fock approach

    NASA Astrophysics Data System (ADS)

    de P. R. Moreira, Ibério; Dovesi, Roberto; Roetti, Carla; Saunders, Victor R.; Orlando, Roberto

    2000-09-01

    The ab initio periodic unrestricted Hartree-Fock method has been applied in the investigation of the ground-state structural, electronic, and magnetic properties of the rutile-type compounds MF2 (M=Mn, Fe, Co, and Ni). All electron Gaussian basis sets have been used. The systems turn out to be large band-gap antiferromagnetic insulators; the optimized geometrical parameters are in good agreement with experiment. The calculated most stable electronic state shows an antiferromagnetic order in agreement with that resulting from neutron scattering experiments. The magnetic coupling constants between nearest-neighbor magnetic ions along the [001], [111], and [100] (or [010]) directions have been calculated using several supercells. The resulting ab initio magnetic coupling constants are reasonably satisfactory when compared with available experimental data. The importance of the Jahn-Teller effect in FeF2 and CoF2 is also discussed.

  4. Automatic exposure control systems designed to maintain constant image noise: effects on computed tomography dose and noise relative to clinically accepted technique charts.

    PubMed

    Favazza, Christopher P; Yu, Lifeng; Leng, Shuai; Kofler, James M; McCollough, Cynthia H

    2015-01-01

    To compare the computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies against those from clinically accepted technique charts and AEC systems designed to allow image noise to vary. A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system, and the results were compared with clinical technique charts. For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noise of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant-noise-based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects.

  5. Absolute Calibration of Si iRMs used for Measurements of Si Paleo-nutrient proxies

    NASA Astrophysics Data System (ADS)

    Vocke, R. D., Jr.; Rabb, S. A.

    2016-12-01

    Silicon isotope variations (reported as δ30Si and δ29Si, relative to NBS28) in silicic acid dissolved in ocean waters, in biogenic silica, and in diatoms are extremely informative paleo-nutrient proxies. The resolution and comparability of such measurements depend on the quality of the isotopic Reference Materials (iRMs) defining the delta scale. We report new absolute Si isotopic measurements on the iRMs NBS28 (RM 8546 - Silica Sand), Diatomite, and Big Batch using the Avogadro measurement approach and compare them with prior assessments of these iRMs. The Avogadro Si measurement technique was developed by the German Physikalisch-Technische Bundesanstalt (PTB) to provide a precise and highly accurate method to measure absolute isotopic ratios in highly enriched 28Si (99.996%) material. These measurements are part of an international effort to redefine the kg and mole based on the Planck constant h and the Avogadro constant NA, respectively (Vocke et al., 2014 Metrologia 51, 361; Azuma et al., 2015 Metrologia 52, 360). This approach produces absolute Si isotope ratio data with lower levels of uncertainty when compared to the traditional "Atomic Weights" method of absolute isotope ratio measurement calibration. This is illustrated in Fig. 1, where absolute Si isotopic measurements on SRM 990, separated by more than 40 years of advances in instrumentation, are compared. The availability of this new technique does not mean that absolute Si isotopic ratios are, or ever will be, better for routine Si isotopic measurements seeking isotopic variations in nature; they are not. However, by determining the absolute isotopic ratios of all the Si iRM scale artifacts, such iRMs become traceable to the metric system (SI), thereby automatically conferring on all the artifact-based δ30Si and δ29Si measurements traceability to the base SI unit, the mole. Such traceability should help reduce the potential for bias between different iRMs and facilitate the replacement of delta-scale artefacts when they run out. Fig. 1: Comparison of absolute isotopic measurements of SRM 990 using two radically different approaches to absolute calibration and mass bias corrections.

  6. Electron attachment to CF3 and CF3Br at temperatures up to 890 K: experimental test of the kinetic modeling approach.

    PubMed

    Shuman, Nicholas S; Miller, Thomas M; Viggiano, Albert A; Troe, Jürgen

    2013-05-28

    Thermal rate constants and product branching fractions for electron attachment to CF3Br and the CF3 radical have been measured over the temperature range 300-890 K, the upper limit being restricted by thermal decomposition of CF3Br. Both measurements were made in Flowing Afterglow Langmuir Probe apparatuses; the CF3Br measurement was made using standard techniques, and the CF3 measurement using the Variable Electron and Neutral Density Attachment Mass Spectrometry technique. Attachment to CF3Br proceeds exclusively by the dissociative channel yielding Br(-), with a rate constant increasing from 1.1 × 10(-8) cm(3) s(-1) at 300 K to 5.3 × 10(-8) cm(3) s(-1) at 890 K, somewhat lower than previous data at temperatures up to 777 K. CF3 attachment proceeds through competition between associative attachment yielding CF3 (-) and dissociative attachment yielding F(-). Prior data up to 600 K showed the rate constant monotonically increasing, with the partial rate constant of the dissociative channel following Arrhenius behavior; however, extrapolation of the data using a recently proposed kinetic modeling approach predicted the rate constant to turn over at higher temperatures, despite being only ~5% of the collision rate. The current data agree well with the previous kinetic modeling extrapolation, providing a demonstration of the predictive capabilities of the approach.
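    As a rough illustration of the temperature dependence quoted above, the sketch below extracts two-point Arrhenius parameters from the CF3Br endpoint rate constants given in the abstract; this is purely illustrative and far simpler than the kinetic modeling approach the paper actually tests.

```python
# Sketch: two-point Arrhenius parameters from the CF3Br rate constants quoted above
# (1.1e-8 cm^3 s^-1 at 300 K, 5.3e-8 cm^3 s^-1 at 890 K). Purely illustrative.
import numpy as np

kB = 8.617333e-5           # eV/K
T1, k1 = 300.0, 1.1e-8
T2, k2 = 890.0, 5.3e-8

# k = A * exp(-Ea / (kB*T))  =>  ln(k2/k1) = (Ea/kB) * (1/T1 - 1/T2)
Ea = kB * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
A = k1 * np.exp(Ea / (kB * T1))
print(f"Ea ~ {Ea:.3f} eV, A ~ {A:.2e} cm^3 s^-1")
print(f"check k(890 K) = {A * np.exp(-Ea / (kB * 890.0)):.2e} cm^3 s^-1")
```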

  7. Spatial Burnout in Water Reactors with Nonuniform Startup Distributions of Uranium and Boron

    NASA Technical Reports Server (NTRS)

    Fox, Thomas A.; Bogart, Donald

    1955-01-01

    Spatial burnout calculations have been made of two types of water-moderated cylindrical reactor using boron as a burnable poison to increase reactor life. Specific reactors studied were a version of the Submarine Advanced Reactor (SAR) and a supercritical water reactor (SCW). Burnout characteristics such as reactivity excursion, neutron-flux and heat-generation distributions, and uranium and boron distributions have been determined for core lives corresponding to a burnup of approximately 7 kilograms of fully enriched uranium. All reactivity calculations have been based on the actual nonuniform distribution of absorbers existing during intervals of core life. Spatial burnout of uranium and boron and spatial build-up of fission products and equilibrium xenon have been considered. Calculations were performed on the NACA nuclear reactor simulator using two-group diffusion theory. The following reactor burnout characteristics have been demonstrated: 1. A significantly lower excursion in reactivity during core life may be obtained by a nonuniform rather than uniform startup distribution of uranium. Results for SCW with uranium distributed to provide constant radial heat generation and a core life corresponding to a uranium burnup of 7 kilograms indicated a maximum excursion in reactivity of 2.5 percent. This compares with a maximum excursion of 4.2 percent obtained for the same core life when uranium was uniformly distributed at startup. Boron was incorporated uniformly in these cores at startup. 2. It is possible to approach constant radial heat generation during the life of a cylindrical core by means of startup nonuniform radial and axial distributions of uranium and boron. Results for SCW with a nonuniform radial distribution of uranium to provide constant radial heat generation at startup and with boron for longevity indicate relatively small departures from the initially constant radial heat generation distribution during core life. Results for SAR with a sinusoidal rather than uniform axial distribution of boron indicate significant improvements in the axial heat generation distribution during the greater part of core life. 3. Uranium investments for cylindrical reactors with nonuniform radial uranium distributions which provide constant radial heat generation per unit core volume are somewhat higher than for reactors with uniform uranium concentration at startup. On the other hand, uranium investments for reactors with axial boron distributions which approach constant axial heat generation are somewhat smaller than for reactors with uniform boron distributions at startup.

  8. A new approach using coagulation rate constant for evaluation of turbidity removal

    NASA Astrophysics Data System (ADS)

    Al-Sameraiy, Mukheled

    2017-06-01

    Coagulation-flocculation-sedimentation processes for treating three levels of bentonite synthetic turbid water using date seeds (DS) and alum (A) coagulants were investigated in previous research work. In the current research, the same experimental results were used to adopt a new approach on the basis of using the coagulation rate constant as an investigating parameter to identify the optimum doses of these coagulants. Moreover, the performance of these coagulants in meeting the WHO turbidity standard was assessed by introducing a new evaluating criterion in terms of a critical coagulation rate constant (kc). Coagulation rate constants (k2) were mathematically calculated from the second-order form of the coagulation process for each coagulant. The maximum k2 values corresponded to the doses that are considered the optimum doses. The proposed criterion to assess the performance of the coagulation process, based on the mathematical representation of the WHO turbidity guidelines in the second-order form of the coagulation process, states that k2 for each coagulant should be ≥ kc for each level of synthetic turbid water. For all tested turbid waters, the DS coagulant could not satisfy this criterion, while the A coagulant could. The results obtained in the present research agree exactly with the previously published results in terms of finding the optimum doses for each coagulant and assessing their performance. On the whole, it is recommended to consider the coagulation rate constant as a new approach and indicator for identifying optimum doses, and the critical coagulation rate constant as a new evaluating criterion to assess coagulant performance.
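    Since the abstract casts coagulation in second-order form, k2 follows directly from the initial and residual turbidity over the settling time, and kc is the value needed to just reach the WHO turbidity limit in that same time. The sketch below illustrates the k2 ≥ kc criterion; the turbidity values, settling time, and 5 NTU limit are assumed placeholders, not the study's data.

```python
# Sketch of the second-order treatment described above: 1/N_t = 1/N_0 + k2 * t,
# so k2 = (1/N_t - 1/N_0) / t for each coagulant dose, and the critical constant kc
# is the k2 needed to just reach the WHO turbidity limit in the same settling time.
# Turbidity values, settling time, and the 5-NTU limit are illustrative assumptions.

def k2_second_order(n0_ntu, nt_ntu, t_min):
    """Second-order coagulation rate constant from initial/final turbidity."""
    return (1.0 / nt_ntu - 1.0 / n0_ntu) / t_min

n0 = 100.0                  # initial synthetic turbidity, NTU
t_settle = 30.0             # settling time, minutes
who_limit = 5.0             # assumed WHO guideline turbidity, NTU

kc = k2_second_order(n0, who_limit, t_settle)   # critical coagulation rate constant

doses = {10: 40.0, 20: 12.0, 30: 4.0, 40: 6.0}  # dose (mg/L) -> residual turbidity (NTU), hypothetical
for dose, nt in doses.items():
    k2 = k2_second_order(n0, nt, t_settle)
    verdict = "meets" if k2 >= kc else "fails"
    print(f"dose {dose} mg/L: k2 = {k2:.4f} 1/(NTU*min)  ({verdict} k2 >= kc = {kc:.4f})")
# The dose with the maximum k2 is taken as the optimum dose.
```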

  9. An experimental approach to non - extensive statistical physics and Epidemic Type Aftershock Sequence (ETAS) modeling. The case of triaxially deformed sandstones using acoustic emissions.

    NASA Astrophysics Data System (ADS)

    Stavrianaki, K.; Vallianatos, F.; Sammonds, P. R.; Ross, G. J.

    2014-12-01

    Fracturing is the most prevalent deformation mechanism in rocks deformed in the laboratory under simulated upper crustal conditions. Fracturing produces acoustic emissions (AE) at the laboratory scale and earthquakes on a crustal scale. The AE technique provides a means to analyse microcracking activity inside the rock volume, and since experiments can be performed under confining pressure to simulate depth of burial, AE can be used as a proxy for natural processes such as earthquakes. Experimental rock deformation provides us with several ways to investigate time-dependent brittle deformation. Two main types of experiments can be distinguished: (1) "constant strain rate" experiments in which stress varies as a result of deformation, and (2) "creep" experiments in which deformation and deformation rate vary over time as a result of an imposed constant stress. We conducted constant strain rate experiments on air-dried Darley Dale sandstone samples at a variety of confining pressures (30 MPa, 50 MPa, 80 MPa) and on water-saturated samples with 20 MPa initial pore fluid pressure. The results from these experiments were used to determine the initial loading in the creep experiments. A non-extensive statistical physics approach was applied to the AE data in order to investigate the spatio-temporal pattern of cracks close to failure. A more detailed study was performed for the data from the creep experiments. When axial stress is plotted against time we obtain the trimodal creep curve. The Tsallis entropic index q is calculated for each stage of the curve and the results are compared with those from the constant strain rate experiments. The Epidemic Type Aftershock Sequence (ETAS) model is also applied to each stage of the creep curve and the ETAS parameters are calculated. We investigate whether these parameters are constant across all stages of the curve, or whether there are interesting patterns of variation. This research has been co-funded by the European Union (European Social Fund) and Greek national resources under the framework of the "THALES Program: SEISMO FEAR HELLARC" project of the "Education & Lifelong Learning" Operational Programme.

  10. A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yifang; Lee, Chi-Guhn; Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca

    2014-02-15

    Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points of improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.
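    A minimal sketch of the Markov-model ingredient described above: the maximum likelihood estimate of a surface voxel's transition probability, conditioned on the number of tumor neighbors, is simply the observed flip fraction for each neighbor count. The arrays below are synthetic stand-ins for the patient data.

```python
# Sketch of the Markov-model idea: estimate, for surface voxels, the probability of
# switching state (tumor <-> nontumor) as a function of how many neighbors are tumor.
# The maximum-likelihood estimate for a Bernoulli outcome is the observed flip fraction
# per neighbor count. All data structures here are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 5000
tumor_neighbors = rng.integers(0, 7, n_voxels)          # 0-6 face neighbors flagged as tumor
true_p = 0.08 + 0.10 * tumor_neighbors / 6.0            # synthetic transition probability
flipped = rng.random(n_voxels) < true_p                 # did the voxel change state between fractions?

for k in range(7):
    mask = tumor_neighbors == k
    if mask.any():
        p_hat = flipped[mask].mean()                    # MLE of the transition probability
        print(f"{k} tumor neighbors: P(flip) ~ {p_hat:.3f} (n = {mask.sum()})")
```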

  11. Surgery or conservative treatment for rotator cuff tear: a meta-analysis.

    PubMed

    Ryösä, Anssi; Laimi, Katri; Äärimaa, Ville; Lehtimäki, Kaisa; Kukkonen, Juha; Saltychev, Mikhail

    2017-07-01

    Comparative evidence on treating rotator cuff tear is inconclusive. The objective of this review was to evaluate the evidence on the effectiveness of tendon repair in reducing pain and improving function of the shoulder when compared with conservative treatment of symptomatic rotator cuff tear. Search on CENTRAL, MEDLINE, EMBASE, CINAHL, Web of Science and PEDro databases. Randomised controlled trials (RCTs) comparing surgery and conservative treatment of rotator cuff tear. Study selection and extraction based on the Cochrane Handbook for Systematic Reviews of Interventions. Random effects meta-analysis. Three identified RCTs involved 252 participants (123 cases and 129 controls). The risk of bias was considered low for all three RCTs. For the Constant score, the statistically non-significant effect size was 5.6 (95% CI -0.41 to 11.62) points at 1-year follow-up, favouring surgery but below the level of the minimal clinically important difference. The respective difference in pain reduction was -0.93 (95% CI -1.65 to -0.21) cm on a 0-10 pain visual analogue scale, favouring surgery. The difference was statistically significant (p = 0.012) at 1-year follow-up but below the level of the minimal clinically important difference. There is limited evidence that surgery is not more effective in treating rotator cuff tear than conservative treatment alone. Thus, a conservative approach is advocated as the initial treatment modality. Implications for Rehabilitation: There is limited evidence that surgery is not more effective in treating rotator cuff tear than conservative treatment alone. There was no clinically significant difference between surgery and active physiotherapy at 1-year follow-up in improving Constant score or reducing pain caused by rotator cuff tear. As physiotherapy is less prone to complications and less expensive than surgery, a conservative approach is advocated as the initial treatment modality for rotator cuff tears.

  12. Nuclear-spin optical rotation in xenon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savukov, Igor Mykhaylovich

    We report that the nuclear-spin optical rotation (NSOR) effect, which has potential applications in correlated nuclear-spin-resonance optical spectroscopy, has previously been explored experimentally and theoretically in liquid Xe. Calculations of the Xe NSOR constant are very challenging because the result is sensitive to correlations, relativistic effects, and the choice of basis, with strong cancellation between contributions from the lowest and remaining states. The relativistic configuration-interaction many-body-theory approach, presented here, is promising because this approach has been successful in predicting various properties of noble-gas atoms, such as energies, oscillator strengths (OSs), Verdet constants, and photoionization cross sections. However, correlations become stronger along the sequence of noble-gas atoms and the theoretical accuracy in Xe is not as high as, for example, in neon and argon. To improve the accuracy of the Xe Verdet and NSOR constants, which are calculated as explicit sums over the excited states, theoretical values for the several lowest levels are replaced with empirical values of energies, OSs, and hyperfine structure constants. We found that the Xe Verdet constant is in excellent agreement with accurate measurements. To take into account liquid effects, empirical data for energy shifts were also used to correct the NSOR constant. In conclusion, the resulting Xe NSOR constant is in good agreement with experiment, although the liquid-state effect is treated quite approximately.

  13. Nuclear-spin optical rotation in xenon

    DOE PAGES

    Savukov, Igor Mykhaylovich

    2015-10-29

    We report that the nuclear-spin optical rotation (NSOR) effect, which has potential applications in correlated nuclear-spin-resonance optical spectroscopy, has previously been explored experimentally and theoretically in liquid Xe. Calculations of the Xe NSOR constant are very challenging because the result is sensitive to correlations, relativistic effects, and the choice of basis, with strong cancellation between contributions from the lowest and remaining states. The relativistic configuration-interaction many-body-theory approach, presented here, is promising because this approach has been successful in predicting various properties of noble-gas atoms, such as energies, oscillator strengths (OSs), Verdet constants, and photoionization cross sections. However, correlations become stronger along the sequence of noble-gas atoms and the theoretical accuracy in Xe is not as high as, for example, in neon and argon. To improve the accuracy of the Xe Verdet and NSOR constants, which are calculated as explicit sums over the excited states, theoretical values for the several lowest levels are replaced with empirical values of energies, OSs, and hyperfine structure constants. We found that the Xe Verdet constant is in excellent agreement with accurate measurements. To take into account liquid effects, empirical data for energy shifts were also used to correct the NSOR constant. In conclusion, the resulting Xe NSOR constant is in good agreement with experiment, although the liquid-state effect is treated quite approximately.

  14. Bromamine Decomposition Revisited: A Holistic Approach for Analyzing Acid and Base Catalysis Kinetics.

    PubMed

    Wahman, David G; Speitel, Gerald E; Katz, Lynn E

    2017-11-21

    Chloramine chemistry is complex, with a variety of reactions occurring in series and parallel and many that are acid or base catalyzed, resulting in numerous rate constants. Bromide presence increases system complexity even further with possible bromamine and bromochloramine formation. Therefore, techniques for parameter estimation must address this complexity through thoughtful experimental design and robust data analysis approaches. The current research outlines a rational basis for constrained data fitting using Brønsted theory, application of the microscopic reversibility principle to reversible acid or base catalyzed reactions, and characterization of the relative significance of parallel reactions using fictive product tracking. This holistic approach was used on a comprehensive and well-documented data set for bromamine decomposition, allowing new interpretations of existing data by revealing that a previously published reaction scheme was not robust; it was not able to describe monobromamine or dibromamine decay outside of the conditions for which it was calibrated. The current research's simplified model (3 reactions, 17 constants) represented the experimental data better than the previously published model (4 reactions, 28 constants). A final model evaluation was conducted based on representative drinking water conditions to determine a minimal model (3 reactions, 8 constants) applicable for drinking water conditions.

  15. A simple cosmology with a varying fine structure constant.

    PubMed

    Sandvik, Håvard Bunes; Barrow, John D; Magueijo, João

    2002-01-21

    We investigate the cosmological consequences of a theory in which the electric charge e can vary. In this theory the fine structure "constant," alpha, remains almost constant in the radiation era, undergoes a small increase in the matter era, but approaches a constant value when the universe starts accelerating because of a positive cosmological constant. This model satisfies geonuclear, nucleosynthesis, and cosmic microwave background constraints on time variation in alpha, while fitting the observed accelerating Universe and evidence for small alpha variations in quasar spectra. It also places specific restrictions on the nature of the dark matter. Further tests, involving stellar spectra and Eötvös experiments, are proposed.

  16. Quantification of Peptides from Immunoglobulin Constant and Variable Regions by Liquid Chromatography-Multiple Reaction Monitoring Mass Spectrometry for Assessment of Multiple Myeloma Patients

    PubMed Central

    Remily-Wood, Elizabeth R.; Benson, Kaaron; Baz, Rachid C.; Chen, Y. Ann; Hussein, Mohamad; Hartley-Brown, Monique A.; Sprung, Robert W.; Perez, Brianna; Liu, Richard Z.; Yoder, Sean; Teer, Jamie; Eschrich, Steven A.; Koomen, John M.

    2014-01-01

    Purpose: Quantitative mass spectrometry assays for immunoglobulins (Igs) are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, e.g. multiple myeloma. Experimental design: Using LC-MS/MS data, Ig constant region peptides and transitions were selected for liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM). Quantitative assays were used to assess Igs in serum from 83 patients. Results: LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1–4, IgA1–2, IgM, IgD, and IgE, as well as kappa (κ) and lambda (λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 multiple myeloma cell line and two MM patients. Conclusions and Clinical Relevance: LC-MRM assays targeting constant region peptides determine the type and isoform of the involved immunoglobulin and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher interassay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. PMID:24723328

  17. Quantification of peptides from immunoglobulin constant and variable regions by LC-MRM MS for assessment of multiple myeloma patients.

    PubMed

    Remily-Wood, Elizabeth R; Benson, Kaaron; Baz, Rachid C; Chen, Y Ann; Hussein, Mohamad; Hartley-Brown, Monique A; Sprung, Robert W; Perez, Brianna; Liu, Richard Z; Yoder, Sean J; Teer, Jamie K; Eschrich, Steven A; Koomen, John M

    2014-10-01

    Quantitative MS assays for Igs are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, for example, multiple myeloma (MM). Using LC-MS/MS data, Ig constant region peptides and transitions were selected for LC-MRM MS. Quantitative assays were used to assess Igs in serum from 83 patients. RNA sequencing and peptide-based LC-MRM are used to define peptides for quantification of the disease-specific Ig. LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1-4, IgA1-2, IgM, IgD, and IgE, as well as kappa (κ) and lambda (λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 MM cell line and two MM patients. LC-MRM assays targeting constant region peptides determine the type and isoform of the involved Ig and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher inter-assay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Impact of density information on Rayleigh surface wave inversion results

    NASA Astrophysics Data System (ADS)

    Ivanov, Julian; Tsoflias, Georgios; Miller, Richard D.; Peterie, Shelby; Morton, Sarah; Xia, Jianghai

    2016-12-01

    We assessed the impact of density on the estimation of inverted shear-wave velocity (Vs) using the multi-channel analysis of surface waves (MASW) method. We considered the forward modeling theory, evaluated model sensitivity, and tested the effect of density information on the inversion of seismic data acquired in the Arctic. Theoretical review, numerical modeling, and inversion of modeled and real data indicated that the density ratios between layers, not the actual density values, impact the determination of surface-wave phase velocities. Application to real data compared surface-wave inversion results using: a) constant density, the most common approach in practice; b) indirect density estimates derived from refraction compressional-wave velocity observations; and c) direct density measurements in a borehole. The use of indirect density estimates reduced the final shear-wave velocity (Vs) results typically by 6-7%, and the use of densities from a borehole reduced the final Vs estimates by 10-11% compared to those from an assumed constant density. In addition to the improved absolute Vs accuracy, the resulting overall Vs changes were unevenly distributed laterally when viewed on a 2-D section, leading to an overall Vs model structure that was more representative of the subsurface environment. It was observed that the use of constant density instead of density increasing with depth not only can lead to Vs overestimation but can also create inaccurate model structures, such as a low-velocity layer. Thus, optimal Vs estimates can best be achieved using field estimates of subsurface density ratios.

  19. Transformations between Jordan and Einstein frames: Bounces, antigravity, and crossing singularities

    NASA Astrophysics Data System (ADS)

    Kamenshchik, Alexander Yu.; Pozdeeva, Ekaterina O.; Vernov, Sergey Yu.; Tronconi, Alessandro; Venturi, Giovanni

    2016-09-01

    We study the relation between the Jordan-Einstein frame transition and the possible description of the crossing of singularities in flat Friedmann universes, using the fact that the regular evolution in one frame can correspond to crossing singularities in the other frame. We show that some interesting effects arise in simple models such as one with a massless scalar field or another wherein the potential is constant in the Einstein frame. The dynamics in these models and in their conformally coupled counterparts are described in detail, and a method for the continuation of such cosmological evolutions beyond the singularity is developed. We compare our approach with some other, recently developed, approaches to the problem of the crossing of singularities.

  20. An input-to-state stability approach to verify almost global stability of a synchronous-machine-infinite-bus system.

    PubMed

    Schiffer, Johannes; Efimov, Denis; Ortega, Romeo; Barabanov, Nikita

    2017-08-13

    Conditions for almost global stability of an operating point of a realistic model of a synchronous generator with constant field current connected to an infinite bus are derived. The analysis is conducted by employing the recently proposed concept of input-to-state stability (ISS)-Leonov functions, which is an extension of the powerful cell structure principle developed by Leonov and Noldus to the ISS framework. Compared with the original ideas of Leonov and Noldus, the ISS-Leonov approach has the advantage of providing additional robustness guarantees. The efficiency of the derived sufficient conditions is illustrated via numerical experiments. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).

  1. The social determinants of substance abuse in African American baby boomers: effects of family, media images, and environment.

    PubMed

    Pope, Robert C; Wallhagen, Margaret; Davis, Harvey

    2010-07-01

    Grounded theory methodology was used to explore the social processes involved in the use of illicit drugs in older African Americans as an underpinning to the development of approaches to nursing care and treatment. Interviews were conducted with six older African American substance users who were currently in drug treatment programs. Responses to the questions were recorded, transcribed, and analyzed using constant comparative methods. Three core themes emerged: (a) family, (b) media images, and (c) environment. The core issues of substance abuse, such as the environment and larger societal forces, cannot be addressed by one discipline and mandate that clinicians move to an interdisciplinary approach to achieve a plan of care for this growing population.

  2. Temperature-Ramped 129Xe Spin-Exchange Optical Pumping

    PubMed Central

    2015-01-01

    We describe temperature-ramped spin-exchange optical pumping (TR-SEOP) in an automated high-throughput batch-mode 129Xe hyperpolarizer utilizing three key temperature regimes: (i) "hot"—where the 129Xe hyperpolarization rate is maximal, (ii) "warm"—where the 129Xe hyperpolarization approaches unity, and (iii) "cool"—where hyperpolarized 129Xe gas is transferred into a Tedlar bag with low Rb content (<5 ng per ∼1 L dose) suitable for human imaging applications. Unlike with the conventional approach of batch-mode SEOP, here all three temperature regimes may be operated under continuous high-power (170 W) laser irradiation, and hyperpolarized 129Xe gas is delivered without the need for a cryocollection step. The variable-temperature approach increased the SEOP rate by more than 2-fold compared to the constant-temperature polarization rate (e.g., giving effective values for the exponential buildup constant γSEOP of 62.5 ± 3.7 × 10⁻³ min⁻¹ vs 29.9 ± 1.2 × 10⁻³ min⁻¹) while achieving nearly the same maximum %PXe value (88.0 ± 0.8% vs 90.1 ± 0.8%, for a 500 Torr (67 kPa) Xe cell loading—corresponding to nuclear magnetic resonance/magnetic resonance imaging (NMR/MRI) enhancements of ∼3.1 × 10⁵ and ∼2.32 × 10⁸ at the relevant fields for clinical imaging and HP 129Xe production of 3 T and 4 mT, respectively); moreover, the intercycle "dead" time was also significantly decreased. The higher-throughput TR-SEOP approach can be implemented without sacrificing the level of 129Xe hyperpolarization or the experimental stability for automation—making this approach beneficial for improving the overall 129Xe production rate in clinical settings. PMID:25008290
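    Assuming the standard exponential SEOP buildup P(t) = Pmax·[1 − exp(−γSEOP·t)], the buildup constants quoted above translate into the build times sketched below; the γ values are taken from the abstract, while the 90% target is an arbitrary choice for illustration.

```python
# Sketch: time to approach the maximum polarization for the two buildup constants quoted
# above, assuming the standard exponential SEOP buildup P(t) = Pmax * (1 - exp(-gamma*t)).
import numpy as np

gamma_ramped   = 62.5e-3   # 1/min, temperature-ramped SEOP (from the abstract)
gamma_constant = 29.9e-3   # 1/min, constant-temperature SEOP

def time_to_fraction(gamma, frac=0.90):
    """Minutes needed for P(t) to reach `frac` of Pmax."""
    return -np.log(1.0 - frac) / gamma

for name, g in [("ramped", gamma_ramped), ("constant", gamma_constant)]:
    print(f"{name}: {time_to_fraction(g):.0f} min to reach 90% of Pmax")
# ~37 min vs ~77 min, consistent with the >2-fold rate improvement reported.
```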

  3. Determination of Henry’s Law Constants Using Internal Standards with Benchmark Values

    EPA Science Inventory

    It is shown that Henry's law constants can be determined experimentally by comparing the headspace content of target compounds with that of internal-standard compounds having known constants, interpolating to obtain the unknown constants. Studies were conducted over a range of water temperatures to identify temperature dependence....

  4. Ultra High Strain Rate Nanoindentation Testing.

    PubMed

    Sudharshan Phani, Pardhasaradhi; Oliver, Warren Carl

    2017-06-17

    Strain rate dependence of indentation hardness has been widely used to study time-dependent plasticity. However, the currently available techniques limit the range of strain rates that can be achieved during indentation testing. Recent advances in electronics have enabled nanomechanical measurements with very low noise levels (sub-nanometer) at fast time constants (20 µs) and high data acquisition rates (100 kHz). These capabilities open the door to a wide range of ultra-fast nanomechanical testing, for instance, indentation testing at very high strain rates. With an accurate dynamic model and an instrument with fast time constants, step-load tests can be performed which give access to indentation strain rates approaching ballistic levels (i.e., 4000 s⁻¹). A novel indentation-based testing technique involving a combination of step-load and constant-load-and-hold tests is presented that enables measurement of the strain rate dependence of hardness spanning over seven orders of magnitude in strain rate. A simple analysis is used to calculate the equivalent uniaxial response from indentation data and compared to conventional uniaxial data for commercial purity aluminum. Excellent agreement is found between the indentation and uniaxial data over several orders of magnitude of strain rate.
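    The quantities involved in such an analysis can be sketched with the usual definitions: the indentation strain rate (dh/dt)/h and an equivalent flow stress from hardness via a Tabor-like constraint factor of about 3. The displacement history, load, and ideal Berkovich area function below are assumptions for illustration, not the paper's calibration.

```python
# Sketch: indentation strain rate and an equivalent uniaxial flow stress from
# displacement-time data of a step-load test. Uses the common definitions
# strain_rate = (dh/dt)/h and sigma_flow ~ H / C with a constraint factor C ~ 3;
# the paper's "simple analysis" may differ in detail.
import numpy as np

t = np.linspace(1e-4, 5e-3, 200)          # s, hypothetical early-time window after the load step
h = 200e-9 * (t / t[0])**0.3               # m, hypothetical displacement history
P = 10e-3                                  # N, constant applied load after the step

h_dot = np.gradient(h, t)
strain_rate = h_dot / h                    # 1/s, indentation strain rate
A_contact = 24.5 * h**2                    # ideal Berkovich area function (assumption)
H = P / A_contact                          # Pa, hardness
sigma_eq = H / 3.0                         # Pa, equivalent flow stress via constraint factor

print(f"strain rate spans {strain_rate.max():.1e} down to {strain_rate.min():.1e} 1/s")
print(f"hardness at end of window: {H[-1]/1e9:.2f} GPa -> sigma_eq ~ {sigma_eq[-1]/1e9:.2f} GPa")
```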

  5. Temperature effect on stress concentration around circular hole in a composite material specimen representative of X-29A forward-swept wing aircraft

    NASA Technical Reports Server (NTRS)

    Yeh, Hsien-Yang

    1988-01-01

    The theory of anisotropic elasticity was used to evaluate the anisotropic stress concentration factors of a composite laminated plate containing a small circular hole. This advanced composite was used to manufacture the X-29A forward-swept wing. It was found that, for a composite material, the anisotropic stress concentration factor is no longer a constant, and that the locations of the maximum tangential stress points can shift when the fiber orientation is changed with respect to the loading axis. The analysis showed that, through the lamination process, the stress concentration factor can be reduced drastically, and therefore the structural performance can be improved. Both the mixture rule approach and the constant strain approach were used to calculate the stress concentration factor at room temperature. The results predicted by the mixture rule approach deviated from the experimental data by about twenty percent. However, the results predicted by the constant strain approach matched the test data very well. This shows the importance of the in-plane shear effect on the evaluation of the stress concentration factor for the X-29A composite plate.
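    For context, the classical orthotropic result for an infinite plate with a circular hole loaded along the fiber direction is K_T = 1 + sqrt(2(sqrt(E1/E2) − ν12) + E1/G12), with E1 obtainable from a rule of mixtures. The sketch below evaluates this with assumed ply properties (not X-29A data); laminating toward quasi-isotropy pulls K_T back toward the isotropic value of 3, consistent with the reduction noted above.

```python
# Sketch: classical orthotropic stress concentration factor for a circular hole in an
# infinite plate loaded along the fiber (1) direction,
#   K_T = 1 + sqrt( 2*(sqrt(E1/E2) - nu12) + E1/G12 ),
# with E1 from a simple rule of mixtures. All property values are illustrative, not X-29A data.
import math

# Hypothetical carbon/epoxy constituents
Ef, Em = 230e9, 3.5e9        # Pa, fiber and matrix moduli
Vf = 0.6                     # fiber volume fraction
E1 = Vf * Ef + (1 - Vf) * Em # rule-of-mixtures longitudinal modulus

E2, G12, nu12 = 9.0e9, 5.0e9, 0.30   # Pa, Pa, - (assumed ply properties)

K_T = 1.0 + math.sqrt(2.0 * (math.sqrt(E1 / E2) - nu12) + E1 / G12)
K_iso = 3.0                  # isotropic plate value for comparison
print(f"E1 = {E1/1e9:.0f} GPa, orthotropic K_T = {K_T:.2f} (isotropic plate: {K_iso})")
```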

  6. Prediction of Metabolite Concentrations, Rate Constants and Post-Translational Regulation Using Maximum Entropy-Based Simulations with Application to Central Metabolism of Neurospora crassa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, William; Zucker, Jeremy; Baxter, Douglas

    We report the application of a recently proposed approach for modeling biological systems using a maximum entropy production rate principle in lieu of having in vivo rate constants. The method is applied in four steps: (1) a new ODE-based optimization approach based on Marcelin's 1910 mass action equation is used to obtain the maximum entropy distribution, (2) the predicted metabolite concentrations are compared to those generally expected from experiment using a loss function from which post-translational regulation of enzymes is inferred, (3) the system is re-optimized with the inferred regulation, from which rate constants are determined from the metabolite concentrations and reaction fluxes, and finally (4) a full ODE-based, mass action simulation with rate parameters and allosteric regulation is obtained. From the last step, the power characteristics and resistance of each reaction can be determined. The method is applied to the central metabolism of Neurospora crassa, and the flow of material through the three competing pathways of upper glycolysis, the non-oxidative pentose phosphate pathway, and the oxidative pentose phosphate pathway is evaluated as a function of the NADP/NADPH ratio. It is predicted that regulation of phosphofructokinase (PFK) and flow through the pentose phosphate pathway are essential for preventing an extreme level of fructose 1,6-bisphosphate accumulation. Such an extreme level of fructose 1,6-bisphosphate would otherwise result in a glassy cytoplasm with limited diffusion, dramatically decreasing the entropy and energy production rate and, consequently, biological competitiveness.

  7. Combining coordination of motion actuators with driver steering interaction.

    PubMed

    Tagesson, Kristoffer; Laine, Leo; Jacobson, Bengt

    2015-01-01

    A new method is suggested for coordination of vehicle motion actuators, in which driver feedback and capabilities become natural elements in the prioritization. The method uses a weighted least-squares control allocation formulation, where driver characteristics can be added as virtual force constraints. The approach is particularly suitable for heavy commercial vehicles, which are in general over-actuated. The method is applied, in a specific use case, by running a simulation of a truck applying automatic braking on a split-friction surface. Here the required driver steering angle, to maintain the intended direction, is limited by a constant threshold. This constant is automatically accounted for when balancing actuator usage in the method. Simulation results show that the actual required driver steering angle can be expected to match the set constant well. Furthermore, the stopping distance is strongly affected by this assumed capability of the driver to handle the lateral disturbance, as expected. In general, the capability of the driver to handle disturbances should be estimated in real time, taking the driver's mental state into account; using the method, it then becomes possible to estimate, for example, the stopping distance implied by this capability. When the driver is estimated to be active, the setup even has the potential of shortening the stopping distance compared with currently available systems. The approach is feasible for real-time applications and requires only measurable vehicle quantities for parameterization. Examples of other suitable applications within the scope of the method are electronic stability control, lateral stability control at launch, and optimal cornering arbitration.

  8. Radiation dose calculations for CT scans with tube current modulation using the approach to equilibrium function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xinhua; Zhang, Da; Liu, Bob, E-mail: bliu7@mgh.harvard.edu

    2014-11-01

    Purpose: The approach to equilibrium function has been used previously to calculate the radiation dose to a shift-invariant medium undergoing CT scans with constant tube current [Li, Zhang, and Liu, Med. Phys. 39, 5347–5352 (2012)]. The authors have adapted this method to CT scans with tube current modulation (TCM). Methods: For a scan with variable tube current, the scan range was divided into multiple subscan ranges, each with a nearly constant tube current. Then the dose calculation algorithm presented previously was applied. For a clinical CT scan series that presented tube current per slice, the authors adopted an efficient approach that computed the longitudinal dose distribution for one scan length equal to the slice thickness, whose center was at z = 0. The cumulative dose at a specific point was a summation of the contributions from all slices and the overscan. Results: The dose calculations performed for a total of four constant and variable tube current distributions agreed with the published results of Dixon and Boone [Med. Phys. 40, 111920 (14pp.) (2013)]. For an abdomen/pelvis scan of an anthropomorphic phantom (model ATOM 701-B, CIRS, Inc., VA) on a GE Lightspeed Pro 16 scanner with 120 kV, N × T = 20 mm, pitch = 1.375, z axis current modulation (auto mA), and angular current modulation (smart mA), dose measurements were performed using two lines of optically stimulated luminescence dosimeters, one of which was placed near the phantom center and the other on the surface. Dose calculations were performed on the central and peripheral axes of a cylinder containing water, whose cross-sectional mass was about equal to that of the ATOM phantom in its abdominal region, and the results agreed with the measurements within 28.4%. Conclusions: The described method provides an effective approach that takes into account subject size, scan length, and constant or variable tube current to evaluate CT dose to a shift-invariant medium. For a clinical CT scan, dose calculations may be performed with a water-containing cylinder whose cross-sectional mass is equal to that of the subject. This method has the potential to substantially improve evaluations of patient dose from clinical CT scans, compared to CTDIvol, size-specific dose estimate (SSDE), or the dose evaluated for a TCM scan with a constant tube current equal to the average tube current of the TCM scan.
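
    As an illustration of the summation described above, the sketch below superposes a per-slice longitudinal dose-spread kernel scaled by the tube current of each slice. The kernel shape, slice spacing, and TCM profile are placeholders for illustration only; they are not the functions or values used in the paper.

```python
import numpy as np

def cumulative_dose(z, slice_centers, tube_current, kernel):
    """Cumulative dose at positions z from a TCM scan, built as a superposition
    of per-slice contributions, each scaled by that slice's tube current.
    `kernel(dz)` is the longitudinal dose-spread function of a single slice
    (per unit tube current); its exact form would come from the
    approach-to-equilibrium formalism and is a placeholder here."""
    z = np.asarray(z, dtype=float)
    dose = np.zeros_like(z)
    for zc, mA in zip(slice_centers, tube_current):
        dose += mA * kernel(z - zc)
    return dose

# Illustrative placeholder kernel: a 20 mm primary-beam core plus an
# exponentially decaying scatter tail (NOT the kernel used in the paper).
def toy_kernel(dz, width=20.0, tail=100.0):
    primary = (np.abs(dz) <= width / 2).astype(float)
    scatter = 0.3 * np.exp(-np.abs(dz) / tail)
    return primary + scatter

slices = np.arange(0.0, 400.0, 20.0)          # slice centers, mm (hypothetical)
mA = 100.0 + 80.0 * np.sin(slices / 60.0)     # hypothetical TCM profile
z = np.linspace(-100.0, 500.0, 601)
profile = cumulative_dose(z, slices, mA, toy_kernel)
```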

  9. Automatic Exposure Control Systems Designed to Maintain Constant Image Noise: Effects on Computed Tomography Dose and Noise Relative to Clinically Accepted Technique Charts

    PubMed Central

    Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; Kofler, James M.; McCollough, Cynthia H.

    2015-01-01

    Objective To compare computed tomography dose and noise arising from use of an automatic exposure control (AEC) system designed to maintain constant image noise as patient size varies with clinically accepted technique charts and AEC systems designed to vary image noise. Materials and Methods A model was developed to describe tube current modulation as a function of patient thickness. Relative dose and noise values were calculated as patient width varied for AEC settings designed to yield constant or variable noise levels and were compared to empirically derived values used by our clinical practice. Phantom experiments were performed in which tube current was measured as a function of thickness using a constant-noise-based AEC system and the results were compared with clinical technique charts. Results For 12-, 20-, 28-, 44-, and 50-cm patient widths, the requirement of constant noise across patient size yielded relative doses of 5%, 14%, 38%, 260%, and 549% and relative noises of 435%, 267%, 163%, 61%, and 42%, respectively, as compared with our clinically used technique chart settings at each respective width. Experimental measurements showed that a constant noise–based AEC system yielded 175% relative noise for a 30-cm phantom and 206% relative dose for a 40-cm phantom compared with our clinical technique chart. Conclusions Automatic exposure control systems that prescribe constant noise as patient size varies can yield excessive noise in small patients and excessive dose in obese patients compared with clinically accepted technique charts. Use of noise-level technique charts and tube current limits can mitigate these effects. PMID:25938214

  10. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment

    PubMed Central

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A.

    2008-01-01

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric-titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH) and dimethylsulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the Electrostatically Driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, MD simulations are run with the AMBER force field and the Generalized-Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethyl amine and propyl amine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach, and the titration curve in water calculated using the MD-based approach, have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach, which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the non-aqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models. PMID:16509748

  11. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    PubMed

    Makowska, Joanna; Bagińska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyński, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach, which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the nonaqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models.

  12. Motional timescale predictions by molecular dynamics simulations: case study using proline and hydroxyproline sidechain dynamics.

    PubMed

    Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M

    2014-02-01

    We propose a new approach to force field optimization which aims at reproducing dynamics characteristics using biomolecular MD simulations, in addition to improved prediction of motionally averaged structural properties available from experiment. As the source of experimental data for the dynamics fittings, we use 13C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow determination of the correlation times of both overall molecular and intramolecular motions. For the structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with relatively simple cyclic structures and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid-search and simplexed MD simulations identified a large number of parameter sets which fit the experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements compared to the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach for force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Copyright © 2013 Wiley Periodicals, Inc.

  13. Development of a self-consistent free-form approach for studying the three-dimensional morphology of a thin film

    NASA Astrophysics Data System (ADS)

    Kozhevnikov, Igor V.; Peverini, Luca; Ziegler, Eric

    2012-03-01

    A method capable of extracting the depth distribution of the dielectric constant of a thin film deposited on a substrate and the three power spectral density (PSD) functions characterizing its roughness is presented. It is based on the concurrent analysis of x-ray reflectivity and scattering measurements obtained at different glancing angles of the probe beam, so that the effect of roughness is taken into account during reconstruction of the dielectric constant profile; likewise, the latter is taken into account when determining the PSD functions describing the film roughness. This approach uses an iterative numerical procedure that demonstrated rapid convergence for the overall data set, leading to a precise description of the three-dimensional morphology of a film. In the case of a tungsten thin film deposited by dc-magnetron sputtering onto a silicon substrate and characterized under vacuum, the analysis of the x-ray data showed the tungsten density to vary with depth from 95% of the bulk density at the top of the film to about 80% near the substrate, where the presence of an interlayer, estimated to be 0.7 nm thick, was evidenced. The latter may be due to diffusion and/or implantation of tungsten atoms into the silicon substrate. In the reconstruction of the depth profile, the resolution (minimum feature size correctly reconstructed) was estimated to be of the order of 0.4-0.5 nm. The depth distribution of the dielectric constant was shown to affect the roughness conformity coefficient extracted from the measured x-ray scattering distributions, while the deposition process increased the film roughness at high spatial frequency as compared to the virgin substrate. In contrast, the roughness showed only a weak influence on the extracted dielectric constant depth profile, as the sample used in our particular experiment was extremely smooth.

  14. Kinetic Analysis for the Multistep Profiles of Organic Reactions: Significance of the Conformational Entropy on the Rate Constants of the Claisen Rearrangement.

    PubMed

    Sumiya, Yosuke; Nagahata, Yutaka; Komatsuzaki, Tamiki; Taketsugu, Tetsuya; Maeda, Satoshi

    2015-12-03

    The significance of kinetic analysis as a tool for understanding the reactivity and selectivity of organic reactions has recently been recognized. However, conventional simulation approaches that solve rate equations numerically are not amenable to multistep reaction profiles consisting of fast and slow elementary steps. Herein, we present an efficient and robust approach for evaluating the overall rate constants of multistep reactions via the recursive contraction of the rate equations to give the overall rate constants for the products and byproducts. This new method was applied to the Claisen rearrangement of allyl vinyl ether, as well as a substituted allyl vinyl ether. Notably, the profiles of these reactions contained 23 and 84 local minima, and 66 and 278 transition states, respectively. The overall rate constant for the Claisen rearrangement of allyl vinyl ether was consistent with the experimental value. The selectivity of the Claisen rearrangement reaction has also been assessed using a substituted allyl vinyl ether. The results of this study showed that the conformational entropy in these flexible chain molecules had a substantial impact on the overall rate constants. This new method could therefore be used to estimate the overall rate constants of various other organic reactions involving flexible molecules.
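
    The paper's recursive contraction algorithm is not reproduced here, but the idea of contracting out a fast intermediate can be illustrated on the smallest possible network. The sketch below treats A ⇌ I → P with hypothetical first-order rate constants and checks the contracted value k = k1·k2/(k-1 + k2) against the slowest nonzero relaxation rate of the full linear system.

```python
import numpy as np

# Minimal illustration of contracting out a fast intermediate I in A <-> I -> P.
# A steady state in I gives the familiar overall rate constant
# k_AP = k1*k2 / (km1 + k2); the same number can be read off numerically
# from the slowest relaxation mode of the full first-order rate matrix.
k1, km1, k2 = 1.0e3, 5.0e5, 2.0e5     # hypothetical rate constants, 1/s

k_contracted = k1 * k2 / (km1 + k2)   # analytic contraction

# Full linear system d[A, I, P]/dt = K @ [A, I, P]
K = np.array([[-k1,          km1, 0.0],
              [ k1, -(km1 + k2), 0.0],
              [0.0,          k2, 0.0]])
eigs = np.linalg.eigvals(K).real
k_numeric = -sorted(eigs)[-2]         # slowest nonzero decay rate

print(k_contracted, k_numeric)        # agree when I relaxes much faster than A
```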

  15. Vitamin D dose response is underestimated by Endocrine Society's Clinical Practice Guideline.

    PubMed

    McKenna, Malachi J; Murray, Barbara F

    2013-06-01

    The recommended daily intakes of vitamin D according to the recent Clinical Practice Guideline (CPG) of the Endocrine Society are three- to fivefold higher than the Institute of Medicine (IOM) report. We speculated that these differences could be explained by different mathematical approaches to the vitamin D dose response. Studies were selected if the daily dose was ≤2000 IU/day, the duration exceeded 3 months, and 25-hydroxyvitamin D (25OHD) concentrations were measured at baseline and post-therapy. The rate constant was estimated according to the CPG approach. The achieved 25OHD result was estimated according to the following: i) the regression equation approach of the IOM; ii) the regression approach of the Vitamin D Supplementation in Older Subjects (ViDOS) study; and iii) the CPG approach using a rate constant of 2.5 (CPG2.5) and a rate constant of 5.0 (CPG5.0). The difference between the expected and the observed 25OHD result was expressed as a percentage of observed and analyzed for significance against a value of 0% for the four groups. Forty-one studies were analyzed. The mean (95% CI) rate constant was 5.3 (4.4-6.2) nmol/l per 100 IU per day, on average twofold higher than the CPG rate constant. The mean (95% CI) for the difference between the expected and observed expressed as a percentage of observed was as follows: i) IOM, -7 (-16,+2)% (t=1.64, P=0.110); ii) ViDOS, +2 (-8,+12)% (t=0.40, P=0.69); iii) CPG2.5, -21 (-27,-15)% (t=7.2, P<0.0001); and iv) CPG5.0, +3 (-4,+10)% (t=0.91, P=0.366). The CPG 'rule of thumb' should be doubled to 5.0 nmol/l (2.0 ng/ml) per 100 IU per day, adopting a more risk-averse position.
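
    A minimal sketch of the 'rule of thumb' dose response discussed above: the achieved 25OHD is approximated as the baseline plus a fixed increment per 100 IU/day. The baseline and dose below are hypothetical; the two rate constants are the CPG value (2.5) and the roughly doubled value (5.0 nmol/l per 100 IU per day) supported by this analysis.

```python
def expected_25ohd(baseline_nmol_l, dose_iu_per_day, rate_constant=5.0):
    """Predicted achieved 25OHD using the 'rule of thumb' increment of
    `rate_constant` nmol/l per 100 IU/day (2.5 in the CPG; about 5.0 here)."""
    return baseline_nmol_l + rate_constant * dose_iu_per_day / 100.0

# Hypothetical example: baseline 40 nmol/l, 1000 IU/day supplementation.
for k in (2.5, 5.0):
    print(k, expected_25ohd(40.0, 1000.0, rate_constant=k))
# rate constant 2.5 -> 65 nmol/l; rate constant 5.0 -> 90 nmol/l
```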

  16. Comparing Teacher-Directed and Computer-Assisted Constant Time Delay for Teaching Functional Sight Words to Students with Moderate Intellectual Disability

    ERIC Educational Resources Information Center

    Coleman, Mari Beth; Hurley, Kevin J.; Cihak, David F.

    2012-01-01

    The purpose of this study was to compare the effectiveness and efficiency of teacher-directed and computer-assisted constant time delay strategies for teaching three students with moderate intellectual disability to read functional sight words. Target words were those found in recipes and were taught via teacher-delivered constant time delay or…

  17. Examination of two methods of describing the thermodynamic properties of oxygen near the critical point

    NASA Technical Reports Server (NTRS)

    Rees, T. H.; Suttles, J. T.

    1972-01-01

    A computer study was conducted to compare the numerical behavior of two approaches to describing the thermodynamic properties of oxygen near the critical point. Data on the relative differences between values of the specific heat at constant pressure (c_p), density, and isotherm and isochor derivatives of the equation of state are presented for selected supercritical pressures at temperatures in the range 100 to 300 K. The results of a more detailed study of the c_p representations afforded by the two methods are also presented.

  18. Heavy quarkonium in a holographic basis

    DOE PAGES

    Li, Yang; Maris, Pieter; Zhao, Xingbo; ...

    2016-05-04

    Here, we study the heavy quarkonium within the basis light-front quantization approach. We implement the one-gluon exchange interaction and a confining potential inspired by light-front holography. We adopt the holographic light-front wavefunction (LFWF) as our basis function and solve the non-perturbative dynamics by diagonalizing the Hamiltonian matrix. We obtain the mass spectrum for charmonium and bottomonium. With the obtained LFWFs, we also compute the decay constants and the charge form factors for selected eigenstates. The results are compared with the experimental measurements and with other established methods.

  19. The Stability and Interfacial Motion of Multi-layer Radial Porous Media and Hele-Shaw Flows

    NASA Astrophysics Data System (ADS)

    Gin, Craig; Daripa, Prabir

    2017-11-01

    In this talk, we will discuss viscous fingering instabilities of multi-layer immiscible porous media flows within the Hele-Shaw model in a radial flow geometry. We study the motion of the interfaces for flows with both constant and variable viscosity fluids. We consider the effects of using a variable injection rate on multi-layer flows. We also present a numerical approach to simulating the interface motion within linear theory using the method of eigenfunction expansion. We compare these results with fully non-linear simulations.

  20. Modeling of chemical reactions in micelle: water-mediated keto-enol interconversion as a case study.

    PubMed

    Marracino, Paolo; Amadei, Andrea; Apollonio, Francesca; d'Inzeo, Guglielmo; Liberti, Micaela; di Crescenzo, Antonello; Fontana, Antonella; Zappacosta, Romina; Aschi, Massimiliano

    2011-06-30

    The effect of a zwitterionic micelle environment on the efficiency of the keto-enol interconversion of 2-phenylacetylthiophene has been investigated by means of a joint application of experimental and theoretical/computational approaches. Results have revealed a reduction of the reaction rate constant if compared with bulk water essentially because of the different solvation conditions experienced by the reactant species, including water molecules, in the micelle environment. The slight inhibiting effect due to the application of a static electric field has also been theoretically investigated and presented.

  1. On the semiclassical treatment of hot nuclear systems

    NASA Astrophysics Data System (ADS)

    Bartel, J.; Brack, M.; Guet, C.; Håkansson, H.-B.

    1984-05-01

    We discuss two different semiclassical approaches for calculating properties of hot nuclei and compare them to Hartree-Fock calculations using the same effective interaction. Good agreement is found for the entropy and the root-mean-square radii as functions of the excitation energy. For a realistic Skyrme force we evaluate the temperature dependence of the free surface, curvature, and constant energy coefficients of the liquid drop model, considering a plane interface of condensed symmetric nuclear matter in thermodynamical equilibrium with a nucleon gas.

  2. Rate constants for proteins binding to substrates with multiple binding sites using a generalized forward flux sampling expression

    NASA Astrophysics Data System (ADS)

    Vijaykumar, Adithya; ten Wolde, Pieter Rein; Bolhuis, Peter G.

    2018-03-01

    To predict the response of a biochemical system, knowledge of the intrinsic and effective rate constants of proteins is crucial. The experimentally accessible effective rate constant for association can be decomposed in a diffusion-limited rate at which proteins come into contact and an intrinsic association rate at which the proteins in contact truly bind. Reversely, when dissociating, bound proteins first separate into a contact pair with an intrinsic dissociation rate, before moving away by diffusion. While microscopic expressions exist that enable the calculation of the intrinsic and effective rate constants by conducting a single rare event simulation of the protein dissociation reaction, these expressions are only valid when the substrate has just one binding site. If the substrate has multiple binding sites, a bound enzyme can, besides dissociating into the bulk, also hop to another binding site. Calculating transition rate constants between multiple states with forward flux sampling requires a generalized rate expression. We present this expression here and use it to derive explicit expressions for all intrinsic and effective rate constants involving binding to multiple states, including rebinding. We illustrate our approach by computing the intrinsic and effective association, dissociation, and hopping rate constants for a system in which a patchy particle model enzyme binds to a substrate with two binding sites. We find that these rate constants increase as a function of the rotational diffusion constant of the particles. The hopping rate constant decreases as a function of the distance between the binding sites. Finally, we find that blocking one of the binding sites enhances both association and dissociation rate constants. Our approach and results are important for understanding and modeling association reactions in enzyme-substrate systems and other patchy particle systems and open the way for large multiscale simulations of such systems.
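
    For the single-site case that this work generalizes, the decomposition of effective rates into a diffusion-limited encounter rate k_D and intrinsic rates k_a (association) and k_d (dissociation) is commonly written as follows; this is the familiar single-site relation, not the generalized multi-state expression derived in the paper:

```latex
% Single-site relations between intrinsic (k_a, k_d) and effective rate
% constants, with k_D the diffusion-limited rate of forming the contact pair:
\[
  k_{\mathrm{on}} = \frac{k_D\,k_a}{k_D + k_a},
  \qquad
  k_{\mathrm{off}} = k_d\,\frac{k_D}{k_D + k_a}.
\]
```

    That is, the effective dissociation rate is the intrinsic rate weighted by the probability that a contact pair escapes to the bulk rather than rebinding.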

  3. In vitro to in vivo extrapolation of biotransformation rates for assessing bioaccumulation of hydrophobic organic chemicals in mammals.

    PubMed

    Lee, Yung-Shan; Lo, Justin C; Otton, S Victoria; Moore, Margo M; Kennedy, Chris J; Gobas, Frank A P C

    2017-07-01

    Incorporating biotransformation in bioaccumulation assessments of hydrophobic chemicals in both aquatic and terrestrial organisms in a simple, rapid, and cost-effective manner is urgently needed to improve bioaccumulation assessments of potentially bioaccumulative substances. One approach to estimate whole-animal biotransformation rate constants is to combine in vitro measurements of hepatic biotransformation kinetics with in vitro to in vivo extrapolation (IVIVE) and bioaccumulation modeling. An established IVIVE modeling approach exists for pharmaceuticals (referred to in the present study as IVIVE-Ph) and has recently been adapted for chemical bioaccumulation assessments in fish. The present study proposes and tests an alternative IVIVE-B technique to support bioaccumulation assessment of hydrophobic chemicals with a log octanol-water partition coefficient (log KOW) ≥ 4 in mammals. The IVIVE-B approach requires fewer physiological and physicochemical parameters than the IVIVE-Ph approach and does not involve interconversions between clearance and rate constants in the extrapolation. Using in vitro depletion rates, the results show that the IVIVE-B and IVIVE-Ph models yield similar estimates of rat whole-organism biotransformation rate constants for hypothetical chemicals with log KOW ≥ 4. The IVIVE-B approach generated in vivo biotransformation rate constants and biomagnification factors (BMFs) for benzo[a]pyrene that are within the range of empirical observations. The proposed IVIVE-B technique may be a useful tool for assessing BMFs of hydrophobic organic chemicals in mammals. Environ Toxicol Chem 2017;36:1934-1946. © 2016 SETAC.

  4. A hybrid-stress finite element approach for stress and vibration analysis in linear anisotropic elasticity

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley; Fly, Gerald W.; Mahadevan, L.

    1987-01-01

    A hybrid stress finite element method is developed for accurate stress and vibration analysis of problems in linear anisotropic elasticity. A modified form of the Hellinger-Reissner principle is formulated for dynamic analysis, and an algorithm for the determination of the anisotropic elastic and compliance constants from experimental data is developed. These schemes were implemented in a finite element program for static and dynamic analysis of linear anisotropic two-dimensional elasticity problems. Specific numerical examples are considered to verify the accuracy of the hybrid stress approach and compare it with that of the standard displacement method, especially for highly anisotropic materials. It is found that the hybrid stress approach gives much better results than the displacement method. Preliminary work on extensions of this method to three-dimensional elasticity is discussed, and the stress shape functions necessary for this extension are included.

  5. All-atom calculation of protein free-energy profiles

    NASA Astrophysics Data System (ADS)

    Orioli, S.; Ianeselli, A.; Spagnolli, G.; Faccioli, P.

    2017-10-01

    The Bias Functional (BF) approach is a variational method which enables one to efficiently generate ensembles of reactive trajectories for complex biomolecular transitions, using ordinary computer clusters. For example, this scheme was applied to simulate in atomistic detail the folding of proteins consisting of several hundreds of amino acids and with experimental folding times of several minutes. A drawback of the BF approach is that it produces trajectories which do not satisfy microscopic reversibility. Consequently, this method cannot be used to directly compute equilibrium observables, such as free energy landscapes or equilibrium constants. In this work, we develop a statistical analysis which permits us to compute the potential of mean force (PMF) along an arbitrary collective coordinate, by exploiting the information contained in the reactive trajectories calculated with the BF approach. We assess the accuracy and computational efficiency of this scheme by comparing its results with the PMF obtained for a small protein by means of plain molecular dynamics.

  6. Filter design for cancellation of baseline-fluctuation in needle EMG recordings.

    PubMed

    Rodríguez-Carreño, I; Malanda-Trigueros, A; Gila-Useros, L; Navallas-Irujo, J; Rodríguez-Falces, J

    2006-01-01

    Appropriate cancellation of the baseline fluctuation (BLF) is an important issue when recording EMG signals, as it may degrade signal quality and distort qualitative and quantitative analysis. We present a novel filter-design approach for automatic cancellation of the BLF based on several signal processing techniques used sequentially. The methodology is to estimate the spectral content of the BLF, and then to use this estimate to design a high-pass FIR filter that cancels the BLF present in the signal. Two merit figures are devised for measuring the degree of BLF present in an EMG record. These figures are used to compare our method with the conventional approach, which naively treats the baseline as a constant potential shift (without any fluctuation). Application of the technique to real and simulated EMG signals shows the superior performance of our approach in terms of both visual inspection and the merit figures.
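
    The cutoff selection in the paper is driven by the estimated BLF spectrum; the sketch below only shows the final filtering step with a fixed, hypothetical cutoff and sampling rate, using standard SciPy routines for a linear-phase high-pass FIR applied with zero-phase (offline) filtering.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 10_000.0        # sampling rate in Hz (hypothetical)
cutoff_hz = 20.0     # high-pass cutoff; in the paper this would follow from
                     # the estimated spectral content of the BLF
numtaps = 501        # odd length so the Type-I FIR can be high-pass

# Linear-phase high-pass FIR; filtfilt gives zero-phase filtering offline.
taps = firwin(numtaps, cutoff_hz, pass_zero=False, fs=fs)

def remove_baseline(emg):
    """Return the EMG record with the slow baseline fluctuation suppressed."""
    return filtfilt(taps, [1.0], emg)

# Toy record: white 'EMG' plus a slow 2 Hz baseline drift.
t = np.arange(0.0, 1.0, 1.0 / fs)
emg = np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 2.0 * t)
clean = remove_baseline(emg)
```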

  7. Concepts, challenges, and successes in modeling thermodynamics of metabolism.

    PubMed

    Cannon, William R

    2014-01-01

    The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.

  8. A comparative study of nano-SiO2 and nano-TiO2 fillers on proton conductivity and dielectric response of a silicotungstic acid-H3PO4-poly(vinyl alcohol) polymer electrolyte.

    PubMed

    Gao, Han; Lian, Keryn

    2014-01-08

    The effects of nano-SiO2 and nano-TiO2 fillers on a thin film silicotungstic acid (SiWA)-H3PO4-poly(vinyl alcohol) (PVA) proton conducting polymer electrolyte were studied and compared with respect to their proton conductivity, environmental stability, and dielectric properties, across a temperature range from 243 to 323 K. Three major effects of these fillers have been identified: (a) barrier effect; (b) intrinsic dielectric constant effect; and (c) water retention effect. Dielectric analyses were used to differentiate these effects on polymer electrolyte-enabled capacitors. Capacitor performance was correlated to electrolyte properties through dielectric constant and dielectric loss spectra. Using a single-ion approach, proton density and proton mobility of each polymer electrolyte were derived as a function of temperature. The results allow us to deconvolute the different contributions to proton conductivity in SiWA-H3PO4-PVA-based electrolytes, especially in terms of the effects of fillers on the dynamic equilibrium of free protons and protonated water in the electrolytes.

  9. Communication, Respect, and Leadership: Interprofessional Collaboration in Hospitals of Rural Ontario.

    PubMed

    Morris, Diane; Matthews, June

    2014-12-01

    Health care professionals are expected to work collaboratively across diverse settings. In rural hospitals, these professionals face different challenges from their urban colleagues; however, little is known about interprofessional practice in these settings. Eleven health care professionals from 2 rural interprofessional teams were interviewed about collaborative practice. The data were analyzed using a constant comparative method. Common themes included communication, respect, leadership, benefits of interprofessional teams, and the assets and challenges of working in small or rural hospitals. Differences between the cases were apparent in how the members conceptualized their teams, models of which were then compared with an "Ideal Interprofessional Team". These results suggest that many experienced health care professionals function well in interprofessional teams; yet, they did not likely receive much education about interprofessional practice in their training. Providing interprofessional education to new practitioners may help them to establish this approach early in their careers and build on it with additional experience. Finally, these findings can be applied to address concerns that have arisen from other reports by exploring innovative ways to attract health professionals to communities in rural, remote, and northern areas, as there is a constant need for dietitians and other health care professionals in these practice settings.

  10. Nuclear binding energy using semi empirical mass formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ankita,, E-mail: ankitagoyal@gmail.com; Suthar, B.

    2016-05-06

    In the present communication, a semi-empirical mass formula based on the liquid drop model is presented. Nuclear binding energies are calculated using the semi-empirical mass formula with the various sets of constants given by different researchers. We compare these calculated values with experimental data, and a comparative study to identify suitable constants is carried out using error plots. The study is extended to find the most suitable constants for reducing the error.
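
    The liquid-drop (Bethe-Weizsäcker) formula referred to above can be evaluated directly; the coefficient set below is one commonly quoted choice and is not necessarily the set compared in this work.

```python
def binding_energy(Z, A, aV=15.75, aS=17.8, aC=0.711, aA=23.7, aP=11.18):
    """Semi-empirical (liquid-drop) binding energy in MeV.
    Coefficient values differ between published fits; these are one common set."""
    N = A - Z
    # Pairing term: +delta for even-even, -delta for odd-odd, 0 otherwise.
    if Z % 2 == 0 and N % 2 == 0:
        sign = +1
    elif Z % 2 == 1 and N % 2 == 1:
        sign = -1
    else:
        sign = 0
    delta = sign * aP / A**0.5
    return (aV * A
            - aS * A**(2.0 / 3.0)
            - aC * Z * (Z - 1) / A**(1.0 / 3.0)
            - aA * (A - 2 * Z)**2 / A
            + delta)

# Example: Fe-56 (Z = 26); the experimental binding energy is about 492 MeV.
print(binding_energy(26, 56))
```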

  11. Single-shot characterization of enzymatic reaction constants Km and kcat by an acoustic-driven, bubble-based fast micromixer.

    PubMed

    Xie, Yuliang; Ahmed, Daniel; Lapsley, Michael Ian; Lin, Sz-Chin Steven; Nawaz, Ahmad Ahsan; Wang, Lin; Huang, Tony Jun

    2012-09-04

    In this work we present an acoustofluidic approach for rapid, single-shot characterization of the enzymatic reaction constants K_m and k_cat. The acoustofluidic design involves a bubble anchored in a horseshoe structure which can be stimulated by a piezoelectric transducer to generate vortices in the fluid. The enzyme and substrate can thus be mixed rapidly, within 100 ms, by the vortices to yield the product. The enzymatic reaction constants K_m and k_cat can then be obtained from the reaction rate curves for different concentrations of substrate while holding the enzyme concentration constant. We studied the enzymatic reaction for β-galactosidase and its substrate (resorufin-β-D-galactopyranoside) and found K_m and k_cat to be 333 ± 130 μM and 64 ± 8 s^-1, respectively, which are in agreement with published data. Our approach is valuable for studying the kinetics of high-speed enzymatic reactions and other chemical reactions.
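
    Once initial rates are available at several substrate concentrations (with the enzyme concentration held constant), K_m and k_cat follow from a Michaelis-Menten fit. The sketch below uses synthetic data and a hypothetical enzyme concentration; it is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Hypothetical initial-rate data: substrate concentrations (uM) and rates (uM/s).
S = np.array([25, 50, 100, 200, 400, 800, 1600], dtype=float)
E_total = 0.01                      # total enzyme concentration, uM (assumed)
true_Vmax, true_Km = 0.64, 333.0    # values used only to generate toy data
rng = np.random.default_rng(0)
v = michaelis_menten(S, true_Vmax, true_Km) * (1 + 0.03 * rng.standard_normal(S.size))

(Vmax_fit, Km_fit), _ = curve_fit(michaelis_menten, S, v, p0=(1.0, 100.0))
kcat_fit = Vmax_fit / E_total       # k_cat = Vmax / [E]_total
print(Km_fit, kcat_fit)             # ~333 uM and ~64 1/s for these toy numbers
```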

  12. Parameter-free determination of the exchange constant in thin films using magnonic patterning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langer, M.; Wagner, K.; Fassbender, J.

    2016-03-07

    An all-electrical method is presented to determine the exchange constant of magnetic thin films using ferromagnetic resonance. For films of 20 nm thickness and below, the determination of the exchange constant A, a fundamental magnetic quantity, is anything but straightforward. The most common methods are based on the characterization of perpendicular standing spin-waves. These approaches are however challenging, due to (i) very high energies and (ii) rather small intensities in this thickness regime. In the presented approach, surface patterning is applied to a permalloy (Ni80Fe20) film and a Co2Fe0.4Mn0.6Si Heusler compound. Acting as a magnonic crystal, such structures enable the coupling of backward volume spin-waves to the uniform mode. Subsequent ferromagnetic resonance measurements give access to the spin-wave spectra free of unquantifiable parameters and, thus, to the exchange constant A with high accuracy.

  13. Exact combinatorial approach to finite coagulating systems

    NASA Astrophysics Data System (ADS)

    Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr

    2018-02-01

    This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
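
    For reference, the mean-field Smoluchowski equations with a constant kernel, against which the exact combinatorial results are compared, can be integrated numerically and checked against their well-known monodisperse solution c_k(t) = c0·τ^(k-1)/(1+τ)^(k+1) with τ = K·c0·t/2. The kernel value, truncation size, and time below are arbitrary illustration choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 1.0          # constant aggregation kernel
kmax = 60        # truncation of the cluster-size distribution
c0 = 1.0         # initial monomer concentration (monodisperse start)

def smoluchowski(t, c):
    """Truncated mean-field Smoluchowski equations, constant kernel."""
    dc = np.zeros_like(c)
    N = c.sum()
    for k in range(1, kmax + 1):
        gain = 0.5 * K * sum(c[i - 1] * c[k - i - 1] for i in range(1, k))
        dc[k - 1] = gain - K * c[k - 1] * N
    return dc

c_init = np.zeros(kmax)
c_init[0] = c0
sol = solve_ivp(smoluchowski, (0.0, 5.0), c_init, t_eval=[5.0], rtol=1e-8)

# Known mean-field result for the constant kernel and a monodisperse start.
k = np.arange(1, kmax + 1)
tau = K * c0 * 5.0 / 2.0
analytic = c0 * tau ** (k - 1) / (1 + tau) ** (k + 1)
print(np.max(np.abs(sol.y[:, -1] - analytic)))   # small, up to truncation error
```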

  14. Rethink, Reform, Reenter: An Entrepreneurial Approach to Prison Programming.

    PubMed

    Keena, Linda; Simmons, Chris

    2015-07-01

    The purpose of this article was to present a description and first-stage evaluation of the impact of the Ice House Entrepreneurship Program on the learning experience of participating prerelease inmates at a Mississippi maximum-security prison and their perception of the transfer of skills learned in the program into securing employment upon reentry. The Ice House Entrepreneurship Program is a 12-week program facilitated by volunteer university professors for inmates in a prerelease unit of a maximum-security prison in Mississippi. Participants' perspectives were examined through content analysis of inmates' answers to the program's Reflection and Response Assignments and interviews. The analyses were conducted according to the constant comparative method. Findings reveal the emergence of eight life lessons and suggest that this is a promising approach to prison programming for prerelease inmates. This study discusses three approaches to better prepare inmates for a mindset change. The rethink, reform, and reenter approaches help break the traditional cycle of release, reoffend, and return. © The Author(s) 2014.

  15. Rapid temperature jump by infrared diode laser irradiation for patch-clamp studies.

    PubMed

    Yao, Jing; Liu, Beiying; Qin, Feng

    2009-05-06

    Several thermal TRP ion channels have recently been identified. These channels are directly gated by temperature, but the mechanisms have remained elusive. Studies of their temperature gating have been impeded by the lack of methods for rapid alteration of temperature in live cells; as a result, only measurements of steady-state properties have been possible. To solve the problem, we have developed an optical approach that uses recently available infrared diode lasers as heat sources. By restricting laser irradiation around a single cell, our approach can produce constant temperature jumps of over 50 degrees C on submillisecond timescales. Experiments with several heat-gated ion channels (TRPV1-3) show its applicability for rapid temperature perturbation in both single cells and membrane patches. Compared with other laser heating approaches, such as those based on Raman shifting of the Nd:YAG fundamentals, our approach has the advantage of being cost effective and applicable to live cells while providing adequate resolution for time-resolved detection of channel activation.

  16. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922

  17. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach.

    PubMed

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M; El Fakhri, Georges

    2013-10-01

    Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method.
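
    The objective in these papers is a penalized Poisson log-likelihood and the preconditioner is the parameter-to-sensitivity ratio described above; neither is reproduced here. The sketch below only illustrates, on a generic badly scaled symmetric positive-definite linear system, why a diagonal (Jacobi-style) preconditioner can sharply reduce the number of conjugate-gradient iterations.

```python
import numpy as np

def pcg(A, b, M_diag, tol=1e-8, max_iter=500):
    """Conjugate gradient for A x = b with a diagonal preconditioner M_diag
    (an approximation to diag(A)); returns the solution and iteration count."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = r / M_diag
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = r / M_diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Badly scaled SPD test problem: diagonal preconditioning helps substantially.
rng = np.random.default_rng(1)
n = 200
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.diag(np.logspace(0, 4, n))   # SPD, widely spread diagonal
b = rng.standard_normal(n)

x_plain, it_plain = pcg(A, b, np.ones(n))         # no preconditioning
x_prec, it_prec = pcg(A, b, np.diag(A))           # Jacobi preconditioner
print(it_plain, it_prec)                          # far fewer iterations with M
```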

  18. The effects of varied versus constant high-, medium-, and low-preference stimuli on performance.

    PubMed

    Wine, Byron; Wilder, David A

    2009-01-01

    The purpose of the current study was to compare the delivery of varied versus constant high-, medium-, and low-preference stimuli on performance of 2 adults on a computer-based task in an analogue employment setting. For both participants, constant delivery of the high-preference stimulus produced the greatest increases in performance over baseline; the varied presentation produced performance comparable to constant delivery of medium-preference stimuli. Results are discussed in terms of their implications for the selection and delivery of stimuli as part of employee performance-improvement programs in the field of organizational behavior management.

  19. The Conformal Factor and the Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Giddings, Steven B.

    The issue of the conformal factor in quantum gravity is examined for Lorentzian signature spacetimes. In Euclidean signature, the “wrong” sign of the conformal action makes the path integral undefined, but in Lorentzian signature this sign is tied to the instability of gravity and once this is accounted for the path integral should be well-defined. In this approach it is not obvious that the Baum-Hawking-Coleman mechanism for suppression of the cosmological constant functions. It is conceivable that since the multiuniverse system exhibits an instability for positive cosmological constant, the dynamics should force the system to zero cosmological constant.

  20. Design of high-linear CMOS circuit using a constant transconductance method for gamma-ray spectroscopy system

    NASA Astrophysics Data System (ADS)

    Jung, I. I.; Lee, J. H.; Lee, C. S.; Choi, Y.-W.

    2011-02-01

    We propose a novel circuit to be applied to the front-end integrated circuits of gamma-ray spectroscopy systems. Our circuit is designed as a type of current conveyor (ICON) employing a constant-gm (transconductance) method which can significantly improve the linearity of the amplified signals by using a large time constant and the time-invariant characteristics of an amplifier. The constant-gm behavior is obtained by a feedback control which keeps the transconductance of the input transistor constant. To verify the performance of the proposed circuit, the time constant variations with the channel resistances are simulated with the TSMC 0.18 μm transistor parameters using HSPICE and then compared with those of a conventional ICON. As a result, the proposed ICON shows only 0.02% output linearity variation and 0.19% time constant variation for input amplitudes up to 100 mV. These are significantly smaller values than a conventional ICON's 1.39% and 19.43%, respectively, under the same conditions.

  1. An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach

    NASA Astrophysics Data System (ADS)

    Chu, A.; Ogata, Y.; Katsura, K.

    2010-12-01

    For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and showed substantial improvements in model fitting compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the complete region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a more recently introduced approach that allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data of Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the MLE results obtained with zone divisions to those obtained with an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.
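
    The space-time and hierarchical formulations are not reproduced here; the sketch below evaluates only the standard temporal ETAS conditional intensity with spatially constant parameters, which is the quantity that HIST-ETAS generalizes by letting the parameters vary with location. The catalogue and parameter values are hypothetical.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, M0):
    """Temporal ETAS conditional intensity at time t with constant parameters:
    lambda(t) = mu + sum_i K * exp(alpha*(M_i - M0)) / (t - t_i + c)**p,
    summed over past events. HIST-ETAS lets mu, K, alpha, c, p vary in space."""
    past = event_times < t
    dt = t - event_times[past]
    aftershock = K * np.exp(alpha * (event_mags[past] - M0)) / (dt + c) ** p
    return mu + aftershock.sum()

# Hypothetical catalogue and parameter values (illustration only).
times = np.array([1.0, 3.5, 3.6, 10.0])    # days
mags = np.array([6.1, 5.2, 4.8, 5.5])
lam = etas_intensity(12.0, times, mags,
                     mu=0.2, K=0.05, alpha=1.6, c=0.01, p=1.1, M0=4.5)
print(lam)
```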

  2. Monte Carlo simulation on the effect of different approaches to thalassaemia on gene frequency.

    PubMed

    Habibzadeh, F; Yadollahie, M

    2006-01-01

    We used computer simulation to determine variation in gene, heterozygous and homozygous frequencies induced by 4 different approaches to thalassaemia. These were: supportive therapy only; treat homozygous patients with a hypothetical modality phenotypically only; abort all homozygous fetuses; and prevent marriage between gene carriers. Gene frequency becomes constant with the second or the fourth strategy, and falls over time with the first or the third strategy. Heterozygous frequency varies in parallel with gene frequency. Using the first strategy, homozygous frequency falls over time; with the second strategy it becomes constant; and with the third and fourth strategies it falls to zero after the first generation. No matter which strategy is used, the population gene frequency, in the worst case, will remain constant over time.
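
    The deterministic recursion behind two of the limiting cases can be written down directly (this is a textbook sketch, not the paper's Monte Carlo model): under random mating, complete removal of homozygotes from reproduction gives q' = q/(1+q) per generation, whereas fully effective phenotypic treatment leaves q unchanged.

```python
def allele_frequency_trajectory(q0, generations, homozygotes_reproduce):
    """Deterministic allele-frequency recursion under random mating.
    If affected homozygotes do not reproduce (supportive care only, or
    termination of homozygous pregnancies), q' = q/(1+q); if they reproduce
    normally (effective phenotypic treatment), q stays constant."""
    q = q0
    traj = [q]
    for _ in range(generations):
        if not homozygotes_reproduce:
            q = q / (1.0 + q)
        traj.append(q)
    return traj

# Hypothetical carrier allele frequency of 5% followed for 10 generations.
print(allele_frequency_trajectory(0.05, 10, homozygotes_reproduce=False)[-1])
print(allele_frequency_trajectory(0.05, 10, homozygotes_reproduce=True)[-1])
```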

  3. Characterization of solution-phase drug-protein interactions by ultrafast affinity extraction.

    PubMed

    Beeram, Sandya R; Zheng, Xiwei; Suh, Kyungah; Hage, David S

    2018-03-03

    A number of tools based on high-performance affinity separations have been developed for studying drug-protein interactions. An example of one recent approach is ultrafast affinity extraction. This method has been employed to examine the free (or non-bound) fractions of drugs and other solutes in simple or complex samples that contain soluble binding agents. These free fractions have also been used to determine the binding constants and rate constants for the interactions of drugs with these soluble agents. This report describes the general principles of ultrafast affinity extraction and the experimental conditions under which it can be used to characterize such interactions. This method will be illustrated by utilizing data that have been obtained when using this approach to measure the binding and dissociation of various drugs with the serum transport proteins human serum albumin and alpha-1-acid glycoprotein. A number of practical factors will be discussed that should be considered in the design and optimization of this approach for use with single-column or multi-column systems. Techniques will also be described for analyzing the resulting data for the determination of free fractions, rate constants and binding constants. In addition, the extension of this method to complex samples, such as clinical specimens, will be considered. Copyright © 2018 Elsevier Inc. All rights reserved.
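
    For simple 1:1 binding with the protein in large excess, the measured free fraction relates to the association constant through the textbook relation f = 1/(1 + Ka·[P]); the sketch below evaluates this with hypothetical numbers and is not the single-column or multi-column analysis described in the report.

```python
def association_constant_from_free_fraction(free_fraction, protein_conc_M):
    """1:1 binding with the protein in excess: f = 1 / (1 + Ka*[P]),
    so Ka = (1 - f) / (f * [P]).  Returns Ka in M^-1."""
    return (1.0 - free_fraction) / (free_fraction * protein_conc_M)

# Hypothetical example: 1% free drug at an albumin concentration of ~600 uM.
print(association_constant_from_free_fraction(0.01, 600e-6))   # ~1.65e5 M^-1
```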

  4. Comparative study of substrate and product binding to the human ABO(H) blood group glycosyltransferases.

    PubMed

    Soya, Naoto; Shoemaker, Glen K; Palcic, Monica M; Klassen, John S

    2009-11-01

    The first comparative thermodynamic study of the human blood group glycosyltransferases, alpha-(1-->3)-N-acetylgalactosaminyltransferase (GTA) and alpha-(1-->3)-galactosyltransferase (GTB), interacting with donor substrates, donor and acceptor analogs, and trisaccharide products in vitro is reported. The binding constants, measured at 24 degrees C with the direct electrospray ionization mass spectrometry (ES-MS) assay, provide new insights into these model GTs and their interactions with substrate and product. Notably, the recombinant forms of GTA and GTB used in this study are shown to exist as homodimers, stabilized by noncovalent interactions at neutral pH. In the absence of divalent metal ion, neither GTA nor GTB exhibits any appreciable affinity for its native donors (UDP-GalNAc, UDP-Gal). Upon introduction of Mn^2+, both donors undergo enzyme-catalyzed hydrolysis in the presence of either GTA or GTB. Hydrolysis of UDP-GalNAc in the presence of GTA proceeds very rapidly under the solution conditions investigated and a binding constant could not be directly measured. In contrast, the rate of hydrolysis of UDP-Gal in the presence of GTB is significantly slower and, utilizing a modified approach to analyze the ES-MS data, a binding constant of 2 × 10^4 M^-1 was established. GTA and GTB bind the donor analogs UDP-GlcNAc and UDP-Glc with affinities similar to those measured for UDP-Gal and UDP-GalNAc (GTB only), suggesting that the native donors and donor analogs bind to GTA and GTB through similar interactions. The binding constant determined for GTA and UDP-GlcNAc (approximately 1 × 10^4 M^-1), therefore, provides an estimate for the binding constant for GTA and UDP-GalNAc. Binding of GTA and GTB with the A and B trisaccharide products was also investigated for the first time. In the absence of UDP and Mn^2+, both GTA and GTB recognize their respective trisaccharide products but with a low affinity of approximately 10^3 M^-1; the presence of UDP and Mn^2+ has no effect on A trisaccharide binding but precludes B-trisaccharide binding.

  5. OpenMEEG: opensource software for quasistatic bioelectromagnetics.

    PubMed

    Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen

    2010-09-06

    Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performances with other competing software packages. We have run a benchmark study in the field of electro- and magneto-encephalography, in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run either with a constant number of mesh nodes or with a constant number of unknowns across methods. Computing times were also compared. We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified into three categories: the linear collocation methods, which run very fast but with low accuracy; the linear collocation methods with the isolated skull approach, for which the accuracy is improved; and OpenMEEG, which clearly outperforms the others. As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it easy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort.
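
    For reference, the two accuracy measures named in this record are commonly defined as below; this is an illustrative sketch in their standard forms, not code taken from OpenMEEG.

      import numpy as np

      # Commonly used definitions of the two accuracy measures named above
      # (an illustrative sketch, not OpenMEEG code).

      def rdm(v_num, v_ana):
          """Relative Difference Measure between numerical and analytical
          forward solutions (0 = identical topography, 2 = worst case)."""
          v_num = np.asarray(v_num, dtype=float)
          v_ana = np.asarray(v_ana, dtype=float)
          return np.linalg.norm(v_num / np.linalg.norm(v_num)
                                - v_ana / np.linalg.norm(v_ana))

      def mag(v_num, v_ana):
          """Magnitude ratio (1 = identical overall amplitude)."""
          return np.linalg.norm(v_num) / np.linalg.norm(v_ana)

      # Synthetic example with potentials at four sensor positions
      analytical = np.array([1.0, 0.5, -0.2, -0.8])
      numerical = np.array([1.05, 0.48, -0.22, -0.85])
      print(rdm(numerical, analytical), mag(numerical, analytical))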

  6. OpenMEEG: opensource software for quasistatic bioelectromagnetics

    PubMed Central

    2010-01-01

    Background Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performances with other competing software packages. Methods We have run a benchmark study in the field of electro- and magneto-encephalography, in order to compare the accuracy of OpenMEEG with other freely distributed forward solvers. We considered spherical models, for which analytical solutions exist, and we designed randomized meshes to assess the variability of the accuracy. Two measures were used to characterize the accuracy: the Relative Difference Measure and the Magnitude ratio. The comparisons were run either with a constant number of mesh nodes or with a constant number of unknowns across methods. Computing times were also compared. Results We observed more pronounced differences in accuracy in electroencephalography than in magnetoencephalography. The methods could be classified into three categories: the linear collocation methods, which run very fast but with low accuracy; the linear collocation methods with the isolated skull approach, for which the accuracy is improved; and OpenMEEG, which clearly outperforms the others. As far as speed is concerned, OpenMEEG is on par with the other methods for a constant number of unknowns, and is hence faster for a prescribed accuracy level. Conclusions This study clearly shows that OpenMEEG represents the state of the art for forward computations. Moreover, our software development strategies have made it easy to use and to integrate with other packages. The bioelectromagnetic research community should therefore be able to benefit from OpenMEEG with a limited development effort. PMID:20819204

  7. Short-term standard litter decomposition across three different ecosystems in middle taiga zone of West Siberia

    NASA Astrophysics Data System (ADS)

    Filippova, Nina V.; Glagolev, Mikhail V.

    2018-03-01

    The method of standard litter (tea) decomposition was implemented to compare decomposition rate constants (k) between different peatland ecosystems and coniferous forests in the middle taiga zone of West Siberia (near Khanty-Mansiysk). The standard protocol of the TeaComposition initiative was used to make the data usable for comparisons among different sites and zonobiomes worldwide. This article summarizes the results of short-term decomposition (3 months) on the local scale. The decomposition rate constants differed significantly between the three ecosystem types: they were higher in the forest than in the bogs, and treed bogs had a lower decomposition constant than Sphagnum lawns. In general, the decomposition rate constants were close to those reported earlier for similar climatic conditions and habitats.
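
    As a minimal sketch of how such a rate constant is typically obtained, assuming single-exponential mass loss m(t) = m0·exp(-kt) as in standard litter-bag analyses (the masses and incubation time below are made-up, not the study's data):

      import math

      # Decomposition rate constant from single-exponential mass loss,
      # m(t) = m0 * exp(-k t); illustrative values only.

      def decomposition_rate(mass_initial, mass_remaining, time_years):
          return -math.log(mass_remaining / mass_initial) / time_years

      # A tea bag incubated for 3 months (0.25 yr) that lost 55% of its initial mass
      k = decomposition_rate(mass_initial=2.0, mass_remaining=0.9, time_years=0.25)
      print(f"k = {k:.2f} yr^-1")   # ~3.2 yr^-1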

  8. Nonadiabatic rate constants for proton transfer and proton-coupled electron transfer reactions in solution: Effects of quadratic term in the vibronic coupling expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soudackov, Alexander V.; Hammes-Schiffer, Sharon

    2015-11-21

    Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency proton donor-acceptor vibrational modes. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term within the cumulant expansion framework may significantly impact the rate constants at high temperatures for proton transfer interfaces with soft proton donor-acceptor modes that are associated with small force constants and weak hydrogen bonds. The effects of the quadratic term may also become significant in these regimes when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant. In this case, however, the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances sampled. The effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysically short proton donor-acceptor distances. Additionally, the rigorous relation between the cumulant expansion and thermal averaging approaches is clarified. In particular, the cumulant expansion rate constant includes effects from dynamical interference between the proton donor-acceptor and solvent motions and becomes equivalent to the thermally averaged rate constant when these dynamical effects are neglected. This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes.
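
    Schematically, the expansion referred to in this record truncates the logarithm of the vibronic coupling V(R) in the proton donor-acceptor distance R after the quadratic term; the symbols below are generic placeholders rather than the authors' notation:

      \ln V(R) \approx \ln V(\bar{R}) - \alpha\,(R-\bar{R}) - \tfrac{1}{2}\,\beta\,(R-\bar{R})^{2},
      \qquad
      V(R) \approx V(\bar{R})\, e^{-\alpha (R-\bar{R}) - \beta (R-\bar{R})^{2}/2},

    with the linear-only form (β = 0) recovering the earlier cumulant-expansion rate constants.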

  9. Gas-phase geometry optimization of biological molecules as a reasonable alternative to a continuum environment description: fact, myth, or fiction?

    PubMed

    Sousa, Sérgio Filipe; Fernandes, Pedro Alexandrino; Ramos, Maria João

    2009-12-31

    Gas-phase optimization of single biological molecules and of small active-site biological models has become a standard approach in first principles computational enzymology. The important role played by the surrounding environment (solvent, enzyme, both) is normally only accounted for through higher-level single point energy calculations performed using a polarizable continuum model (PCM) and an appropriate dielectric constant with the gas-phase-optimized geometries. In this study we analyze this widely used approximation, by comparing gas-phase-optimized geometries with geometries optimized with different PCM approaches (and considering different dielectric constants) for a representative data set of 20 very important biological molecules--the 20 natural amino acids. A total of 323 chemical bonds and 469 angles present in standard amino acid residues were evaluated. The results show that the use of gas-phase-optimized geometries can in fact be quite a reasonable alternative to the use of the more computationally intensive continuum optimizations, providing a good description of bond lengths and angles for typical biological molecules, even for charged amino acids, such as Asp, Glu, Lys, and Arg. This approximation is particularly successful if the protonation state of the biological molecule could be reasonably described in vacuum, a requirement that was already necessary in first principles computational enzymology.

  10. Carbonic acid ionization and the stability of sodium bicarbonate and carbonate ion pairs to 200 °C - A potentiometric and spectrophotometric study

    NASA Astrophysics Data System (ADS)

    Stefánsson, Andri; Bénézeth, Pascale; Schott, Jacques

    2013-11-01

    Carbonic acid ionization and sodium bicarbonate and carbonate ion pair formation constants have been experimentally determined in dilute hydrothermal solutions to 200 °C. Two experimental approaches were applied: potentiometric acid-base titrations at 10-60 °C and spectrophotometric pH measurements using the pH indicators 2-naphthol and 4-nitrophenol at 25-200 °C. At a given temperature, the first and second ionization constants of carbonic acid (K1, K2) and the ion pair formation constants for NaHCO3(aq) and NaCO3^-(aq) were simultaneously fitted to the data. Results of this study compare well with previously determined values of K1 and K2. The NaHCO3(aq) and NaCO3^-(aq) ion pair formation constants vary between 25 and 200 °C, having values of log K = -0.18 to 0.58 and log K = 1.01 to 2.21, respectively. These ion pairs are weak at low temperatures but become increasingly important with increasing temperature under neutral to alkaline conditions in moderately dilute to concentrated NaCl solutions, with NaCO3^-(aq) predominating over CO3^2-(aq) in ⩾0.1 M NaCl solutions at temperatures above 100 °C. The results demonstrate that NaCl cannot be considered as an inert (non-complexing) electrolyte in aqueous solutions containing carbon dioxide at elevated temperatures.
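
    As an illustration of what the two ionization constants control, the sketch below computes carbonate speciation fractions as a function of pH; the pK values are generic 25 °C figures, not the fitted constants of this study, and Na+ ion pairing is ignored here.

      # Carbonate speciation fractions from the first and second ionization
      # constants (generic 25 C values; ion pairing with Na+ neglected).

      def carbonate_fractions(pH, pK1=6.35, pK2=10.33):
          h = 10.0 ** (-pH)
          K1, K2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
          denom = h * h + K1 * h + K1 * K2
          return {"CO2(aq)": h * h / denom,
                  "HCO3-": K1 * h / denom,
                  "CO3^2-": K1 * K2 / denom}

      for pH in (4.0, 8.2, 11.0):
          print(pH, carbonate_fractions(pH))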

  11. Material characterization of structural adhesives in the lap shear mode

    NASA Technical Reports Server (NTRS)

    Sancaktar, E.; Schenck, S. C.

    1983-01-01

    A general method for characterizing structural adhesives in the bonded lap shear mode is proposed. Two approaches, one semiempirical and one theoretical, are used. The semiempirical approach includes Ludwik's and Zhurkov's equations to describe, respectively, the failure stresses in the constant strain rate and constant stress loading modes, with the inclusion of temperature effects. The theoretical approach is used to describe adhesive shear stress-strain behavior with the use of viscoelastic or nonlinear elastic constitutive equations. Two different model adhesives are used in the single lap shear mode with titanium adherends. These adhesives (one of which was developed at NASA Langley Research Center) are currently considered by NASA for possible aerospace applications. Using different model adhesives helps in assessing the generality of the method.
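
    For orientation, the two semiempirical relations named here are commonly quoted in the forms below (generic symbols; the temperature-modified versions used in the report differ in detail): Ludwik's equation for the failure stress under a constant strain rate,

      \tau_{\max} = \tau_{0} + \sigma' \log\!\left(\frac{\dot{\gamma}}{\dot{\gamma}_{0}}\right),

    and Zhurkov's equation for the time to failure under a constant stress,

      t_{f} = t_{0} \exp\!\left(\frac{U_{0} - \gamma\,\sigma}{R\,T}\right).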

  12. From laws of inference to protein folding dynamics.

    PubMed

    Tseng, Chih-Yuan; Yu, Chun-Ping; Lee, H C

    2010-08-01

    Protein folding dynamics is one of the major issues constantly investigated in the study of protein functions. Molecular dynamics (MD) simulation with the replica exchange method (REM) is a commonly used theoretical approach. Yet a trade-off in applying the REM is that the dynamics toward the native configuration seems to be lost in the simulations. In this work, we show that, given REM-MD simulation results, protein folding dynamics can be directly derived from the laws of inference. The applicability of the resulting approach, entropic folding dynamics, is illustrated by investigating a well-studied Trp-cage peptide. Our results are qualitatively comparable with those from other studies. The current study suggests that incorporating the laws of inference and physics brings a comprehensive perspective to exploring protein folding dynamics.

  13. Pressure induced structural phase transition from NaCl-type (B1) to CsCl-type (B2) structure in sodium chloride

    NASA Astrophysics Data System (ADS)

    Jain, Aayushi; Dixit, R. C.

    2018-05-01

    The pressure-induced structural phase transition from the NaCl-type (B1) to the CsCl-type (B2) structure in sodium chloride (NaCl) is presented. An effective interionic interaction potential (EIOP) with long-range Coulomb and van der Waals (vdW) interactions and the short-range repulsive interaction up to second-neighbor ions within the Hafemeister and Flygare approach with modified ionic charge is reported here. The calculated value of the phase transition pressure (Pt) and the magnitude of the discontinuity in volume at the transition pressure are in good agreement with reported data. The variations of the elastic constants and their combinations with pressure follow an ordered behavior. The present approach has also succeeded in predicting the Born and relative stability criteria.

  14. Squarylium-triazine dyad as a highly sensitive photoradical generator for red light.

    PubMed

    Kawamura, Koichi; Schmitt, Julien; Barnet, Maxime; Salmi, Hanene; Ley, Christian; Allonas, Xavier

    2013-09-16

    New dyads based on a squarylium dye and a substituted triazine were synthesized that exhibit an intramolecular photodissociative electron-transfer reaction. The compounds were used as red-light photoradical generators. The photochemical activity of the dyad was compared to that of the corresponding unlinked system (S+T) by determining the rate constant of electron transfer. The efficiency of radical generation from the dyad compared to the unlinked system was demonstrated by measuring the maximum rate of free-radical polymerization of acrylates in film. An excellent relationship between the rate of electron transfer and the rate of polymerization was found, demonstrating the value of this new approach for efficiently producing radicals under red light. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Lattice Boltzmann simulations of heat transfer in fully developed periodic incompressible flows

    NASA Astrophysics Data System (ADS)

    Wang, Zimeng; Shang, Helen; Zhang, Junfeng

    2017-06-01

    Flow and heat transfer in periodic structures are of great interest for many applications. In this paper, we carefully examine the periodic features of fully developed periodic incompressible thermal flows and incorporate them in the lattice Boltzmann method (LBM) for flow and heat transfer simulations. Two numerical approaches, the distribution modification (DM) approach and the source term (ST) approach, are proposed; both can be used for periodic thermal flows with constant wall temperature (CWT) and surface heat flux boundary conditions. However, the DM approach might be more efficient, especially for CWT systems, since the ST approach requires calculating the streamwise temperature gradient at all lattice nodes. Several example simulations are conducted, including flows through flat and wavy channels and flows through a square array of circular cylinders. Results are compared to analytical solutions, previous studies, and our own LBM calculations using different simulation techniques (i.e., the one-module vs. the two-module simulation, and the DM vs. the ST approach) with good agreement. These simple yet representative simulations demonstrate the accuracy and usefulness of the proposed LBM methods for future thermal periodic flow simulations.

  16. Conjugate Acid-Base Pairs, Free Energy, and the Equilibrium Constant

    ERIC Educational Resources Information Center

    Beach, Darrell H.

    1969-01-01

    Describes a method of calculating the equilibrium constant from free energy data. Values of the equilibrium constants of six Bronsted-Lowry reactions calculated by the author's method and by a conventional textbook method are compared. (LC)
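
    The underlying relation is ΔG° = -RT ln K; a minimal sketch (with an illustrative, made-up ΔG° value):

      import math

      R = 8.314  # J mol^-1 K^-1

      def equilibrium_constant(delta_g_kj_per_mol, temperature_k=298.15):
          # K = exp(-DeltaG / (R T)), with DeltaG supplied in kJ/mol
          return math.exp(-delta_g_kj_per_mol * 1000.0 / (R * temperature_k))

      # Example: a reaction with DeltaG = -27.1 kJ/mol at 25 C
      print(equilibrium_constant(-27.1))   # ~5.6e4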

  17. A Computational Framework for Analyzing Stochasticity in Gene Expression

    PubMed Central

    Sherman, Marc S.; Cohen, Barak A.

    2014-01-01

    Stochastic fluctuations in gene expression give rise to distributions of protein levels across cell populations. Despite a mounting number of theoretical models explaining stochasticity in protein expression, we lack a robust, efficient, assumption-free approach for inferring the molecular mechanisms that underlie the shape of protein distributions. Here we propose a method for inferring sets of biochemical rate constants that govern chromatin modification, transcription, translation, and RNA and protein degradation from stochasticity in protein expression. We asked whether the rates of these underlying processes can be estimated accurately from protein expression distributions, in the absence of any limiting assumptions. To do this, we (1) derived analytical solutions for the first four moments of the protein distribution, (2) found that these four moments completely capture the shape of protein distributions, and (3) developed an efficient algorithm for inferring gene expression rate constants from the moments of protein distributions. Using this algorithm we find that most protein distributions are consistent with a large number of different biochemical rate constant sets. Despite this degeneracy, the solution space of rate constants almost always informs on underlying mechanism. For example, we distinguish between regimes where transcriptional bursting occurs from regimes reflecting constitutive transcript production. Our method agrees with the current standard approach, and in the restrictive regime where the standard method operates, also identifies rate constants not previously obtainable. Even without making any assumptions we obtain estimates of individual biochemical rate constants, or meaningful ratios of rate constants, in 91% of tested cases. In some cases our method identified all of the underlying rate constants. The framework developed here will be a powerful tool for deducing the contributions of particular molecular mechanisms to specific patterns of gene expression. PMID:24811315
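
    As a sketch of the summary statistics this framework operates on (not the inference algorithm itself), the first four moments of a protein-level distribution can be computed as follows, here from synthetic data:

      import numpy as np
      from scipy import stats

      # First four moments of a (synthetic) protein-level distribution; these are
      # the inputs to the moment-based inference described above, not the
      # rate-constant inference itself.

      rng = np.random.default_rng(0)
      protein_levels = rng.gamma(shape=2.0, scale=150.0, size=10_000)

      moments = {
          "mean": np.mean(protein_levels),
          "variance": np.var(protein_levels),
          "skewness": stats.skew(protein_levels),
          "kurtosis": stats.kurtosis(protein_levels),  # excess kurtosis
      }
      print(moments)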

  18. Structural Influence of Dynamics of Bottom Loads

    DTIC Science & Technology

    2014-02-10

    using the Numerette research craft, are underway. Early analytic research on slamming was done by von Karman [5] using a momentum approach, and by...pressure q(x,t) as two constant pressures, qi and qj, traveling at a constant speed c. Using the Euler-Bernoulli beam assumptions the governing

  19. An analytical approach to the external force-free motion of pendulums on surfaces of constant curvature

    NASA Astrophysics Data System (ADS)

    Rubio, Rafael M.; Salamanca, Juan J.

    2018-07-01

    The dynamics of the external-force-free motion of pendulums on surfaces of constant Gaussian curvature is addressed for the case in which the pivot moves along a geodesic, and the Lagrangian of the system is obtained. As an application, the study of elastic and quantum pendulums becomes possible.

  20. ESTIMATION OF MICROBIAL REDUCTIVE TRANSFORMATION RATES FOR CHLORINATED BENZENES AND PHENOLS USING A QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIP APPROACH

    EPA Science Inventory

    A set of literature data was used to derive several quantitative structure-activity relationships (QSARs) to predict the rate constants for the microbial reductive dehalogenation of chlorinated aromatics. Dechlorination rate constants for 25 chloroaromatics were corrected for th...

  1. Material and energy recovery in integrated waste management systems: a life-cycle costing approach.

    PubMed

    Massarutto, Antonio; de Carli, Alessandro; Graffi, Matteo

    2011-01-01

    A critical assumption of studies that comparatively assess waste management options concerns the constant average cost for selective collection regardless of the source separation level (SSL) reached, and the neglect of the mass constraint. The present study compares alternative waste management scenarios through the development of a desktop model that tries to remove the above assumption. Several alternative scenarios based on different combinations of energy and materials recovery are applied to two imaginary areas modelled to represent a typical Northern Italian setting. External costs and benefits implied by the scenarios are also considered. Scenarios are compared on the basis of the full cost for treating the total waste generated in the area. The model investigates the factors that influence the relative convenience of alternative scenarios. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Electron attachment to CF3 and CF3Br at temperatures up to 890 K: Experimental test of the kinetic modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuman, Nicholas S.; Miller, Thomas M.; Viggiano, Albert A.

    Thermal rate constants and product branching fractions for electron attachment to CF3Br and the CF3 radical have been measured over the temperature range 300-890 K, the upper limit being restricted by thermal decomposition of CF3Br. Both measurements were made in Flowing Afterglow Langmuir Probe apparatuses; the CF3Br measurement was made using standard techniques, and the CF3 measurement using the Variable Electron and Neutral Density Attachment Mass Spectrometry technique. Attachment to CF3Br proceeds exclusively by the dissociative channel yielding Br^-, with a rate constant increasing from 1.1 × 10^-8 cm^3 s^-1 at 300 K to 5.3 × 10^-8 cm^3 s^-1 at 890 K, somewhat lower than previous data at temperatures up to 777 K. CF3 attachment proceeds through competition between associative attachment yielding CF3^- and dissociative attachment yielding F^-. Prior data up to 600 K showed the rate constant monotonically increasing, with the partial rate constant of the dissociative channel following Arrhenius behavior; however, extrapolation of the data using a recently proposed kinetic modeling approach predicted the rate constant to turn over at higher temperatures, despite being only ~5% of the collision rate. The current data agree well with the previous kinetic modeling extrapolation, providing a demonstration of the predictive capabilities of the approach.

  3. Evaluation of shoulder function in clavicular fracture patients after six surgical procedures based on a network meta-analysis.

    PubMed

    Huang, Shou-Guo; Chen, Bo; Lv, Dong; Zhang, Yong; Nie, Feng-Feng; Li, Wei; Lv, Yao; Zhao, Huan-Li; Liu, Hong-Mei

    2017-01-01

    Purpose Using a network meta-analysis approach, our study aims to develop a ranking of six surgical procedures, that is, Plate, titanium elastic nail (TEN), tension band wire (TBW), hook plate (HP), reconstruction plate (RP) and Knowles pin, by comparing post-surgery Constant shoulder scores in patients with clavicular fracture (CF). Methods A comprehensive search of electronic scientific literature databases was performed to retrieve publications investigating surgical procedures in CF with stringent eligibility criteria, and clinical experimental studies of high quality and relevance to our area of interest were selected for network meta-analysis. Statistical analyses were conducted using Stata 12.0. Results A total of 19 studies that met our inclusion criteria were eventually enrolled in our network meta-analysis, representing 1164 patients who had undergone surgical procedures for CF (TEN group = 240; Plate group = 164; TBW group = 180; RP group = 168; HP group = 245; Knowles pin group = 167). The network meta-analysis results revealed that RP significantly improved the Constant shoulder score in patients with CF when compared with TEN, and the post-operative Constant shoulder scores in patients with CF after Plate, TBW, HP, Knowles pin and TEN were similar, with no statistically significant differences. The relative treatment ranking based on the predictive probabilities of Constant shoulder scores after surgery revealed that the surface under the cumulative ranking curve (SUCRA) value was highest for RP. Conclusion The current network meta-analysis suggests that RP may be the optimum surgical treatment among the six interventions for patients with CF, and it can improve the shoulder score of patients with CF. Implications for Rehabilitation RP improves shoulder joint function after the surgical procedure. RP achieves stability with minimal complications after surgery. RP may be the optimum surgical treatment for rehabilitation of patients with CF.

  4. Relativistic effects on the NMR parameters of Si, Ge, Sn, and Pb alkynyl compounds: Scalar versus spin-orbit effects

    NASA Astrophysics Data System (ADS)

    Demissie, Taye B.

    2017-11-01

    The NMR chemical shifts and indirect spin-spin coupling constants of 12 molecules containing 29Si, 73Ge, 119Sn, and 207Pb [X(CCMe)4, Me2X(CCMe)2, and Me3XCCH] are presented. The results are obtained from non-relativistic as well as two- and four-component relativistic density functional theory (DFT) calculations. The scalar and spin-orbit relativistic contributions as well as the total relativistic corrections are determined. The main relativistic effect in these molecules is not due to spin-orbit coupling but rather to the scalar relativistic contraction of the s-shells. The correlation between the calculated and experimental indirect spin-spin coupling constants showed that the four-component relativistic density functional theory (DFT) approach using the Perdew's hybrid scheme exchange-correlation functional (PBE0; using the Perdew-Burke-Ernzerhof exchange and correlation functionals) gives results in good agreement with experimental values. The indirect spin-spin coupling constants calculated using the spin-orbit zeroth order regular approximation together with the hybrid PBE0 functional and the specially designed J-coupling (JCPL) basis sets are in good agreement with the results obtained from the four-component relativistic calculations. For the coupling constants involving the heavy atoms, the relativistic corrections are of the same order of magnitude compared to the non-relativistically calculated results. Based on the comparisons of the calculated results with available experimental values, the best results for all the chemical shifts and non-existing indirect spin-spin coupling constants for all the molecules are reported, hoping that these accurate results will be used to benchmark future DFT calculations. The present study also demonstrates that the four-component relativistic DFT method has reached a level of maturity that makes it a convenient and accurate tool to calculate indirect spin-spin coupling constants of "large" molecular systems involving heavy atoms.

  5. Motional timescale predictions by molecular dynamics simulations: Case study using proline and hydroxyproline sidechain dynamics

    PubMed Central

    Aliev, Abil E; Kulke, Martin; Khaneja, Harmeet S; Chudasama, Vijay; Sheppard, Tom D; Lanigan, Rachel M

    2014-01-01

    We propose a new approach to force field optimization that aims at reproducing dynamics characteristics in biomolecular MD simulations, in addition to improved prediction of the motionally averaged structural properties available from experiment. As the source of experimental data for the dynamics fittings, we use 13C NMR spin-lattice relaxation times T1 of backbone and sidechain carbons, which allow determination of the correlation times of both overall molecular and intramolecular motions. For the structural fittings, we use motionally averaged experimental values of NMR J couplings. The proline residue and its derivative 4-hydroxyproline, with relatively simple cyclic structure and sidechain dynamics, were chosen for the assessment of the new approach in this work. Initially, grid search and simplexed MD simulations identified a large number of parameter sets that fit the experimental J couplings equally well. Using the Arrhenius-type relationship between the force constant and the correlation time, the available MD data for a series of parameter sets were analyzed to predict the value of the force constant that best reproduces the experimental timescale of the sidechain dynamics. Verification of the new force field (termed AMBER99SB-ILDNP) against NMR J couplings and correlation times showed consistent and significant improvements over the original force field in reproducing both structural and dynamics properties. The results suggest that matching experimental timescales of motions together with motionally averaged characteristics is a valid approach to force field parameter optimization. Such a comprehensive approach is not restricted to cyclic residues and can be extended to other amino acid residues, as well as to the backbone. Proteins 2014; 82:195–215. © 2013 Wiley Periodicals, Inc. PMID:23818175

  6. Measurement of absolute concentrations of individual compounds in metabolite mixtures by gradient-selective time-zero 1H-13C HSQC with two concentration references and fast maximum likelihood reconstruction analysis.

    PubMed

    Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L

    2011-12-15

    Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
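
    A minimal sketch of the two-reference conversion from peak volume to absolute concentration (all reference and metabolite values below are hypothetical):

      # Two-point linear calibration of peak volume against concentration using a
      # low-concentration and a high-concentration internal reference, then
      # inversion for unknown metabolites. Values are hypothetical.

      refs = {"DSS": (0.5e-3, 1.2e4), "MES": (20e-3, 4.9e5)}   # conc (M), peak volume

      (c1, v1), (c2, v2) = refs["DSS"], refs["MES"]
      slope = (v2 - v1) / (c2 - c1)        # peak volume per unit concentration
      intercept = v1 - slope * c1

      def concentration(peak_volume):
          return (peak_volume - intercept) / slope

      for name, vol in {"alanine": 9.8e4, "lactate": 2.3e5}.items():
          print(name, f"{concentration(vol) * 1e3:.2f} mM")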

  7. Quantitative measurement of protein digestion in simulated gastric fluid.

    PubMed

    Herman, Rod A; Korjagin, Valerie A; Schafer, Barry W

    2005-04-01

    The digestibility of novel proteins in simulated gastric fluid is considered to be an indicator of reduced risk of allergenic potential in food, and estimates of digestibility for transgenic proteins expressed in crops are required for making a human-health risk assessment by regulatory authorities. The estimation of first-order rate constants for digestion under conditions of low substrate concentration was explored for two protein substrates (azocoll and DQ-ovalbumin). Data conformed to first-order kinetics, and half-lives were relatively insensitive to significant variations in both substrate and pepsin concentration when high purity pepsin preparations were used. Estimation of digestion efficiency using densitometric measurements of relative protein concentration based on SDS-PAGE corroborated digestion estimates based on measurements of dye or fluorescence release from the labeled substrates. The suitability of first-order rate constants for estimating the efficiency of the pepsin digestion of novel proteins is discussed. Results further support a kinetic approach as appropriate for comparing the digestibility of proteins in simulated gastric fluid.
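
    As a sketch of the kinetic treatment described here, a first-order rate constant and half-life can be extracted from the fraction of substrate remaining over time via ln(f) = -kt; the time points and fractions below are made-up values.

      import numpy as np

      # First-order digestion kinetics: fit k from ln(fraction remaining) vs. time
      # and report the half-life t1/2 = ln(2)/k. Illustrative data only.

      t = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])        # minutes in pepsin
      fraction_remaining = np.array([1.0, 0.78, 0.60, 0.37, 0.08, 0.01])

      k = -np.polyfit(t, np.log(fraction_remaining), 1)[0]  # slope of ln(f) vs. t
      print(f"k = {k:.2f} min^-1, t1/2 = {np.log(2) / k:.2f} min")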

  8. Quantification of root gravitropic response using a constant stimulus feedback system.

    PubMed

    Wolverton, Chris

    2015-01-01

    Numerous software packages now exist for quantifying root growth responses, most of which analyze a time resolved sequence of images ex post facto. However, few allow for the real-time analysis of growth responses. The system in routine use in our lab allows for real-time growth analysis and couples this to positional feedback to control the stimulus experienced by the responding root. This combination allows us to overcome one of the confounding variables in studies of root gravity response. Seedlings are grown on standard petri plates attached to a vertical rotating stage and imaged using infrared illumination. The angle of a particular region of the root is determined by image analysis, compared to the prescribed angle, and any corrections in positioning are made by controlling a stepper motor. The system allows for the long-term stimulation of a root at a constant angle and yields insights into the gravity perception and transduction machinery not possible with other approaches.
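
    Conceptually, the feedback loop amounts to repeatedly measuring the root angle from the image, comparing it with the prescribed angle, and rotating the stage to cancel the difference. The sketch below is only a schematic of that loop; measure_root_angle(), rotate_stage_by() and the timing values are hypothetical placeholders, not the instrument's actual API.

      import time

      PRESCRIBED_ANGLE = 45.0   # degrees from vertical to be maintained
      TOLERANCE = 0.5           # deadband to avoid chattering the stepper motor

      def feedback_loop(measure_root_angle, rotate_stage_by, interval_s=30.0):
          """Hold the imaged root region at PRESCRIBED_ANGLE via stage rotation."""
          while True:
              error = measure_root_angle() - PRESCRIBED_ANGLE
              if abs(error) > TOLERANCE:
                  rotate_stage_by(-error)   # counter-rotate the stage by the error
              time.sleep(interval_s)        # wait for the next image before correcting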

  9. Modeling of monolayer charge-stabilized colloidal crystals with static hexagonal crystal lattice

    NASA Astrophysics Data System (ADS)

    Nagatkin, A. N.; Dyshlovenko, P. E.

    2018-01-01

    The mathematical model of monolayer colloidal crystals of charged hard spheres in a liquid electrolyte is proposed. The particles in the monolayer are arranged into a two-dimensional hexagonal crystal lattice. The model enables finding the elastic constants of the crystals from the stress-strain dependencies. The model is based on the nonlinear Poisson-Boltzmann differential equation. The Poisson-Boltzmann equation is solved numerically by the finite element method for any spatial configuration. The model has five geometrical and electrical parameters. The model is used to study a crystal with particles comparable in size to the Debye length of the electrolyte. The first- and second-order elastic constants are found for a broad range of densities. The model crystal turns out to be stable with respect to small uniform stretching and shearing. It is also demonstrated that the Cauchy relation is not fulfilled in the crystal. This means that a pair effective interaction of any kind is not sufficient to properly model the elasticity of colloids within the one-component approach.

  10. Ab initio calculation of the deprotonation constants of an atomistically defined nanometer-sized, aluminium hydroxide oligomer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wander, Matthew C. F.; Shuford, Kevin L.; Rustad, James R.

    Aluminium possesses significant and diverse chemistry. Numerous compounds have been defined, and the elucidation of their chemistry is of significant geochemical interest. In this paper, a brucite-like, eight-aluminium aqueous cluster is modelled with density functional theory to identify its primary site of deprotonation and the associated pKa constant, using both explicit (a full first solvent shell) and implicit solvent. Two methods for calculating the pKa are compared. We found that a bond density approach is better than a direct energy calculation for ions with large charge and high symmetry. The terminal aluminium atoms have equatorial ligated waters that in solvent have one long O-H bond. This site is more reactive than any of the other protons on the particle. Insights into the experimental crystal structure and Bader's Atoms in Molecules density analysis are presented as routes to reduce the computational time required for the identification of protonation sites.

  11. Amplifiers dedicated for large area SiC photodiodes

    NASA Astrophysics Data System (ADS)

    Doroz, P.; Duk, M.; Korwin-Pawlowski, M. L.; Borecki, M.

    2016-09-01

    Large-area SiC photodiodes find applications in optoelectronic sensors working under special conditions. These conditions include the detection of UV radiation in harsh environments. Moreover, such sensors have to be selective and resistant to unwanted signals. For this purpose, modulation of the light at the source unit and rejection of the constant-current and low-frequency components of the signal at the detector unit are used. A popular modulation frequency in such sensors is 1 kHz. Large-area photodiodes are characterized by a large capacitance and a low shunt resistance that varies with the polarization of the photodiode and can significantly modify the conditions of signal pre-amplification. In this paper two pre-amplifier topologies are analyzed: the transimpedance amplifier and the non-inverting voltage-to-voltage amplifier with negative feedback. The feedback loops of both pre-amplifiers are equipped with elements used for the initial rejection of constant-current and low-frequency signals. Both circuits are analyzed and compared using simulation and experimental approaches.

  12. A simple and efficient shear-flexible plate bending element

    NASA Technical Reports Server (NTRS)

    Chaudhuri, Reaz A.

    1987-01-01

    A shear-flexible triangular element formulation, which utilizes an assumed quadratic displacement potential energy approach and is numerically integrated using Gauss quadrature, is presented. The Reissner/Mindlin hypothesis of constant cross-sectional warping is directly applied to the three-dimensional elasticity theory to obtain a moderately thick-plate theory or constant shear-angle theory (CST), wherein the middle surface is no longer considered to be the reference surface and the two rotations are replaced by the two in-plane displacements as nodal variables. The resulting finite-element possesses 18 degrees of freedom (DOF). Numerical results are obtained for two different numerical integration schemes and a wide range of meshes and span-to-thickness ratios. These, when compared with available exact, series or finite-element solutions, demonstrate accuracy and rapid convergence characteristics of the present element. This is especially true in the case of thin to very thin plates, when the present element, used in conjunction with the reduced integration scheme, outperforms its counterpart, based on discrete Kirchhoff constraint theory (DKT).

  13. Progress in Developing a New Field-theoretical Crossover Equation-of-State

    NASA Technical Reports Server (NTRS)

    Rudnick, Joseph; Barmatz, M.; Zhong, Fang

    2003-01-01

    A new field-theoretical crossover equation-of-state model is being developed. This model of a liquid-gas critical point provides a bridge between the asymptotic equation-of-state behavior close to the transition, obtained by the Guida and Zinn-Justin parametric model [J. Phys. A: Math. Gen. 31, 8103 (1998)], and the expected mean field behavior farther away. The crossover is based on the beta function for the renormalized fourth-order coupling constant and incorporates the correct crossover exponents and critical amplitude ratios in both regimes. A crossover model is now being developed that is consistent with predictions along the critical isochore and along the coexistence curve of the minimal subtraction renormalization approach developed by Dohm and co-workers and recently applied to the O(1) universality class [Phys. Rev. E, 67, 021106 (2003)]. Experimental measurements of the heat capacity at constant volume, isothermal susceptibility, and coexistence curve near the He-3 critical point are being compared to the predictions of this model. The results of these comparisons will be presented.

  14. Theoretical analysis of the structural phase transformation from B3 to B1 in BeO under high pressure

    NASA Astrophysics Data System (ADS)

    Jain, Arvind; Verma, Saligram; Nagarch, R. K.; Shah, S.; Kaurav, Netram

    2018-05-01

    We have investigated the phase transformation and elastic properties of BeO at high pressure by formulating an effective interionic interaction potential. The elastic constants, including the long-range Coulomb and van der Waals (vdW) interactions and the short-range repulsive interaction up to second-neighbor ions within the Hafemeister and Flygare approach, are derived. Assuming that both ions are polarizable, we employed the Slater-Kirkwood variational method to estimate the vdW coefficients; a structural phase transition from the ZnS structure (B3) to the NaCl structure (B1) at 108 GPa has been predicted for BeO. The estimated value of the phase transition pressure (Pt) and the magnitude of the discontinuity in volume at the transition pressure are consistent with the theoretical data. The variations of the elastic constants with pressure follow a systematic trend identical to that observed in other compounds of the ZnS-type structure family.

  15. Calculation of kinetic rate constants from thermodynamic data

    NASA Technical Reports Server (NTRS)

    Marek, C. John

    1995-01-01

    A new scheme for relating the absolute value of the kinetic rate constant k to the thermodynamic constant Kp is developed for gases. In this report the forward and reverse rate constants are individually related to the thermodynamic data. The kinetic rate constants computed from thermodynamics compare well with current kinetic rate constants. The method is self-consistent and does not require extensive rules. It is first demonstrated and calibrated by computing the formation of HBr from H2 and Br2. The method is then applied to other reactions.
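
    The textbook relation that such a scheme builds on is k_f / k_r = K_c at equilibrium, with K_c obtained from Kp for ideal gases; the sketch below shows only that elementary step, not the report's full calibration.

      # Reverse rate constant from a forward rate constant and Kp for an ideal-gas
      # reaction: k_r = k_f / K_c, with K_c = K_p * (R T)^(-delta_n) (K_p in atm).

      R_ATM = 0.08206   # L atm mol^-1 K^-1

      def reverse_rate_constant(k_forward, Kp, delta_n, temperature_k):
          Kc = Kp * (R_ATM * temperature_k) ** (-delta_n)
          return k_forward / Kc

      # Example: H2 + Br2 <-> 2 HBr has delta_n = 0, so Kc = Kp (values illustrative)
      print(reverse_rate_constant(k_forward=1.0e7, Kp=2.0e18, delta_n=0,
                                  temperature_k=1000.0))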

  16. Generating visual flickers for eliciting robust steady-state visual evoked potentials at flexible frequencies using monitor refresh rate.

    PubMed

    Nakanishi, Masaki; Wang, Yijun; Wang, Yu-Te; Mitsukura, Yasue; Jung, Tzyy-Ping

    2014-01-01

    In the study of steady-state visual evoked potentials (SSVEPs), it remains a challenge to present visual flickers at flexible frequencies using monitor refresh rate. For example, in an SSVEP-based brain-computer interface (BCI), it is difficult to present a large number of visual flickers simultaneously on a monitor. This study aims to explore whether or how a newly proposed frequency approximation approach changes signal characteristics of SSVEPs. At 10 Hz and 12 Hz, the SSVEPs elicited using two refresh rates (75 Hz and 120 Hz) were measured separately to represent the approximation and constant-period approaches. This study compared amplitude, signal-to-noise ratio (SNR), phase, latency, scalp distribution, and frequency detection accuracy of SSVEPs elicited using the two approaches. To further prove the efficacy of the approximation approach, this study implemented an eight-target BCI using frequencies from 8-15 Hz. The SSVEPs elicited by the two approaches were found comparable with regard to all parameters except amplitude and SNR of SSVEPs at 12 Hz. The BCI obtained an averaged information transfer rate (ITR) of 95.0 bits/min across 10 subjects with a maximum ITR of 120 bits/min on two subjects, the highest ITR reported in the SSVEP-based BCIs. This study clearly showed that the frequency approximation approach can elicit robust SSVEPs at flexible frequencies using monitor refresh rate and thereby can largely facilitate various SSVEP-related studies in neural engineering and visual neuroscience.
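
    One common way to realize such frequency approximation is to sample a sinusoid at the monitor refresh rate and threshold it into per-frame on/off states; the sketch below illustrates that idea and is not necessarily the exact stimulus code used in this study.

      import numpy as np

      # Approximate a flicker at an arbitrary frequency on a fixed refresh rate by
      # sampling a sinusoid frame-by-frame and thresholding it into on/off states.

      def flicker_frames(freq_hz, refresh_hz=60.0, duration_s=1.0, phase=0.0):
          n_frames = int(round(refresh_hz * duration_s))
          frame_idx = np.arange(n_frames)
          signal = np.sin(2 * np.pi * freq_hz * frame_idx / refresh_hz + phase)
          return (signal > 0).astype(int)   # 1 = stimulus on, 0 = off for that frame

      print(flicker_frames(11.8, refresh_hz=60.0, duration_s=0.5))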

  17. Logistic and Multiple Regression: A Two-Pronged Approach to Accurately Estimate Cost Growth in Major DoD Weapon Systems

    DTIC Science & Technology

    2004-03-01

    Breusch-Pagan test for constant variance of the residuals. Using Microsoft Excel® we calculate a p-value of 0.841237. This high p-value, which is above...our alpha of 0.05, indicates that our residuals indeed pass the Breusch-Pagan test for constant variance. In addition to the assumption tests, we...Wilk Test for Normality – Support (Reduced) Model (OLS) Finally, we perform a Breusch-Pagan test for constant variance of the residuals. Using

  18. A kinetic study of jack-bean urease denaturation by a new dithiocarbamate bismuth compound

    NASA Astrophysics Data System (ADS)

    Menezes, D. C.; Borges, E.; Torres, M. F.; Braga, J. P.

    2012-10-01

    A kinetic study concerning the enzymatic inhibitory effect of a new bismuth dithiocarbamate complex on jack-bean urease is reported. A neural network approach is used to solve the ill-posed inverse problem arising from the numerical treatment of the subject. A reaction mechanism for the urease denaturation process is proposed, and the rate constants, relaxation time constants, equilibrium constants, activation Gibbs free energies for each reaction step and Gibbs free energies for the transition species are determined.

  19. Comparison of methods for estimating the attributable risk in the context of survival analysis.

    PubMed

    Gassama, Malamine; Bénichou, Jacques; Dartois, Laureen; Thiébaut, Anne C M

    2017-01-23

    The attributable risk (AR) measures the proportion of disease cases that can be attributed to an exposure in the population. Several definitions and estimation methods have been proposed for survival data. Using simulations, we compared four methods for estimating the AR defined in terms of survival functions: two nonparametric methods based on Kaplan-Meier's estimator, one semiparametric based on Cox's model, and one parametric based on the piecewise constant hazards model, as well as one simpler method based on the estimated exposure prevalence at baseline and Cox's model hazard ratio. We considered a fixed binary exposure with varying exposure probabilities and strengths of association, and generated event times from a proportional hazards model with constant or monotonic (decreasing or increasing) Weibull baseline hazard, as well as from a nonproportional hazards model. We simulated 1,000 independent samples of size 1,000 or 10,000. The methods were compared in terms of mean bias, mean estimated standard error, empirical standard deviation and 95% confidence interval coverage probability at four equally spaced time points. Under proportional hazards, all five methods yielded unbiased results regardless of sample size. Nonparametric methods displayed greater variability than the other approaches. All methods showed satisfactory coverage except for the nonparametric methods, especially at the end of follow-up with a sample size of 1,000. With nonproportional hazards, the nonparametric methods yielded results similar to those under proportional hazards, whereas the semiparametric and parametric approaches, which both relied on the proportional hazards assumption, performed poorly. These methods were applied to estimate the AR of breast cancer due to menopausal hormone therapy in 38,359 women of the E3N cohort. In practice, our study suggests using the semiparametric or parametric approaches to estimate AR as a function of time in cohort studies if the proportional hazards assumption appears appropriate.
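
    A minimal sketch of the survival-based definition compared in this study, AR(t) = [F(t) - F0(t)] / F(t) with F = 1 - S, where S is the survival function in the whole population and S0 the counterfactual survival had no one been exposed; the curves below are made-up inputs, and estimating them (Kaplan-Meier, Cox, piecewise constant hazards) is the part the study actually compares.

      import numpy as np

      # Attributable risk over time from two survival curves (illustrative values).
      time_points = np.array([2.0, 5.0, 8.0, 10.0])             # years of follow-up
      S_pop = np.array([0.980, 0.950, 0.910, 0.880])            # observed population survival
      S_unexposed = np.array([0.985, 0.962, 0.928, 0.905])      # counterfactual, exposure removed

      AR_t = ((1 - S_pop) - (1 - S_unexposed)) / (1 - S_pop)
      for t, ar in zip(time_points, AR_t):
          print(f"t = {t:4.1f} y  AR = {ar:.2f}")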

  20. Computational Modeling of Piezoelectric Foams

    NASA Astrophysics Data System (ADS)

    Challagulla, K. S.; Venkatesh, T. A.

    2013-02-01

    Piezoelectric materials, by virtue of their unique electromechanical characteristics, have been recognized for their potential utility in many applications as sensors and actuators. However, the sensing or actuating functionality of monolithic piezoelectric materials is generally limited. The composite approach to piezoelectric materials provides a unique opportunity to access a new design space with optimal mechanical and coupled characteristics. The properties of monolithic piezoelectric materials can be enhanced via the additive approach, by adding two or more constituents to create several types of piezoelectric composites, or via the subtractive approach, by introducing controlled porosity in the matrix materials to create porous piezoelectric materials. Such porous piezoelectrics can be tailored to demonstrate improved signal-to-noise ratio, impedance matching, and sensitivity, and thus they can be optimized for applications such as hydrophone devices. This article captures key results from recent developments in the field of computational modeling of novel piezoelectric foam structures. It is demonstrated that the fundamental elastic, dielectric, and piezoelectric properties of piezoelectric foam are strongly dependent on the internal structure of the foams and the material volume fraction. The highest piezoelectric coupling constants and the highest acoustic impedance are obtained in the [3-3] interconnect-free piezoelectric foam structures, while the corresponding figures of merit for the [3-1] type long-porous structure are marginally higher. Among the [3-3] type foam structures, the sparsely packed foam structures (with longer and thicker interconnects) display higher coupling constants and acoustic impedance than close-packed foam structures (with shorter and thinner interconnects). The piezoelectric charge coefficients (dh), the hydrostatic voltage coefficients (gh), and the hydrostatic figures of merit (dh·gh) are observed to be significantly higher for the [3-3] type piezoelectric foam structures than for the [3-1] type long-porous materials, and these can be enhanced significantly by modifying the aspect ratio of the porosity in the foam structures.

  1. Evaluation of the constant potential method in simulating electric double-layer capacitors

    NASA Astrophysics Data System (ADS)

    Wang, Zhenxing; Yang, Yang; Olmsted, David L.; Asta, Mark; Laird, Brian B.

    2014-11-01

    A major challenge in the molecular simulation of electric double layer capacitors (EDLCs) is the choice of an appropriate model for the electrode. Typically, in such simulations the electrode surface is modeled using a uniform fixed charge on each of the electrode atoms, which ignores the electrode response to local charge fluctuations in the electrolyte solution. In this work, we evaluate and compare this Fixed Charge Method (FCM) with the more realistic Constant Potential Method (CPM), [S. K. Reed et al., J. Chem. Phys. 126, 084704 (2007)], in which the electrode charges fluctuate in order to maintain constant electric potential in each electrode. For this comparison, we utilize a simplified LiClO4-acetonitrile/graphite EDLC. At low potential difference (ΔΨ ⩽ 2 V), the two methods yield essentially identical results for ion and solvent density profiles; however, significant differences appear at higher ΔΨ. At ΔΨ ⩾ 4 V, the CPM ion density profiles show significant enhancement (over FCM) of "inner-sphere adsorbed" Li+ ions very close to the electrode surface. The ability of the CPM electrode to respond to local charge fluctuations in the electrolyte is seen to significantly lower the energy (and barrier) for the approach of Li+ ions to the electrode surface.

  2. A Model Study of Zonal Forcing in the Equatorial Stratosphere by Convectively Induced Gravity Waves

    NASA Technical Reports Server (NTRS)

    Alexander, M. J.; Holton, James R.

    1997-01-01

    A two-dimensional cloud-resolving model is used to examine the possible role of gravity waves generated by a simulated tropical squall line in forcing the quasi-biennial oscillation (QBO) of the zonal winds in the equatorial stratosphere. A simulation with constant background stratospheric winds is compared to simulations with background winds characteristic of the westerly and easterly QBO phases, respectively. In all three cases a broad spectrum of both eastward and westward propagating gravity waves is excited. In the constant background wind case the vertical momentum flux is nearly constant with height in the stratosphere, after correction for waves leaving the model domain. In the easterly and westerly shear cases, however, westward and eastward propagating waves, respectively, are strongly damped as they approach their critical levels, owing to the strongly scale-dependent vertical diffusion in the model. The profiles of zonal forcing induced by this wave damping are similar to profiles given by critical level absorption, but displaced slightly downward. The magnitude of the zonal forcing is of order 5 m/s/day. It is estimated that if 2% of the area of the Tropics were occupied by storms of similar magnitude, mesoscale gravity waves could provide nearly 1/4 of the zonal forcing required for the QBO.

  3. Electronic structure of the BaO molecule with dipole moments and ro-vibrational calculations

    NASA Astrophysics Data System (ADS)

    Khatib, Mohamed; Korek, Mahmoud

    2018-03-01

    The twenty-three low-lying electronic states (singlet and triplet) of the BaO molecule have been studied by using an ab initio method. These electronic states have been investigated by using the Complete Active Space Self-Consistent Field (CASSCF) method followed by multi-reference configuration interaction (MRCI + Q) with Davidson correction. The potential energy curves, the internuclear distance Re, the harmonic frequency ωe, the rotational constant Be, the electronic energy with respect to the ground state Te, and the static and transition dipole moments have been investigated. The Einstein spontaneous and induced emission coefficients A21 and B21ω, as well as the spontaneous radiative lifetime τspon, emission wavelength λ21 and oscillator strength f21, have been calculated by using the transition dipole moment between some of the electronic states. The calculation of the eigenvalues Ev, the rotational constant Bv, the centrifugal distortion constant Dv, and the abscissas of the turning points Rmin and Rmax has been done by using the canonical functions approach. A very good agreement is shown by comparing the values of our work to those found in the literature for many electronic states. Eighteen new electronic states have been studied here for the first time.

  4. A modified Poisson-Boltzmann equation applied to protein adsorption.

    PubMed

    Gama, Marlon de Souza; Santos, Mirella Simões; Lima, Eduardo Rocha de Almeida; Tavares, Frederico Wanderley; Barreto, Amaro Gomes Barreto

    2018-01-05

    Ion-exchange chromatography has been widely used as a standard process in the purification and analysis of proteins, based on the electrostatic interaction between the protein and the stationary phase. Over the years, several approaches have been used to improve the thermodynamic description of colloidal particle-surface interaction systems; however, significant gaps remain, specifically in describing the behavior of protein adsorption. Here, we present an improved methodology for predicting the adsorption equilibrium constant by solving the modified Poisson-Boltzmann (PB) equation in bispherical coordinates. By including dispersion interactions between ions and protein, and between ions and surface, the modified PB equation used can describe the Hofmeister effects. We solve the modified Poisson-Boltzmann equation to calculate the protein-surface potential of mean force, treating the system as a spherical colloid-plate system, as a function of process variables. From the potential of mean force, the Henry constants of adsorption, for different proteins and surfaces, are calculated as a function of pH, salt concentration, salt type, and temperature. The obtained Henry constants are compared with experimental data for several isotherms, showing excellent agreement. We have also performed a sensitivity analysis to verify the behavior of different kinds of salts and the Hofmeister effects. Copyright © 2017 Elsevier B.V. All rights reserved.
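
    One common way to turn such a potential of mean force into a Henry adsorption constant is to integrate the Boltzmann-factor excess over the protein-surface separation, K_H = ∫ [exp(-W(h)/kT) - 1] dh; the sketch below shows only that step, with a synthetic PMF rather than a solution of the modified Poisson-Boltzmann equation.

      import numpy as np

      # Henry adsorption constant from a (synthetic) potential of mean force W(h).
      kT = 1.0                                  # energies expressed in units of kT
      h = np.linspace(0.1, 20.0, 2000)          # protein-surface separation (nm)
      W = 4.0 * np.exp(-h / 1.5) - 6.0 * np.exp(-h / 0.8)   # made-up attractive-well PMF

      K_H = np.trapz(np.exp(-W / kT) - 1.0, h)  # units of length (nm) here
      print(f"K_H = {K_H:.2f} nm")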

  5. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  6. Strangeness S =-1 hyperon-nucleon scattering in covariant chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Li, Kai-Wen; Ren, Xiu-Lei; Geng, Li-Sheng; Long, Bingwei

    2016-07-01

    Motivated by the successes of covariant baryon chiral perturbation theory in one-baryon systems and in heavy-light systems, we study the relevance of relativistic effects in hyperon-nucleon interactions with strangeness S = -1. In this exploratory work, we follow the covariant framework developed by Epelbaum and Gegelia to calculate the YN scattering amplitude at leading order. By fitting the five low-energy constants to the experimental data, we find that the cutoff dependence is mitigated compared with the heavy-baryon approach. Nevertheless, the description of the experimental data remains quantitatively similar at leading order.

  7. Sauter-Schwinger pair creation dynamically assisted by a plane wave

    NASA Astrophysics Data System (ADS)

    Torgrimsson, Greger; Schneider, Christian; Schützhold, Ralf

    2018-05-01

    We study electron-positron pair creation by a strong and constant electric field superimposed with a weaker transversal plane wave which is incident perpendicularly (or under some angle). Comparing the fully nonperturbative approach based on the world-line instanton method with a perturbative expansion into powers of the strength of the weaker plane wave, we find good agreement—provided that the latter is carried out to sufficiently high orders. As usual for the dynamically assisted Sauter-Schwinger effect, the additional plane wave induces an exponential enhancement of the pair-creation probability if the combined Keldysh parameter exceeds a certain threshold.

  8. Novel MSVPWM to reduce the inductor current ripple for Z-source inverter in electric vehicle applications.

    PubMed

    Zhang, Qianfan; Dong, Shuai; Xue, Ping; Zhou, Chaowei; Cheng, ShuKang

    2014-01-01

    A novel modified space vector pulse width modulation (MSVPWM) strategy for Z-Source inverter is presented. By rearranging the position of shoot-through states, the frequency of inductor current ripple is kept constant. Compared with existing MSVPWM strategies, the proposed approach can reduce the maximum inductor current ripple. So the volume of Z-source network inductor can be designed smaller, which brings the beneficial effect on the miniaturization of the electric vehicle controller. Theoretical findings in the novel MSVPWM for Z-Source inverter have been verified by experiment results.

  9. Novel MSVPWM to Reduce the Inductor Current Ripple for Z-Source Inverter in Electric Vehicle Applications

    PubMed Central

    Zhang, Qianfan; Dong, Shuai; Xue, Ping; Zhou, Chaowei; Cheng, ShuKang

    2014-01-01

    A novel modified space vector pulse width modulation (MSVPWM) strategy for Z-Source inverter is presented. By rearranging the position of shoot-through states, the frequency of inductor current ripple is kept constant. Compared with existing MSVPWM strategies, the proposed approach can reduce the maximum inductor current ripple. So the volume of Z-source network inductor can be designed smaller, which brings the beneficial effect on the miniaturization of the electric vehicle controller. Theoretical findings in the novel MSVPWM for Z-Source inverter have been verified by experiment results. PMID:24883412

  10. Investigation of two- and three-bond carbon-hydrogen coupling constants in cinnamic acid based compounds.

    PubMed

    Pierens, Gregory K; Venkatachalam, Taracad K; Reutens, David C

    2016-12-01

    Two- and three-bond coupling constants (²J(HC) and ³J(HC)) were determined for a series of 12 substituted cinnamic acids using selective 2D inphase/antiphase (IPAP) heteronuclear single quantum multiple bond correlation (HSQMBC) and 1D proton-coupled ¹³C NMR experiments. The coupling constants from the two methods were compared and found to give very similar values. The results showed coupling constant values ranging from 1.7 to 9.7 Hz and 1.0 to 9.6 Hz for the IPAP-HSQMBC and the direct ¹³C NMR experiments, respectively. The experimental values of the coupling constants were compared with density functional theory (DFT) calculated values and were found to be in good agreement for ³J(HC). However, the DFT method underestimated the ²J(HC) coupling constants. Knowing the limitations of the measurement and calculation of these multibond coupling constants will add confidence to the assignment of conformation or stereochemical aspects of complex molecules like natural products. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Instanton rate constant calculations close to and above the crossover temperature.

    PubMed

    McConnell, Sean; Kästner, Johannes

    2017-11-15

    Canonical instanton theory is known to overestimate the rate constant close to a system-dependent crossover temperature and is inapplicable above that temperature. We compare the accuracy of the reaction rate constants calculated using recent semi-classical rate expressions to those from canonical instanton theory. We show that rate constants calculated purely from solving the stability matrix for the action in degrees of freedom orthogonal to the instanton path are not applicable at arbitrarily low temperatures, and we use two methods to overcome this. Furthermore, as a by-product of the developed methods, we derive a simple correction to canonical instanton theory that can alleviate this known overestimation of rate constants close to the crossover temperature. The combined methods accurately reproduce the rate constants of the canonical theory along the whole temperature range without the spurious overestimation near the crossover temperature. We calculate and compare rate constants for three different reactions: H in the Müller-Brown potential, methylhydroxycarbene → acetaldehyde, and H₂ + OH → H + H₂O. © 2017 Wiley Periodicals, Inc.

  12. Data Analysis and Its Impact on Predicting Schedule & Cost Risk

    DTIC Science & Technology

    2006-03-01

    This report tests the constant-variance assumption of the regression error term by performing the Breusch-Pagan test (Neter et al., 1996). Using Microsoft Excel®, p-values of 0.225678 and 0.121211092 were calculated for the Breusch-Pagan test and compared to an alpha of 0.05, in both cases supporting the assumption of constant variance.
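
    As a minimal sketch of the Breusch-Pagan check for constant error variance described in this record, using synthetic data and the statsmodels implementation rather than the report's Excel workbook (all variable names are illustrative):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)                  # hypothetical cost/schedule driver
    y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 200)  # homoscedastic errors by construction

    X = sm.add_constant(x)                       # design matrix with intercept
    resid = sm.OLS(y, X).fit().resid

    # Breusch-Pagan regresses the squared residuals on the explanatory variables;
    # a large p-value means we fail to reject constant variance.
    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(resid, X)
    verdict = "not rejected" if lm_pvalue > 0.05 else "rejected"
    print(f"LM p-value = {lm_pvalue:.4f} -> constant variance {verdict} at alpha = 0.05")
    ```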

  13. 3j Symbols: To Normalize or Not to Normalize?

    ERIC Educational Resources Information Center

    van Veenendaal, Michel

    2011-01-01

    The systematic use of alternative normalization constants for 3j symbols can lead to a more natural expression of quantities, such as vector products and spherical tensor operators. The redefined coupling constants directly equate tensor products to the inner and outer products without any additional square roots. The approach is extended to…

  14. Efficient Computation of Anharmonic Force Constants via q-space, with Application to Graphene

    NASA Astrophysics Data System (ADS)

    Kornbluth, Mordechai; Marianetti, Chris

    We present a new approach for extracting anharmonic force constants from a sparse sampling of the anharmonic dynamical tensor. We calculate the derivative of the energy with respect to q-space displacements (phonons) and strain, which guarantees the absence of supercell image errors. Central finite differences provide a well-converged quadratic error tail for each derivative, separating the contribution of each anharmonic order. These derivatives populate the anharmonic dynamical tensor in a sparse mesh that bounds the Brillouin Zone, which ensures comprehensive sampling of q-space while exploiting small-cell calculations for efficient, high-throughput computation. This produces a well-converged and precisely-defined dataset, suitable for big-data approaches. We transform this sparsely-sampled anharmonic dynamical tensor to real-space anharmonic force constants that obey full space-group symmetries by construction. Machine-learning techniques identify the range of real-space interactions. We show the entire process executed for graphene, up to and including the fifth-order anharmonic force constants. This method successfully calculates strain-based phonon renormalization in graphene, even under large strains, which solves a major shortcoming of previous potentials.

  15. Quasilinear models through the lens of resolvent analysis

    NASA Astrophysics Data System (ADS)

    McKeon, Beverley; Chini, Greg

    2017-11-01

    Quasilinear (QL) and generalized quasilinear (GQL) analyses, e.g. Marston et al., also variously described as statistical state dynamics models, e.g., Farrell et al., restricted nonlinear models, e.g. Thomas et al., or 2D/3C models, e.g. Gayme et al., have achieved considerable success in recovering the mean velocity profile for a range of turbulent flows. In QL approaches, the portion of the velocity field that can be represented as streamwise constant, i.e. with streamwise wavenumber kx = 0 , is fully resolved, while the streamwise-varying dynamics are linearized about the streamwise-constant field; that is, only those nonlinear interactions that drive the streamwise-constant field are retained, and the non-streamwise constant ``fluctuation-fluctuation'' interactions are ignored. Here, we show how these QL approaches can be reformulated in terms of the closed-loop resolvent analysis of McKeon & Sharma (2010), which enables us to identify reasons for their evident success as well as algorithms for their efficient computation. The support of ONR through Grant No. N00014-17-2307 is gratefully acknowledged.

  16. Impact of uncertainties in inorganic chemical rate constants on tropospheric composition and ozone radiative forcing

    NASA Astrophysics Data System (ADS)

    Newsome, Ben; Evans, Mat

    2017-12-01

    Chemical rate constants determine the composition of the atmosphere and how this composition has changed over time. They are central to our understanding of climate change and air quality degradation. Atmospheric chemistry models, whether online or offline, box, regional or global, use these rate constants. Expert panels evaluate laboratory measurements, making recommendations for the rate constants that should be used. This results in very similar or identical rate constants being used by all models. The inherent uncertainties in these recommendations are, in general, therefore ignored. We explore the impact of these uncertainties on the composition of the troposphere using the GEOS-Chem chemistry transport model. Based on the Jet Propulsion Laboratory (JPL) and International Union of Pure and Applied Chemistry (IUPAC) evaluations we assess the influence of 50 mainly inorganic rate constants and 10 photolysis rates on tropospheric composition through the use of the GEOS-Chem chemistry transport model. We assess the impact on four standard metrics: annual mean tropospheric ozone burden, surface ozone and tropospheric OH concentrations, and tropospheric methane lifetime. Uncertainty in the rate constants for NO2 + OH + M → HNO3 + M and O3 + NO → NO2 + O2 are the two largest sources of uncertainty in these metrics. The absolute magnitude of the change in the metrics is similar if rate constants are increased or decreased by their σ values. We investigate two methods of assessing these uncertainties, addition in quadrature and a Monte Carlo approach, and conclude they give similar outcomes. Combining the uncertainties across the 60 reactions gives overall uncertainties on the annual mean tropospheric ozone burden, surface ozone and tropospheric OH concentrations, and tropospheric methane lifetime of 10, 11, 16 and 16 %, respectively. These are larger than the spread between models in recent model intercomparisons. Remote regions such as the tropics, poles and upper troposphere are most uncertain. This chemical uncertainty is sufficiently large to suggest that rate constant uncertainty should be considered alongside other processes when model results disagree with measurement. Calculations for the pre-industrial simulation allow a tropospheric ozone radiative forcing to be calculated of 0.412 ± 0.062 W m-2. This uncertainty (13 %) is comparable to the inter-model spread in ozone radiative forcing found in previous model-model intercomparison studies where the rate constants used in the models are all identical or very similar. Thus, the uncertainty of tropospheric ozone radiative forcing should be expanded to include this additional source of uncertainty. These rate constant uncertainties are significant and suggest that refinement of supposedly well-known chemical rate constants should be considered alongside other improvements to enhance our understanding of atmospheric processes.
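
    To illustrate the two uncertainty-propagation strategies compared above, addition in quadrature versus Monte Carlo sampling, here is a generic sketch with made-up rate-constant uncertainties and metric sensitivities; it is not the GEOS-Chem analysis itself, but it shows why the two methods give similar outcomes when the response is approximately linear.

    ```python
    import numpy as np

    # Hypothetical 1-sigma fractional uncertainties of a few rate constants and the
    # (assumed linear) fractional response of a metric, e.g. ozone burden, to each.
    sigma_k = np.array([0.10, 0.20, 0.15, 0.30])   # rate-constant uncertainties
    sensitivity = np.array([0.5, 0.2, -0.3, 0.1])  # d(metric)/d(ln k), assumed values

    # (1) Addition in quadrature of the individual metric perturbations.
    quad = np.sqrt(np.sum((sensitivity * sigma_k) ** 2))

    # (2) Monte Carlo: perturb all rate constants jointly and look at the spread.
    rng = np.random.default_rng(1)
    samples = rng.normal(0.0, sigma_k, size=(100_000, sigma_k.size))
    mc = np.std(samples @ sensitivity)

    print(f"quadrature: {quad:.4f}, Monte Carlo: {mc:.4f}")  # agree for a linear response
    ```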

  17. How to derive biological information from the value of the normalization constant in allometric equations.

    PubMed

    Kaitaniemi, Pekka

    2008-04-09

    Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those that are frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within the limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, a normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and only estimated the value of b. Both approaches produced virtually the same model fit with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
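
    A minimal sketch of the two fitting approaches compared above, estimating both a and b versus fixing the exponent a at its theoretical value and estimating only b, on simulated data in the spirit of the artificial datasets described (all numbers are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(42)
    a_true, n = 0.75, 40                            # theoretical exponent, small sample
    env = rng.uniform(0.8, 1.2, n)                  # hypothetical environmental effect on b
    x = rng.uniform(1, 100, n) * rng.normal(1, 0.05, n)  # X with measurement error
    y = env * x**a_true * rng.normal(1, 0.10, n)         # Y = b * X^a with noise

    # Traditional approach: estimate both a and b.
    (b_free, a_free), _ = curve_fit(lambda x, b, a: b * x**a, x, y, p0=(1.0, 0.7))

    # Alternative approach: fix a at its theoretical value, estimate only b.
    (b_fixed,), _ = curve_fit(lambda x, b: b * x**a_true, x, y, p0=(1.0,))

    print(f"free fit: a = {a_free:.3f}, b = {b_free:.3f}")
    print(f"fixed a:  b = {b_fixed:.3f}  (b now carries the environmental signal)")
    ```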

  18. Inflation with a constant rate of roll

    NASA Astrophysics Data System (ADS)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll, defined by φ̈/(Hφ̇), remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to the non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.
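
    For reference, the constant-roll condition discussed above is usually parametrized as follows; this is the common convention in the constant-roll literature, reproduced only as a reading aid.

    ```latex
    % Constant-roll condition: the second slow-roll-type parameter is held fixed,
    % with |\beta| << 1 recovering ordinary slow-roll inflation and a generic
    % \beta giving the "constant-roll" regime studied in the paper.
    \beta \;\equiv\; \frac{\ddot{\phi}}{H\dot{\phi}} \;=\; \mathrm{const}
    ```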

  19. Finite-Temperature Behavior of PdH x Elastic Constants Computed by Direct Molecular Dynamics

    DOE PAGES

    Zhou, X. W.; Heo, T. W.; Wood, B. C.; ...

    2017-05-30

    In this paper, robust time-averaged molecular dynamics has been developed to calculate finite-temperature elastic constants of a single crystal. We find that when the averaging time exceeds a certain threshold, the statistical errors in the calculated elastic constants become very small. We applied this method to compare the elastic constants of Pd and PdH0.6 at representative low (10 K) and high (500 K) temperatures. The values predicted for Pd match reasonably well with ultrasonic experimental data at both temperatures. In contrast, the predicted elastic constants for PdH0.6 only match well with ultrasonic data at 10 K; whereas, at 500 K, the predicted values are significantly lower. We hypothesize that at 500 K, the facile hydrogen diffusion in PdH0.6 alters the speed of sound, resulting in significantly reduced values of predicted elastic constants as compared to the ultrasonic experimental data. Finally, literature mechanical testing experiments seem to support this hypothesis.

  20. Frustration in protein elastic network models

    NASA Astrophysics Data System (ADS)

    Lezon, Timothy; Bahar, Ivet

    2010-03-01

    Elastic network models (ENMs) are widely used for studying the equilibrium dynamics of proteins. The most common approach in ENM analysis is to adopt a uniform force constant or a non-specific distance dependent function to represent the force constant strength. Here we discuss the influence of sequence and structure in determining the effective force constants between residues in ENMs. Using a novel method based on entropy maximization, we optimize the force constants such that they exactly reproduce a subset of experimentally determined pair covariances for a set of proteins. We analyze the optimized force constants in terms of amino acid types, distances, contact order and secondary structure, and we demonstrate that including frustrated interactions in the ENM is essential for accurately reproducing the global modes in the middle of the frequency spectrum.

  1. Species-Specific Thiol-Disulfide Equilibrium Constant: A Tool To Characterize Redox Transitions of Biological Importance.

    PubMed

    Mirzahosseini, Arash; Somlyay, Máté; Noszál, Béla

    2015-08-13

    Microscopic redox equilibrium constants, a new species-specific type of physicochemical parameters, were introduced and determined to quantify thiol-disulfide equilibria of biological significance. The thiol-disulfide redox equilibria of glutathione with cysteamine, cysteine, and homocysteine were approached from both sides, and the equilibrium mixtures were analyzed by quantitative NMR methods to characterize the highly composite, co-dependent acid-base and redox equilibria. The directly obtained, pH-dependent, conditional constants were then decomposed by a new evaluation method, resulting in pH-independent, microscopic redox equilibrium constants for the first time. The 80 different, microscopic redox equilibrium constant values show close correlation with the respective thiolate basicities and provide sound means for the development of potent agents against oxidative stress.

  2. Inflation with a smooth constant-roll to constant-roll era transition

    NASA Astrophysics Data System (ADS)

    Odintsov, S. D.; Oikonomou, V. K.

    2017-07-01

    In this paper, we study canonical scalar field models, with a varying second slow-roll parameter, that allow transitions between constant-roll eras. In the models with two constant-roll eras, it is possible to avoid fine-tunings in the initial conditions of the scalar field. We mainly focus on the stability of the resulting solutions, and we also investigate if these solutions are attractors of the cosmological system. We shall calculate the resulting scalar potential and, by using a numerical approach, we examine the stability and attractor properties of the solutions. As we show, the first constant-roll era is dynamically unstable towards linear perturbations, and the cosmological system is driven by the attractor solution to the final constant-roll era. As we demonstrate, it is possible to have a nearly scale-invariant power spectrum of primordial curvature perturbations in some cases; however, this is strongly model dependent and depends on the rate of the final constant-roll era. Finally, we present, in brief, the essential features of a model that allows oscillations between constant-roll eras.

  3. Towards a Viscous Wall Model for Immersed Boundary Methods

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is their inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. This inefficiency is associated with the use of constant-aspect-ratio Cartesian grid cells, whereas conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow field away from the walls. Different wall-modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually only utilize local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall-modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly more challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.

  4. Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Köcher, S. S.; Institute of Energy and Climate Research; Heydenreich, T.

    Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.

  5. Comparing otoacoustic emissions evoked by chirp transients with constant absorbed sound power and constant incident pressure magnitude.

    PubMed

    Keefe, Douglas H; Feeney, M Patrick; Hunter, Lisa L; Fitzpatrick, Denis F

    2017-01-01

    Properties of transient acoustic stimuli in the human ear canal are contrasted, utilizing measured ear-canal pressures in conjunction with measured acoustic pressure reflectance and admittance. These data are referenced to the tip of a probe snugly inserted into the ear canal. Promising procedures to calibrate across frequency include stimuli with controlled levels of incident pressure magnitude, absorbed sound power, and forward pressure magnitude. An equivalent pressure at the eardrum is calculated from these measured data using a transmission-line model of ear-canal acoustics parameterized by acoustically estimated ear-canal area at the probe tip and length between the probe tip and eardrum. Chirp stimuli with constant incident pressure magnitude and constant absorbed sound power across frequency were generated to elicit transient-evoked otoacoustic emissions (TEOAEs), which were measured in normal-hearing adult ears from 0.7 to 8 kHz. TEOAE stimuli had similar peak-to-peak equivalent sound pressure levels across calibration conditions. Frequency-domain TEOAEs were compared using signal level, signal-to-noise ratio (SNR), coherence synchrony modulus (CSM), group delay, and group spread. Time-domain TEOAEs were compared using SNR, CSM, instantaneous frequency and instantaneous bandwidth. Stimuli with constant incident pressure magnitude or constant absorbed sound power across frequency produce generally similar TEOAEs up to 8 kHz.

  6. Constant-roll tachyon inflation and observational constraints

    NASA Astrophysics Data System (ADS)

    Gao, Qing; Gong, Yungui; Fei, Qin

    2018-05-01

    For the constant-roll tachyon inflation, we derive the analytical expressions for the scalar and tensor power spectra, the scalar and tensor spectral tilts and the tensor-to-scalar ratio to first order in ε1 by using the method of Bessel function approximation. The derived ns-r results are compared with the observations; we find that only the constant-roll inflation with ηH being a constant is consistent with the observations, and the observations constrain the constant-roll inflation to be slow-roll inflation. The tachyon potential is also reconstructed for the constant-roll inflation which is consistent with the observations.

  7. Colossal dielectric and electromechanical responses in self-assembled polymeric nanocomposites

    NASA Astrophysics Data System (ADS)

    Huang, Cheng; Zhang, Q. M.; Li, Jiang Yu; Rabeony, Manese

    2005-10-01

    An electroactive polymer nanocomposite, in which high dielectric constant copper phthalocyanine oligomer (o-CuPc) nanoparticles are incorporated into the block polyurethane (PU) matrix by the combination of "top down" and "bottom up" approaches, was realized. Such an approach enables the nanocomposite to exhibit colossal dielectric and electromechanical responses with very low volume fraction of the high dielectric constant o-CuPc nanofillers (˜3.5%) in the composite. In contrast, a simple blend of o-CuPc and PU composite with much higher o-CuPc content (˜16% of o-CuPc) shows much lower dielectric and electromechanical responses.

  8. A computational method for the Helmholtz equation in unbounded domains based on the minimization of an integral functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciraolo, Giulio, E-mail: g.ciraolo@math.unipa.it; Gargano, Francesco, E-mail: gargano@math.unipa.it; Sciacca, Vincenzo, E-mail: sciacca@math.unipa.it

    2013-08-01

    We study a new approach to the problem of transparent boundary conditions for the Helmholtz equation in unbounded domains. Our approach is based on the minimization of an integral functional arising from a volume integral formulation of the radiation condition. The index of refraction does not need to be constant at infinity and may have some angular dependency as well as perturbations. We prove analytical results on the convergence of the approximate solution. Numerical examples for different shapes of the artificial boundary and for non-constant indexes of refraction will be presented.

  9. Close encounters and collisions of comets with the earth

    NASA Technical Reports Server (NTRS)

    Sekanina, Z.; Yeomans, D. K.

    1984-01-01

    A computer search for earth-approaching comets among those listed in Marsden's (1983) updated orbit catalog has identified 36 cases in which the minimum separation distance was less than 2500 earth radii. A strong representation of short-period comets in the sample is noted, and the constant rate of close-approaching comets over the last 300 years is interpreted to suggest the lack of long-period comets intrinsically fainter than an absolute magnitude of about 11. A comet-earth collision rate derived from the statistics of these close encounters implies an average period of 33-64 million years between any two events. This rate is comparable with the frequency of geologically recent global catastrophes which appear to be associated with extraterrestrial object impacts, such as the Cretaceous-Tertiary extinction 65 million years ago and the late Eocene event 34 million years ago.

  10. Automated Enrichment of Sulfanilamide in Milk Matrices by Utilization of Aptamer-Linked Magnetic Particles.

    PubMed

    Fischer, Christin; Kallinich, Constanze; Klockmann, Sven; Schrader, Jil; Fischer, Markus

    2016-12-07

    The present work demonstrates the first automated enrichment approach for antibiotics in milk using specific DNA aptamers. First, aptamers toward the antibiotic sulfanilamide were selected and characterized regarding their dissociation constants and specificity toward relevant antibiotics via fluorescence assay and LC-MS/MS detection. The enrichment was automated using the KingFisherDuo and compared to a manual approach. To verify the functionality, trapping was carried out in different milk matrices: (i) 0.3% fat milk, (ii) 1.5% fat milk, (iii) 3.5% fat milk, and (iv) 0.3% fat cocoa milk drink. Enrichment factors of up to 8-fold could be achieved. Furthermore, it could be shown that the novel implementation of a magnetic separator increases the reproducibility and reduces the hands-on time from approximately half a day to 30 min.

  11. Comparative study of elastic constants of α-, β- and cubic silicon nitride

    NASA Astrophysics Data System (ADS)

    Yao, Hongzhi; Ouyang, Lizhi; Ching, Wai-Yim

    2003-03-01

    Silicon nitride is an important structural ceramic and dielectric insulator. Recently, the new high-pressure cubic phase of silicon nitride in the spinel structure has attracted a lot of attention.^[1] We have carried out a detailed ab-initio calculation of all independent elastic constants for all three phases of Si_3N4 by using the Vienna Ab-initio Simulation Package (VASP) in both the LDA and GGA approximations. The results for β-Si_3N4 are in reasonable agreement with an experimental measurement on single-crystal samples.^[2] For cubic Si_3N4, the three independent elastic constants are predicted to be C_11 = 504.16 GPa, C_12 = 176.66 GPa, C_44 = 326.65 GPa and a bulk modulus B = 286 GPa. This value is very close to the experimental value of 300 GPa.^[1] All these results will be compared with those obtained by using the OLCAO method based on a localized orbital approach.^[3] [1]. Wai-Yim Ching, Yong-Nian Xu, Julian D. Gale, and Manfred Ruhle, J. Am. Ceram. Soc. 81, 3189 (1998) [2]. R. Vogelgesang, M. Grimsditch, and J. S. Wallace, Appl. Phys. Lett. 76, 8 (2000) [3]. W.Y.Ching, Lizhi Ouyang, and Julian D. Gale, Phys. Rev. B61, 13, (2000)
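
    The quoted bulk modulus follows directly from the predicted cubic elastic constants via the standard relation for cubic crystals:

    ```latex
    % Bulk modulus of a cubic crystal from its elastic constants:
    B = \frac{C_{11} + 2C_{12}}{3}
      = \frac{504.16 + 2\times 176.66}{3}\ \mathrm{GPa}
      \approx 285.8\ \mathrm{GPa},
    % consistent with the value B = 286 GPa quoted in the abstract.
    ```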

  12. Transport and dielectric properties of water and the influence of coarse-graining: Comparing BMW, SPC/E, and TIP3P models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, Daniel; Boresch, Stefan; Steinhauser, Othmar

    Long-term molecular dynamics simulations are used to compare the single particle dipole reorientation time, the diffusion constant, the viscosity, and the frequency-dependent dielectric constant of the coarse-grained big multipole water (BMW) model to two common atomistic three-point water models, SPC/E and TIP3P. In particular, the agreement between the calculated viscosity of BMW and the experimental viscosity of water is satisfactory. We also discuss contradictory values for the static dielectric properties reported in the literature. Employing molecular hydrodynamics, we show that the viscosity can be computed from single particle dynamics, circumventing the slow convergence of the standard approaches. Furthermore, our data indicate that the Kivelson relation connecting single particle and collective reorientation time holds true for all systems investigated. Since simulations with coarse-grained force fields often employ extremely large time steps, we also investigate the influence of time step on dynamical properties. We observe a systematic acceleration of system dynamics when increasing the time step. Carefully monitoring energy/temperature conservation is found to be a sufficient criterion for the reliable calculation of dynamical properties. By contrast, recommended criteria based on the ratio of fluctuations of total vs. kinetic energy are not sensitive enough.

  13. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
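
    The truncation-error argument behind these multipolar estimates can be illustrated symbolically: the average of the potential over a ring of radius ρ expands as v̄(ρ) − v₀ = (ρ²/4)Δv + (ρ⁴/64)Δ²v + ..., and the ring weights are chosen to cancel the higher-order terms. The sketch below derives the weights for a tripolar-like configuration with rings at r and 2r; it demonstrates the principle only and is not the authors' modified (4n + 1)-point derivation.

    ```python
    import sympy as sp

    r, lap, bilap = sp.symbols('r Laplacian Bilaplacian')
    w1, w2 = sp.symbols('w1 w2')

    # Circle-average Taylor expansion of a smooth potential about the center:
    #   vbar(rho) - v0 = (rho**2/4)*Laplacian + (rho**4/64)*Bilaplacian + O(rho**6)
    def ring_diff(rho):
        return (rho**2 / 4) * lap + (rho**4 / 64) * bilap

    expr = sp.expand(w1 * ring_diff(r) + w2 * ring_diff(2 * r))

    # Require the weighted combination to equal the Laplacian exactly while
    # cancelling the 4th-order (Bilaplacian) truncation term.
    sol = sp.solve([sp.Eq(expr.coeff(lap), 1), sp.Eq(expr.coeff(bilap), 0)], [w1, w2])
    print(sol)  # {w1: 16/(3*r**2), w2: -1/(3*r**2)}
    # i.e. Laplacian ~ [16*(vbar(r) - v0) - (vbar(2r) - v0)] / (3*r**2)
    ```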

  14. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-06-10

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected.

  15. Finite element method modeling to assess Laplacian estimates via novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-08-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation has been demonstrated in a range of applications. In our recent work we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts using finite element method modeling. Obtained results suggest that increasing inter-ring distances electrode configurations may decrease the estimation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration the estimation error may be decreased more than two-fold while for the quadripolar configuration more than six-fold decrease is expected.

  16. Analytic assessment of Laplacian estimates via novel variable interring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-08-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation has been demonstrated in a range of applications. In our recent work we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are analytically compared to their constant inter-ring distances counterparts using coefficients of the Taylor series truncation terms. Obtained results suggest that increasing inter-ring distances electrode configurations may decrease the truncation error of the Laplacian estimation resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration the truncation error may be decreased more than two-fold while for the quadripolar more than seven-fold decrease is expected.

  17. A method of inferring collision ratio based on maneuverability of own ship under critical collision conditions

    NASA Astrophysics Data System (ADS)

    You, Youngjun; Rhee, Key-Pyo; Ahn, Kyoungsoo

    2013-06-01

    In constructing a collision avoidance system, it is important to determine the time for starting collision avoidance maneuver. Many researchers have attempted to formulate various indices by applying a range of techniques. Among these indices, collision risk obtained by combining Distance to the Closest Point of Approach (DCPA) and Time to the Closest Point of Approach (TCPA) information with fuzzy theory is mostly used. However, the collision risk has a limit, in that membership functions of DCPA and TCPA are empirically determined. In addition, the collision risk is not able to consider several critical collision conditions where the target ship fails to take appropriate actions. It is therefore necessary to design a new concept based on logical approaches. In this paper, a collision ratio is proposed, which is the expected ratio of unavoidable paths to total paths under suitably characterized operation conditions. Total paths are determined by considering categories such as action space and methodology of avoidance. The International Regulations for Preventing Collisions at Sea (1972) and collision avoidance rules (2001) are considered to solve the slower ship's dilemma. Different methods which are based on a constant speed model and simulated speed model are used to calculate the relative positions between own ship and target ship. In the simulated speed model, fuzzy control is applied to determination of command rudder angle. At various encounter situations, the time histories of the collision ratio based on the simulated speed model are compared with those based on the constant speed model.
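
    DCPA and TCPA, which the fuzzy collision-risk indices mentioned above combine, follow from the relative position and velocity of the two ships under a constant-velocity assumption; the sketch below is standard closest-point-of-approach kinematics, not the paper's simulated speed model, and the example numbers are arbitrary.

    ```python
    import numpy as np

    def cpa(rel_pos, rel_vel):
        """TCPA and DCPA for a constant relative velocity (own ship at the origin).

        rel_pos, rel_vel: 2D vectors of the target ship's position [m] and
        velocity [m/s] relative to own ship.
        """
        rel_pos, rel_vel = np.asarray(rel_pos, float), np.asarray(rel_vel, float)
        v2 = rel_vel @ rel_vel
        tcpa = 0.0 if v2 == 0 else -(rel_pos @ rel_vel) / v2  # time to closest point
        dcpa = np.linalg.norm(rel_pos + rel_vel * tcpa)       # distance at that time
        return tcpa, dcpa

    # Example: target about 2 nm ahead and 1 nm to starboard, closing obliquely.
    tcpa, dcpa = cpa([3704.0, 1852.0], [-5.0, -1.0])
    print(f"TCPA = {tcpa / 60:.1f} min, DCPA = {dcpa:.0f} m")
    ```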

  18. Ab Initio Effective Rovibrational Hamiltonians for Non-Rigid Molecules via Curvilinear VMP2

    NASA Astrophysics Data System (ADS)

    Changala, Bryan; Baraban, Joshua H.

    2017-06-01

    Accurate predictions of spectroscopic constants for non-rigid molecules are particularly challenging for ab initio theory. For all but the smallest systems, ``brute force'' diagonalization of the full rovibrational Hamiltonian is computationally prohibitive, leaving us at the mercy of perturbative approaches. However, standard perturbative techniques, such as second order vibrational perturbation theory (VPT2), are based on the approximation that a molecule makes small amplitude vibrations about a well defined equilibrium structure. Such assumptions are physically inappropriate for non-rigid systems. In this talk, we will describe extensions to curvilinear vibrational Møller-Plesset perturbation theory (VMP2) that account for rotational and rovibrational effects in the molecular Hamiltonian. Through several examples, we will show that this approach provides predictions to nearly microwave accuracy of molecular constants including rotational and centrifugal distortion parameters, Coriolis coupling constants, and anharmonic vibrational and tunneling frequencies.

  19. An extended data mining method for identifying differentially expressed assay-specific signatures in functional genomic studies.

    PubMed

    Rollins, Derrick K; Teh, Ailing

    2010-12-17

    Microarray data sets provide relative expression levels for thousands of genes for a small number, in comparison, of different experimental conditions called assays. Data mining techniques are used to extract specific information of genes as they relate to the assays. The multivariate statistical technique of principal component analysis (PCA) has proven useful in providing effective data mining methods. This article extends the PCA approach of Rollins et al. to rank genes of microarray data sets that are expressed most differently between two biologically different groupings of assays. This method is evaluated on real and simulated data and compared to a current approach on the basis of false discovery rate (FDR) and statistical power (SP), which is the ability to correctly identify important genes. This work developed and evaluated two new test statistics based on PCA and compared them to a popular method that is not PCA based. Both test statistics were found to be effective as evaluated in three case studies: (i) exposing E. coli cells to two different ethanol levels; (ii) application of myostatin to two groups of mice; and (iii) a simulated data study derived from the properties of (ii). The proposed method (PM) effectively identified critical genes in these studies based on comparison with the current method (CM). The simulation study supports higher identification accuracy for PM over CM for both proposed test statistics when the gene variance is constant and for one of the test statistics when the gene variance is non-constant. PM compares quite favorably to CM in terms of lower FDR and much higher SP. Thus, PM can be quite effective in producing accurate signatures from large microarray data sets for differential expression between assays groups identified in a preliminary step of the PCA procedure and is, therefore, recommended for use in these applications.

  20. Morphometric analyses of hominoid crania, probabilities of conspecificity and an approximation of a biological species constant.

    PubMed

    Thackeray, J F; Dykes, S

    2016-02-01

    Thackeray has previously explored the possibility of using a morphometric approach to quantify the "amount" of variation within species and to assess probabilities of conspecificity when two fossil specimens are compared, instead of "pigeon-holing" them into discrete species. In an attempt to obtain a statistical (probabilistic) definition of a species, Thackeray has recognized an approximation of a biological species constant (T=-1.61) based on the log-transformed standard error of the coefficient m (log sem) in regression analysis of cranial and other data from pairs of specimens of conspecific extant species, associated with regression equations of the form y=mx+c where m is the slope and c is the intercept, using measurements of any specimen A (x axis), and any specimen B of the same species (y axis). The log-transformed standard error of the coefficient m (log sem) is a measure of the degree of similarity between pairs of specimens, and in this study shows central tendency around a mean value of -1.61 and standard deviation 0.10 for modern conspecific specimens. In this paper we focus attention on the need to take into account the range of difference in log sem values (Δlog sem or "delta log sem") obtained, firstly, when specimen A (x axis) is compared to B (y axis) and, secondly, when specimen A (y axis) is compared to B (x axis). Thackeray's approach can be refined to focus on high probabilities of conspecificity for pairs of specimens for which log sem is less than -1.61 and for which Δlog sem is less than 0.03. We appeal for the adoption of a concept here called "sigma taxonomy" (as opposed to "alpha taxonomy"), recognizing that boundaries between species are not always well defined. Copyright © 2015 Elsevier GmbH. All rights reserved.
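
    A minimal sketch of the log sem statistic described above, regressing one specimen's measurements on another's and taking the log of the standard error of the slope, on hypothetical craniometric data; the -1.61 and 0.03 thresholds are those quoted in the abstract, while the data and the exact decision rule below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.stats import linregress

    def log_sem(x, y):
        """log10 of the standard error of the slope m in y = m*x + c."""
        return np.log10(linregress(x, y).stderr)

    # Hypothetical cranial measurements (same landmarks) for two specimens.
    rng = np.random.default_rng(7)
    spec_a = rng.uniform(20, 120, 15)
    spec_b = spec_a * rng.normal(1.0, 0.03, 15)  # a conspecific-like pairing

    ab, ba = log_sem(spec_a, spec_b), log_sem(spec_b, spec_a)
    delta = abs(ab - ba)
    conspecific = max(ab, ba) < -1.61 and delta < 0.03
    print(f"log sem A->B = {ab:.2f}, B->A = {ba:.2f}, delta = {delta:.3f}, "
          f"high probability of conspecificity: {conspecific}")
    ```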

  1. Parameter estimation in tree graph metabolic networks.

    PubMed

    Astola, Laura; Stigter, Hans; Gomez Roldan, Maria Victoria; van Eeuwijk, Fred; Hall, Robert D; Groenenboom, Marian; Molenaar, Jaap J

    2016-01-01

    We study the glycosylation processes that convert initially toxic substrates to nutritionally valuable metabolites in the flavonoid biosynthesis pathway of tomato (Solanum lycopersicum) seedlings. To estimate the reaction rates we use ordinary differential equations (ODEs) to model the enzyme kinetics. A popular choice is to use a system of linear ODEs with constant kinetic rates or to use Michaelis-Menten kinetics. In reality, the catalytic rates, which are affected among other factors by kinetic constants and enzyme concentrations, change in time, and this phenomenon cannot be described with the approaches just mentioned. Another problem is that, in general, these kinetic coefficients are not always identifiable. A third problem is that it is not precisely known which enzymes are catalyzing the observed glycosylation processes. With several hundred potential gene candidates, experimental validation using purified target proteins is expensive and time consuming. We aim at reducing this task via mathematical modeling to allow for the pre-selection of the most promising gene candidates. In this article we discuss a fast and relatively simple approach to estimate time-varying kinetic rates, with three favorable properties: firstly, it allows for identifiable estimation of time-dependent parameters in networks with a tree-like structure. Secondly, it is relatively fast compared to usually applied methods that estimate the model derivatives together with the network parameters. Thirdly, by combining the metabolite concentration data with corresponding microarray data, it can help in detecting the genes related to the enzymatic processes. By comparing the estimated time dynamics of the catalytic rates with time series gene expression data we may assess potential candidate genes behind enzymatic reactions. As an example, we show how to apply this method to select prominent glycosyltransferase genes in tomato seedlings.

  2. Adapting Local Features for Face Detection in Thermal Image.

    PubMed

    Ma, Chao; Trung, Ngo Thanh; Uchiyama, Hideaki; Nagahara, Hajime; Shimada, Atsushi; Taniguchi, Rin-Ichiro

    2017-11-27

    A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearances is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring the local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to get cascade classifiers with multiple types of local features. These feature types have different advantages. In this way we enhance the descriptive power of local features. We did a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained by different sets of features. The experiment results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared the face detection performance in realistic scenes using thermal and RGB images, and discussed the results.
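
    A rough sketch of the margin idea described above, comparing neighbouring block means against the central block mean with a tolerance band rather than a hard threshold, is given below; the 3x3 block layout, the margin value, and the encoding are illustrative assumptions rather than the authors' exact feature definition.

    ```python
    import numpy as np

    def mb_lbp_with_margin(patch: np.ndarray, margin: float = 2.0) -> int:
        """Multi-Block-LBP-like code for a 2D patch split into a 3x3 grid of blocks.

        A neighbouring block is encoded as 1 only if its mean exceeds the central
        block's mean by more than `margin`, which makes the code more tolerant to
        the low-contrast noise typical of thermal images (illustrative only).
        """
        h, w = patch.shape
        bh, bw = h // 3, w // 3
        means = np.array([[patch[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                           for j in range(3)] for i in range(3)])
        center = means[1, 1]
        # Clockwise order of the 8 neighbouring blocks around the central block.
        order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
        bits = [1 if means[i, j] > center + margin else 0 for i, j in order]
        return int("".join(map(str, bits)), 2)

    code = mb_lbp_with_margin(np.random.default_rng(0).uniform(28, 36, (24, 24)))
    print(f"MB-LBP-like code: {code:08b}")
    ```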

  3. Interannual variability of crop water footprint

    NASA Astrophysics Data System (ADS)

    Tuninetti, M.; Tamea, S.; Laio, F.; Ridolfi, L.

    2016-12-01

    The crop water footprint, CWF, is a useful tool to investigate the water-food nexus, since it measures the water requirement for crop production. Heterogeneous spatial patterns of climatic conditions and agricultural practices have inspired a flourishing literature on the geographic assessment of CWF, mostly referred to a fixed (time-averaged) period. However, given that both climatic conditions and crop yield may vary substantially over time, the CWF temporal dynamics also need to be addressed. As other studies have done, we base the CWF variability on yield, while keeping the crop evapotranspiration constant over time. As a new contribution, we prove the feasibility of this approach by comparing these CWF estimates with the results obtained with a full model considering variations of crop evapotranspiration: overall, the estimates compare well, showing high coefficients of determination that read 0.98 for wheat, 0.97 for rice, 0.97 for maize, and 0.91 for soybean. From this comparison, we also derive the precision of the method, which is around ±10%, better than the ±30% precision of the model used to evaluate the crop evapotranspiration. Over the period between 1961 and 2013, the CWF of the most cultivated grains has sharply decreased on a global basis (i.e., -68% for wheat, -62% for rice, -66% for maize, and -52% for soybean), mainly driven by enhanced yield values. The higher water use efficiency in crop production implies a reduced virtual displacement of embedded water per ton of traded crop and, as a result, the temporal variability of virtual water trade is different if considering constant or time-varying CWF. The proposed yield-based approach to estimate the CWF variability implies low computational costs and requires limited input data; thus, it represents a promising tool for time-dependent water footprint assessments.
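
    The yield-based variability used above rests on the usual definition of the crop water footprint as crop water use per unit yield; a minimal sketch, assuming seasonal evapotranspiration in mm and yield in t/ha, reads:

    ```python
    def crop_water_footprint(et_mm: float, yield_t_per_ha: float) -> float:
        """Crop water footprint in m3 per tonne.

        The factor 10 converts ET in mm over one hectare into m3/ha
        (1 mm * 1 ha = 10 m3). Holding et_mm fixed and letting the yield vary in
        time reproduces the yield-based CWF variability described in the abstract.
        """
        return 10.0 * et_mm / yield_t_per_ha

    # Illustrative numbers only: a 450 mm seasonal ET with yields of 2 and 4 t/ha.
    print(crop_water_footprint(450.0, 2.0), crop_water_footprint(450.0, 4.0))  # 2250.0 vs 1125.0 m3/t
    ```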

  4. Effect of confinement on anharmonic phonon scattering and thermal conductivity in pristine silicon nanowires

    NASA Astrophysics Data System (ADS)

    Rashid, Zahid; Zhu, Liyan; Li, Wu

    2018-02-01

    The effect of confinement on the anharmonic phonon scattering rates and the consequences thereof on the thermal transport properties in ultrathin silicon nanowires with a diameter of 1-4 nm have been characterized using atomistic simulations and the phonon Boltzmann transport equation. The phonon density of states (PDOS) for ultrathin nanowires approaches a constant value in the vicinity of the Γ point and increases with decreasing diameter, which indicates the increasing importance of the low-frequency phonons as heat carriers. The anharmonic phonon scattering becomes dramatically enhanced with decreasing thickness of the nanowires. In the thinnest nanowire, the scattering rates for phonons above 1 THz are one order of magnitude higher than those in the bulk Si. Below 1 THz, the increase in scattering rates is even much more appreciable. Our numerical calculations revealed that the scattering rates for transverse (longitudinal) acoustic modes follow a √ω (1/√ω) dependence at the low-frequency limit, whereas those for the degenerate flexural modes asymptotically approach a constant value. In addition, the group velocities of phonons are reduced compared with bulk Si except for low-frequency phonons (<1-2 THz depending on the thickness of the nanowires). The increased scattering rates combined with reduced group velocities lead to a severely reduced thermal conductivity contribution from the high-frequency phonons. Although the thermal conductivity contributed by those phonons with low frequencies is instead increased mainly due to the increased PDOS, the total thermal conductivity is still reduced compared to that of the bulk. This work reveals an unexplored mechanism to understand the measured ultralow thermal conductivity of silicon nanowires.

  5. Quality assurance in proton beam therapy using a plastic scintillator and a commercially available digital camera.

    PubMed

    Almurayshid, Mansour; Helo, Yusuf; Kacperek, Andrzej; Griffiths, Jennifer; Hebden, Jem; Gibson, Adam

    2017-09-01

    In this article, we evaluate a plastic scintillation detector system for quality assurance in proton therapy using a BC-408 plastic scintillator, a commercial camera, and a computer. The basic characteristics of the system were assessed in a series of proton irradiations. The reproducibility and response to changes of dose, dose-rate, and proton energy were determined. Photographs of the scintillation light distributions were acquired, and compared with Geant4 Monte Carlo simulations and with depth-dose curves measured with an ionization chamber. A quenching effect was observed at the Bragg peak of the 60 MeV proton beam where less light was produced than expected. We developed an approach using Birks equation to correct for this quenching. We simulated the linear energy transfer (LET) as a function of depth in Geant4 and found Birks constant by comparing the calculated LET and measured scintillation light distribution. We then used the derived value of Birks constant to correct the measured scintillation light distribution for quenching using Geant4. The corrected light output from the scintillator increased linearly with dose. The system is stable and offers short-term reproducibility to within 0.80%. No dose rate dependency was observed in this work. This approach offers an effective way to correct for quenching, and could provide a method for rapid, convenient, routine quality assurance for clinical proton beams. Furthermore, the system has the advantage of providing 2D visualization of individual radiation fields, with potential application for quality assurance of complex, time-varying fields. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
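
    A hedged sketch of the Birks-type quenching correction outlined above: the light yield per unit path follows dL/dx ∝ (dE/dx)/(1 + kB·dE/dx), so a simulated LET profile can be used to rescale the measured light back to a dose-proportional signal. The kB value and the depth profiles below are placeholders, not the values derived in the paper.

```python
import numpy as np

# Illustrative depth profiles (arbitrary units). In the paper, LET vs depth comes
# from Geant4 and the light profile from the camera images; here both are made up.
depth = np.linspace(0.0, 30.0, 301)                            # mm
let = 1.0 + 8.0 * np.exp(-0.5 * ((depth - 26.0) / 1.2) ** 2)   # keV/um, hypothetical

kB = 0.01                              # hypothetical Birks constant, um/keV
light = let / (1.0 + kB * let)         # quenched scintillation signal (Birks-like)

# Invert Birks' relation: a dose-proportional signal is light * (1 + kB * LET)
dose_proportional = light * (1.0 + kB * let)
print("quenching suppression factor near the Bragg peak:",
      round(float(np.max(dose_proportional / light)), 3))
```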

  6. A modified potential for HO2 with spectroscopic accuracy

    NASA Astrophysics Data System (ADS)

    Brandão, João; Rio, Carolina M. A.; Tennyson, Jonathan

    2009-04-01

    Seven ground state potential energy surfaces for the hydroperoxyl radical are compared. The potentials were determined from either high-quality ab initio calculations, fits to spectroscopic data, or a combination of the two approaches. Vibration-rotation calculations are performed on each potential and the results compared with experiment. None of the available potentials is entirely satisfactory, although the best spectroscopic results are obtained using the Morse oscillator rigid bender internal dynamics potential [Bunker et al., J. Mol. Spectrosc. 155, 44 (1992)]. We present modifications of the double many-body expansion IV potential of Pastrana et al. [J. Chem. Phys. 94, 8093 (1990)]. These new potentials reproduce the observed vibrational levels, and the observed vibrational levels and rotational constants, respectively, while preserving the good global properties of the original potential.

  7. Comparative study of flare control laws. [optimal control of b-737 aircraft approach and landing

    NASA Technical Reports Server (NTRS)

    Nadkarni, A. A.; Breedlove, W. J., Jr.

    1979-01-01

    A digital 3-D automatic control law was developed to achieve an optimal transition of a B-737 aircraft between various initial glide slope conditions and the desired final touchdown condition. A discrete, time-invariant, optimal, closed-loop control law, presented for a linear regulator problem, was extended to include a system being acted upon by a constant disturbance. Two forms of control laws were derived to solve this problem. One method utilized the feedback of appropriately defined integral states augmented with the original system equations. The second method formulated the problem as a control variable constraint, and the control variables were augmented with the original system. The control-variable-constraint control law yielded better performance than the feedback control law for the integral states chosen.
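
    A minimal sketch of the first approach described above, integral-state augmentation for rejecting a constant disturbance in a discrete LQ regulator; the system matrices are toy values, not the B-737 longitudinal model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy discrete-time plant x[k+1] = A x[k] + B u[k] + d, with d a constant disturbance.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])          # tracked output (e.g., glide-slope deviation)

# Augment with the integral of the tracked output: z[k+1] = z[k] + C x[k].
Aa = np.block([[A, np.zeros((2, 1))], [C, np.ones((1, 1))]])
Ba = np.vstack([B, np.zeros((1, 1))])

Q = np.diag([1.0, 0.1, 5.0])        # weights on the states and the integral state
R = np.array([[1.0]])

P = solve_discrete_are(Aa, Ba, Q, R)
K = np.linalg.solve(R + Ba.T @ P @ Ba, Ba.T @ P @ Aa)   # u[k] = -K [x[k]; z[k]]
print(K)
```

    The integral state drives the steady-state tracking error to zero even though the disturbance never appears explicitly in the feedback law.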

  8. Lie symmetries of systems of second-order linear ordinary differential equations with constant coefficients.

    PubMed

    Boyko, Vyacheslav M; Popovych, Roman O; Shapoval, Nataliya M

    2013-01-01

    Lie symmetries of systems of second-order linear ordinary differential equations with constant coefficients are exhaustively described over both the complex and real fields. The exact lower and upper bounds for the dimensions of the maximal Lie invariance algebras possessed by such systems are obtained using an effective algebraic approach.

  9. Lie symmetries of systems of second-order linear ordinary differential equations with constant coefficients

    PubMed Central

    Boyko, Vyacheslav M.; Popovych, Roman O.; Shapoval, Nataliya M.

    2013-01-01

    Lie symmetries of systems of second-order linear ordinary differential equations with constant coefficients are exhaustively described over both the complex and real fields. The exact lower and upper bounds for the dimensions of the maximal Lie invariance algebras possessed by such systems are obtained using an effective algebraic approach. PMID:23564972

  10. The impact of desorption kinetics from albumin on hepatic extraction efficiency and hepatic clearance: a model study.

    PubMed

    Krause, Sophia; Goss, Kai-Uwe

    2018-05-23

    The question of whether slow desorption of compounds from transport proteins like the plasma protein albumin can affect hepatic uptake, and thereby hepatic metabolism of these compounds, has not yet been answered conclusively. This work combines recently published experimental desorption rate constants with a liver model to address this question. To do so, the liver model used differentiates the bound compound in blood, the unbound compound in blood, and the compound within the hepatocytes as three well-stirred compartments. Our calculations show that slow desorption kinetics from albumin can indeed limit hepatic metabolism of a compound by decreasing hepatic extraction efficiency and hepatic clearance. The extent of this decrease, however, depends not only on the value of the desorption rate constant but also on how much of the compound is bound to albumin in blood and how fast intrinsic metabolism of the compound in the hepatocytes is. For strongly sorbing and sufficiently fast metabolized compounds, our calculations revealed a twentyfold lower hepatic extraction efficiency and hepatic clearance for the slowest known desorption rate constant compared to the case when instantaneous equilibrium between bound and unbound compound is assumed. The same desorption rate constant, however, has nearly no effect on the hepatic extraction efficiency and hepatic clearance of weakly sorbing and slowly metabolized compounds. This work examines the relevance of desorption kinetics in various example scenarios and provides the general approach needed to quantify the effect of flow limitation, membrane permeability and desorption kinetics on hepatic metabolism at the same time.
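
    A hedged sketch of a three-compartment well-stirred formulation like the one described above (albumin-bound compound in blood, unbound compound in blood, compound in hepatocytes); blood flow and partitioning terms are omitted for brevity, and the rate constants are arbitrary placeholders, not the published desorption values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/min)
k_off = 0.05    # desorption from albumin
k_on = 5.0      # re-binding to albumin
k_up = 1.0      # uptake of unbound compound into hepatocytes
k_met = 2.0     # intrinsic metabolism inside hepatocytes

def rhs(t, y):
    bound, unbound, hep = y
    d_bound = -k_off * bound + k_on * unbound
    d_unbound = k_off * bound - k_on * unbound - k_up * unbound
    d_hep = k_up * unbound - k_met * hep
    return [d_bound, d_unbound, d_hep]

sol = solve_ivp(rhs, (0.0, 120.0), [1.0, 0.0, 0.0])
print("fraction still in blood at t = 120 min:",
      round(float(sol.y[0, -1] + sol.y[1, -1]), 4))
```

    Making k_off small relative to k_up and k_met is what produces the desorption-limited regime discussed in the abstract.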

  11. Applying Agrep to r-NSA to solve multiple sequences approximate matching.

    PubMed

    Ni, Bing; Wong, Man-Hon; Lam, Chi-Fai David; Leung, Kwong-Sak

    2014-01-01

    This paper addresses the approximate matching problem in a database consisting of multiple DNA sequences, where the proposed approach applies Agrep to a new truncated suffix array, r-NSA. The construction time of the structure is linear in the database size, and indexing a substring in the structure takes constant time. The number of characters processed in applying Agrep is analysed theoretically, and the theoretical upper bound closely approximates the empirical number of characters, which is obtained by enumerating the characters in the actual structure built. Experiments are carried out using (synthetic) random DNA sequences, as well as (real) genome sequences including Hepatitis-B Virus and X-chromosome. Experimental results show that, compared to the straightforward approach that applies Agrep to multiple sequences individually, the proposed approach solves the matching problem in much shorter time. The speed-up of our approach depends on the sequence patterns, and for highly similar homologous genome sequences, which are common cases in real-life genomes, it can be up to several orders of magnitude.

  12. Path integral approach to closed form pricing formulas in the Heston framework.

    NASA Astrophysics Data System (ADS)

    Lemmens, Damiaan; Wouters, Michiel; Tempere, Jacques; Foulon, Sven

    2008-03-01

    We present a path integral approach for finding closed form formulas for option prices in the framework of the Heston model. The first model for determining option prices was the Black-Scholes model, which assumed that the logreturn followed a Wiener process with a given drift and constant volatility. To provide a realistic description of the market, the Black-Scholes results must be extended to include stochastic volatility. This is achieved by the Heston model, which assumes that the volatility follows a mean reverting square root process. Current applications of the Heston model are hampered by the unavailability of fast numerical methods, due to a lack of closed-form formulae. Therefore the search for closed form solutions is an essential step before the qualitatively better stochastic volatility models will be used in practice. To attain this goal we outline a simplified path integral approach yielding straightforward results for vanilla Heston options with correlation. Extensions to barrier options and other path-dependent options are discussed, and the new derivation is compared to existing results obtained from alternative path-integral approaches (Dragulescu, Kleinert).
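
    To make the Heston dynamics mentioned above concrete, here is a small Monte Carlo sketch of the mean-reverting square-root variance process with correlated Brownian motions; closed-form or path-integral formulas, as pursued in the paper, would replace such simulation in practice. The parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, v0, r = 100.0, 0.04, 0.02
kappa, theta, xi, rho = 2.0, 0.04, 0.3, -0.7   # mean reversion, long-run var, vol-of-vol, correlation
T, n_steps, n_paths = 1.0, 250, 20_000
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    v_pos = np.maximum(v, 0.0)                 # full-truncation Euler scheme
    S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
    v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2

K = 100.0
call = np.exp(-r * T) * np.mean(np.maximum(S - K, 0.0))
print("Monte Carlo Heston call price ~", round(float(call), 3))
```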

  13. Packet loss mitigation for biomedical signals in healthcare telemetry.

    PubMed

    Garudadri, Harinath; Baheti, Pawan K

    2009-01-01

    In this work, we propose an effective application layer solution for packet loss mitigation in the context of Body Sensor Networks (BSN) and healthcare telemetry. Packet losses occur for many reasons, including excessive path loss, interference from other wireless systems, handoffs, congestion, and system loading. A call for action is in order, as packet losses can have an extremely adverse impact on many healthcare applications relying on BAN and WAN technologies. Our approach for packet loss mitigation is based on Compressed Sensing (CS), an emerging signal processing concept, wherein significantly fewer sensor measurements than those suggested by the Shannon/Nyquist sampling theorem can be used to recover signals with arbitrarily fine resolution. We present simulation results demonstrating graceful degradation of performance with increasing packet loss rate. We also compare the proposed approach with retransmissions. The CS based packet loss mitigation approach was found to maintain up to 99% beat-detection accuracy at packet loss rates of 20%, with a constant latency of less than 2.5 seconds.
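
    A hedged illustration of the compressed-sensing idea invoked above: a sparse signal is recovered from far fewer random measurements than Nyquist sampling would require, here with a plain iterative soft-thresholding loop rather than the decoder actually used in the paper. Dimensions and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                                  # the measurements that survived transmission

# Iterative soft-thresholding (ISTA) for min ||Ax - y||^2 + lam * ||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    g = x - step * A.T @ (A @ x - y)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print("relative recovery error:",
      round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 4))
```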

  14. Thermal Damage Analysis in Biological Tissues Under Optical Irradiation: Application to the Skin

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Ortega-Quijano, Noé; Solana-Quirós, José Ramón; Arce-Diego, José Luis

    2009-07-01

    The use of optical sources in medical practice is increasing nowadays. In this study, different approaches using thermo-optical principles that allow us to predict thermal damage in irradiated tissues are analyzed. Optical propagation is studied by means of the radiation transport theory (RTT) equation, solved via a Monte Carlo analysis. The data obtained are included in a bio-heat equation, solved via a numerical finite difference approach. Optothermal properties are considered for the model to be accurate and reliable. The thermal distribution is calculated as a function of the optical source parameters, mainly optical irradiance, wavelength and exposure time. Two thermal damage models, the cumulative equivalent minutes at 43 °C (CEM43) approach and the Arrhenius analysis, are used. The former is appropriate when dealing with dosimetry considerations at constant temperature. The latter is adequate to predict thermal damage with arbitrary temperature time dependence. Both models are applied and compared for the particular application of skin thermotherapy irradiation.
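
    The two damage metrics named above have standard closed forms; a minimal sketch follows, with an illustrative temperature history and generic Arrhenius parameters (A, Ea) standing in for the tissue-specific values used in the study.

```python
import numpy as np

t = np.linspace(0.0, 600.0, 601)                        # time, s
T = 37.0 + 12.0 * np.exp(-((t - 200.0) / 120.0) ** 2)   # hypothetical tissue temperature, deg C
dt = t[1] - t[0]

# CEM43: cumulative equivalent minutes at 43 C, with R = 0.25 below 43 C and 0.5 above.
R = np.where(T < 43.0, 0.25, 0.5)
cem43 = float(np.sum((dt / 60.0) * R ** (43.0 - T)))

# Arrhenius damage integral: Omega = integral of A * exp(-Ea / (R_gas * T_K)) dt
A_freq, Ea, R_gas = 3.1e98, 6.28e5, 8.314               # illustrative protein-denaturation values
omega = float(np.sum(A_freq * np.exp(-Ea / (R_gas * (T + 273.15)))) * dt)

print(f"CEM43 = {cem43:.2f} equivalent minutes, Omega = {omega:.3f}")
```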

  15. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.

    PubMed

    Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose

    2018-02-22

    Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking the traditional approach based on N-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to influence the translation quality more strongly. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, and showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.
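
    A hedged sketch of the memoization idea mentioned above: the expensive part of an NNLM probability is the softmax normalization constant for a given context, so constants for already seen contexts are cached and unseen contexts fall back to a smoothed constant, trading some LM accuracy for speed. The names, the toy output layer, and the fallback rule are illustrative, not the paper's exact scheme.

```python
import numpy as np

# Toy NNLM output layer: a context vector is scored against every vocabulary word.
rng = np.random.default_rng(4)
V, D = 10_000, 32
W = rng.standard_normal((V, D)) * 0.1             # hypothetical output embeddings

logZ_cache = {}                                   # context key -> log normalization constant
smoothed_logZ = float(np.log(V))                  # fallback for unseen contexts

def nnlm_logprob(context_key, context_vec, word_id, compute_exact=False):
    """Log-probability using a cached or smoothed softmax constant."""
    global smoothed_logZ
    score = float(W[word_id] @ context_vec)       # only the score actually needed
    if context_key in logZ_cache:
        logZ = logZ_cache[context_key]
    elif compute_exact:
        logZ = float(np.logaddexp.reduce(W @ context_vec))   # full softmax pass (expensive)
        logZ_cache[context_key] = logZ
        smoothed_logZ = 0.99 * smoothed_logZ + 0.01 * logZ   # simple running smoothing
    else:
        logZ = smoothed_logZ                      # approximate constant, no full pass
    return score - logZ

h = rng.standard_normal(D)
print(nnlm_logprob(("the", "cat"), h, 42, compute_exact=True))
print(nnlm_logprob(("the", "cat"), h, 7))         # reuses the cached constant
```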

  16. An Alternative Approach to the Extended Drude Model

    NASA Astrophysics Data System (ADS)

    Gantzler, N. J.; Dordevic, S. V.

    2018-05-01

    The original Drude model, proposed over a hundred years ago, is still used today for the analysis of optical properties of solids. Within this model, both the plasma frequency and quasiparticle scattering rate are constant, which makes the model rather inflexible. In order to circumvent this problem, the so-called extended Drude model was proposed, which allowed for the frequency dependence of both the quasiparticle scattering rate and the effective mass. In this work we will explore an alternative approach to the extended Drude model. Here, one also assumes that the quasiparticle scattering rate is frequency dependent; however, instead of the effective mass, the plasma frequency becomes frequency-dependent. This alternative model is applied to the high Tc superconductor Bi2Sr2CaCu2O8+δ (Bi2212) with Tc = 92 K, and the results are compared and contrasted with the ones obtained from the conventional extended Drude model. The results point to several advantages of this alternative approach to the extended Drude model.
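
    For reference, a hedged restatement of the two parameterizations being contrasted, written in the Gaussian-units generalized-Drude form commonly used in optical-conductivity analysis; the alternative form is inferred from the abstract, and the paper's exact conventions may differ.

```latex
% Conventional extended Drude: fixed plasma frequency, frequency-dependent
% scattering rate and effective mass extracted from the complex conductivity:
\sigma(\omega) = \frac{\omega_p^2}{4\pi}\,
                 \frac{1}{1/\tau(\omega) - i\,\omega\, m^*(\omega)/m_b},
\qquad
\frac{1}{\tau(\omega)} = \frac{\omega_p^2}{4\pi}\,
                         \mathrm{Re}\!\left[\frac{1}{\sigma(\omega)}\right],
\qquad
\frac{m^*(\omega)}{m_b} = \frac{\omega_p^2}{4\pi\,\omega}\,
                          \mathrm{Im}\!\left[\frac{-1}{\sigma(\omega)}\right].

% Alternative considered in the record above: keep the mass fixed and let the
% plasma frequency carry the frequency dependence instead,
\sigma(\omega) = \frac{\omega_p^2(\omega)}{4\pi}\,
                 \frac{1}{1/\tau(\omega) - i\,\omega}.
```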

  17. Characterization of Carbonates by Spectral Induced Polarization

    NASA Astrophysics Data System (ADS)

    Hupfer, Sarah; Halisch, Matthias; Weller, Andreas

    2017-04-01

    This study investigates the complex electrical conductivity of carbonate samples by Spectral Induced Polarization (SIP). The analysis is conducted in combination with petrophysical, mineralogical and geochemical measurements. SIP is a useful tool to obtain more detailed information about rock properties and to achieve a more thorough characterization of the pore space. Rock parameters like permeability, pore size and pore surface area can be predicted. To date, mainly sandstones and sandy materials have been investigated in detail by laboratory SIP measurements. Several robust empirical relationships were found that connect IP signals and petrophysical parameters (surface area, surface conductivity and cation exchange capacity). Here, different types of carbonates were analyzed with laboratory SIP measurements. Rock properties like grain density, porosity, permeability and surface area were determined by petrophysical measurements. Geochemistry and mineralogy were used to differentiate the carbonate types. First results of the SIP measurements showed polarization effects for all types. Four different kinds of phase behavior were observed in the phase spectra: a constant phase angle, a constant slope, a combination of both, and a maximum type. Each phase behavior can be assigned to a specific carbonate type, although the constant phase angle occurs for two carbonate types. Further experiments were conducted to gain more insight into the phase behavior and its explanation: (1) an expected phase peak frequency for each sample was calculated to check whether this frequency lies within the measured spectrum of 2 mHz to 100 Hz; (2) the fluid conductivity was significantly reduced to increase the phase signal for a better interpretation; (3) the cation exchange capacity (CEC) was considered as a factor, and a dependence between the imaginary part of the conductivity and the CEC was detected; (4) imaging procedures (scanning electron microscopy, x-ray computed tomography, optical microscopy) were used to create a qualitative image of the carbonate samples and to investigate the pore space, for example the ratio of connected to non-connected pore space. A comparison between the SIP data and the petrophysical data of the sample set showed that the phase behavior of carbonates is highly complicated and challenging compared with sandstones. It seems that there is no correlation between the polarization effects and any petrophysical parameter. Ongoing investigations and measurements will be conducted to gain more insight into the polarization effects of carbonates.

  18. Modeling transitions in body composition: the approach to steady state for anthropometric measures and physiological functions in the Minnesota human starvation study

    PubMed Central

    Hargrove, James L; Heinz, Grete; Heinz, Otto

    2008-01-01

    Background This study evaluated whether the changes in several anthropometric and functional measures during caloric restriction combined with walking and treadmill exercise would fit a simple model of approach to steady state (a plateau) that can be solved using spreadsheet software (Microsoft Excel®). We hypothesized that transitions in waist girth and several body compartments would fit a simple exponential model that approaches a stable steady-state. Methods The model (an equation) was applied to outcomes reported in the Minnesota starvation experiment using Microsoft Excel's Solver® function to derive rate parameters (k) and projected steady state values. However, data for most end-points were available only at t = 0, 12 and 24 weeks of caloric restriction. Therefore, we derived 2 new equations that enable model solutions to be calculated from 3 equally spaced data points. Results For the group of male subjects in the Minnesota study, body mass declined with a first order rate constant of about 0.079 wk⁻¹. The fractional rate of loss of fat free mass, which includes components that remained almost constant during starvation, was 0.064 wk⁻¹, compared to a rate of loss of fat mass of 0.103 wk⁻¹. The rate of loss of abdominal fat, as exemplified by the change in the waist girth, was 0.213 wk⁻¹. On average, 0.77 kg was lost per cm of waist girth. Other girths showed rates of loss between 0.085 and 0.131 wk⁻¹. Resting energy expenditure (REE) declined at 0.131 wk⁻¹. Changes in heart volume, hand strength, work capacity and N excretion showed rates of loss in the same range. The group of 32 subjects was close to steady state or had already reached steady state for the variables under consideration at the end of semi-starvation. Conclusion When energy intake is changed to new, relatively constant levels, while physical activity is maintained, changes in several anthropometric and physiological measures can be modeled as an exponential approach to steady state using software that is widely available. The 3 point method for parameter estimation provides a criterion for testing whether change in a variable can be usefully modelled with exponential kinetics within the time range for which data are available. PMID:18840293
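
    The three-point solution mentioned in the Methods has a closed form for an exponential approach y(t) = S + (y0 - S)·exp(-k·t) sampled at equal intervals; a minimal sketch follows, with illustrative numbers rather than the Minnesota data.

```python
import math

def three_point_exponential(y0, y1, y2, dt):
    """Rate constant and plateau from 3 equally spaced samples of
    y(t) = S + (y0 - S) * exp(-k * t)."""
    r = (y2 - y1) / (y1 - y0)                          # equals exp(-k * dt)
    k = -math.log(r) / dt
    plateau = (y0 * y2 - y1**2) / (y0 + y2 - 2.0 * y1)  # steady-state value S
    return k, plateau

# Illustrative body-mass readings at t = 0, 12 and 24 weeks
k, plateau = three_point_exponential(70.0, 58.0, 53.2, 12.0)
print(f"k = {k:.3f} per week, projected steady state = {plateau:.1f} kg")
```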

  19. Statistical inference of dynamic resting-state functional connectivity using hierarchical observation modeling.

    PubMed

    Sojoudi, Alireza; Goodyear, Bradley G

    2016-12-01

    Spontaneous fluctuations of blood-oxygenation level-dependent functional magnetic resonance imaging (BOLD fMRI) signals are highly synchronous between brain regions that serve similar functions. This provides a means to investigate functional networks; however, most analysis techniques assume functional connections are constant over time. This may be problematic in the case of neurological disease, where functional connections may be highly variable. Recently, several methods have been proposed to determine moment-to-moment changes in the strength of functional connections over an imaging session (so-called dynamic connectivity). Here a novel analysis framework based on a hierarchical observation modeling approach is proposed to permit statistical inference of the presence of dynamic connectivity. A two-level linear model composed of overlapping sliding windows of fMRI signals, incorporating the fact that overlapping windows are not independent, is described. To test this approach, datasets were synthesized whereby functional connectivity was either constant (significant or insignificant) or modulated by an external input. The method successfully determines the statistical significance of a functional connection in phase with the modulation, and it exhibits greater sensitivity and specificity in detecting regions with variable connectivity when compared with sliding-window correlation analysis. For real data, this technique possesses greater reproducibility and provides a more discriminative estimate of dynamic connectivity than sliding-window correlation analysis. Hum Brain Mapp 37:4566-4580, 2016. © 2016 Wiley Periodicals, Inc.

  20. Energy Expenditure of Trotting Gait Under Different Gait Parameters

    NASA Astrophysics Data System (ADS)

    Chen, Xian-Bao; Gao, Feng

    2017-07-01

    Robots driven by batteries are clean, quiet, and can work indoors or in space. However, battery endurance is a significant limitation. A new gait-parameter design strategy for energy saving is proposed to extend the working hours of a quadruped robot. A dynamic model of the robot is established to estimate and analyze the energy expenditure during trotting. For a given trotting speed, an optimal stride frequency and stride length can minimize the energy expenditure. However, the relationship between the speed and the optimal gait parameters is nonlinear, which makes it difficult to apply in practice. Therefore, a simplified gait parameter design method for energy saving is proposed. A critical trotting speed of the quadruped robot is found and can be used to decide the gait parameters. When the robot travels below this speed, it is better to keep a constant stride length and change the cycle period. When the robot travels above this speed, it is better to keep a constant cycle period and change the stride length. Simulations and experiments on the quadruped robot show that, by using the proposed gait parameter design approach, the energy expenditure can be reduced by about 54% compared with a 100 mm stride length at a speed of 500 mm/s. In general, an energy expenditure model based on the gait parameters of the quadruped robot is built, and a trotting gait parameter design approach for energy saving is proposed.

  1. The Constant Comparative Analysis Method Outside of Grounded Theory

    ERIC Educational Resources Information Center

    Fram, Sheila M.

    2013-01-01

    This commentary addresses the gap in the literature regarding discussion of the legitimate use of Constant Comparative Analysis Method (CCA) outside of Grounded Theory. The purpose is to show the strength of using CCA to maintain the emic perspective and how theoretical frameworks can maintain the etic perspective throughout the analysis. My…

  2. A new method for detecting velocity shifts and distortions between optical spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Tyler M.; Murphy, Michael T., E-mail: tevans@astro.swin.edu.au

    2013-12-01

    Recent quasar spectroscopy from the Very Large Telescope (VLT) and Keck suggests that fundamental constants may not actually be constant. To better confirm or refute this result, systematic errors between telescopes must be minimized. We present a new method to directly compare spectra of the same object and measure any velocity shifts between them. This method allows for the discovery of wavelength-dependent velocity shifts between spectra, i.e., velocity distortions, that could produce spurious detections of cosmological variations in fundamental constants. This 'direct comparison' method has several advantages over alternative techniques: it is model-independent (cf. line-fitting approaches), blind, in that spectral features do not need to be identified beforehand, and it produces meaningful uncertainty estimates for the velocity shift measurements. In particular, we demonstrate that, when comparing echelle-resolution spectra with unresolved absorption features, the uncertainty estimates are reliable for signal-to-noise ratios ≳7 per pixel. We apply this method to spectra of quasar J2123–0050 observed with Keck and the VLT and find no significant distortions over long wavelength ranges (∼1050 Å) greater than ≈180 m s⁻¹. We also find no evidence for systematic velocity distortions within echelle orders greater than 500 m s⁻¹. Moreover, previous constraints on cosmological variations in the proton-electron mass ratio should not have been affected by velocity distortions in these spectra by more than 4.0 ± 4.2 parts per million. This technique may also find application in measuring stellar radial velocities in search of extra-solar planets and attempts to directly observe the expansion history of the universe using quasar absorption spectra.

  3. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique using an interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth order polynomial is more appropriate than using a smoothing spline, with the fourth order polynomial giving the lowest mean square error (MSE). The results can help market participants capture market expectations of the future developments of the underlying asset.
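
    A hedged sketch of the extraction pipeline described above: fit a fourth-order polynomial to implied volatilities across strikes, convert back to Black-Scholes call prices, and take the discounted second derivative with respect to strike (the Breeden-Litzenberger relation) to obtain the risk-neutral density. The smile data below are synthetic, not DJIA quotes.

```python
import numpy as np
from scipy.stats import norm

S0, r, T = 100.0, 0.01, 1.0 / 12.0               # spot, rate, one-month maturity

def bs_call(S, K, sigma, r, T):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Synthetic implied-volatility smile observed at a handful of strikes
K_obs = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
iv_obs = np.array([0.28, 0.24, 0.21, 0.22, 0.25])

# Fourth-order volatility function fitted on moneyness for better conditioning
coeffs = np.polyfit((K_obs - S0) / S0, iv_obs, 4)

K = np.linspace(85.0, 115.0, 301)
C = bs_call(S0, K, np.polyval(coeffs, (K - S0) / S0), r, T)

dK = K[1] - K[0]
rnd = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)   # q(K) = e^{rT} d2C/dK2
print("density mass captured on the strike grid ~", round(float(np.sum(rnd) * dK), 3))
```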

  4. Improved direct torque control of an induction generator used in a wind conversion system connected to the grid.

    PubMed

    Abdelli, Radia; Rekioua, Djamila; Rekioua, Toufik; Tounzi, Abdelmounaïm

    2013-07-01

    This paper presents a modulated hysteresis direct torque control (MHDTC) applied to an induction generator (IG) used in wind energy conversion systems (WECs) connected to the electrical grid through a back-to-back converter. The principle of this strategy consists of superposing a triangular signal, as in the PWM strategy, with the desired switching frequency, onto the torque reference. This new modulated reference is compared to the estimated torque by using a hysteresis controller, as in classical direct torque control (DTC). The aim of this new approach is to obtain a constant switching frequency and low THD in the grid current with a unity power factor and minimum voltage variation despite the wind variation. To highlight the effectiveness of the proposed method, a comparison was made with classical DTC and the field oriented control method (FOC). The simulation results obtained with a variable wind profile show an adequate dynamic behavior of the conversion system using the proposed method compared to the classical approaches. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  5. A Kalman filter approach for the determination of celestial reference frames

    NASA Astrophysics Data System (ADS)

    Soja, Benedikt; Gross, Richard; Jacobs, Christopher; Chin, Toshio; Karbon, Maria; Nilsson, Tobias; Heinkelmann, Robert; Schuh, Harald

    2017-04-01

    The coordinate model of radio sources in International Celestial Reference Frames (ICRF), such as the ICRF2, has traditionally been a constant offset. While sufficient for a large part of radio sources considering current accuracy requirements, several sources exhibit significant temporal coordinate variations. In particular, the group of the so-called special handling sources is characterized by large fluctuations in the source positions. For these sources and for several from the "others" category of radio sources, a coordinate model that goes beyond a constant offset would be beneficial. However, due to the sheer number of radio sources in catalogs like the ICRF2, and even more so with the upcoming ICRF3, it is difficult to find the most appropriate coordinate model for every single radio source. For this reason, we have developed a time series approach to the determination of celestial reference frames (CRF). We feed the radio source coordinates derived from single very long baseline interferometry (VLBI) sessions sequentially into a Kalman filter and smoother, retaining their full covariances. The estimation of the source coordinates is carried out with a temporal resolution identical to the input data, i.e. usually 1-4 days. The coordinates are assumed to behave like random walk processes, an assumption which has already successfully been made for the determination of terrestrial reference frames such as the JTRF2014. To be able to apply the most suitable process noise value for every single radio source, their statistical properties are analyzed by computing their Allan standard deviations (ADEV). In addition to the determination of process noise values, the ADEV allows drawing conclusions about whether the variations in certain radio source positions significantly deviate from random walk processes. Our investigations also deal with other means of source characterization, such as the structure index, in order to derive a suitable process noise model. The Kalman filter CRFs resulting from the different approaches are compared among each other, to the original radio source position time series, as well as to a traditional CRF solution, in which the constant source positions are estimated in a global least squares adjustment.
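
    A minimal 1-D sketch of the random-walk Kalman filter described above, applied to a single source coordinate observed once per session; the process-noise value stands in for one informed by the Allan deviation analysis and is purely illustrative, as are the synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic per-session coordinate offsets: the truth is a slow random walk.
n_sessions = 200
truth = np.cumsum(rng.normal(0.0, 0.02, n_sessions))     # hypothetical wander (mas)
obs = truth + rng.normal(0.0, 0.1, n_sessions)            # per-session measurement noise
obs_var = 0.1**2

q = 0.02**2        # process noise per session (would come from the ADEV analysis)
x, P = obs[0], obs_var
estimates = []
for z in obs:
    P = P + q                          # predict: random-walk state, variance grows by q
    K = P / (P + obs_var)              # Kalman gain
    x = x + K * (z - x)                # update with the session measurement
    P = (1.0 - K) * P
    estimates.append(x)

rms = np.sqrt(np.mean((np.array(estimates) - truth) ** 2))
print("RMS error of the filtered coordinate:", round(float(rms), 4))
```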

  6. Co-solvent effects on reaction rate and reaction equilibrium of an enzymatic peptide hydrolysis.

    PubMed

    Wangler, A; Canales, R; Held, C; Luong, T Q; Winter, R; Zaitsau, D H; Verevkin, S P; Sadowski, G

    2018-04-25

    This work presents an approach that expresses the Michaelis constant KaM and the equilibrium constant Kth of an enzymatic peptide hydrolysis based on thermodynamic activities instead of concentrations. This provides KaM and Kth values that are independent of any co-solvent. To this end, the hydrolysis reaction of N-succinyl-l-phenylalanine-p-nitroanilide catalysed by the enzyme α-chymotrypsin was studied in pure buffer and in the presence of the co-solvents dimethyl sulfoxide, trimethylamine-N-oxide, urea, and two salts. A strong influence of the co-solvents on the measured Michaelis constant (KM) and equilibrium constant (Kx) was observed, which was found to be caused by molecular interactions expressed as activity coefficients. Substrate and product activity coefficients were used to calculate the activity-based values KaM and Kth for the co-solvent free reaction. Based on these constants, the co-solvent effect on KM and Kx was predicted in almost quantitative agreement with the experimental data. The approach presented here does not only reveal the importance of understanding the thermodynamic non-ideality of reactions taking place in biological solutions and in many technological applications, it also provides a framework for interpreting and quantifying the multifaceted co-solvent effects on enzyme-catalysed reactions that are known and have been observed experimentally for a long time.
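
    A hedged restatement of the thermodynamic bookkeeping implied above, assuming the usual relation between activity-based and concentration-based constants through activity coefficients γ; the paper's exact definitions may differ in detail.

```latex
% Activities a_i = gamma_i x_i link the activity-based constants, which are
% co-solvent independent, to the measurable concentration-based ones:
K_M^{a} = \gamma_S\, K_M,
\qquad
K_{th} = K_x \prod_i \gamma_i^{\nu_i},
% so the co-solvent effect on the apparent K_M and K_x is predicted from the
% change in the activity coefficients of the substrate and the products.
```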

  7. Method for accurate determination of dissociation constants of optical ratiometric systems: chemical probes, genetically encoded sensors, and interacting molecules.

    PubMed

    Pomorski, Adam; Kochańczyk, Tomasz; Miłoch, Anna; Krężel, Artur

    2013-12-03

    Ratiometric chemical probes and genetically encoded sensors are of high interest for both analytical chemists and molecular biologists. Their high sensitivity toward the target ligand and the ability to obtain quantitative results without a known sensor concentration have made them a very useful tool in both in vitro and in vivo assays. Although ratiometric sensors are widely used in many applications, their successful and accurate usage depends on how they are characterized in terms of sensing target molecules. The most important feature of probes and sensors, besides their optical parameters, is the affinity constant toward the analyzed molecules. The literature shows that different analytical approaches are used to determine the stability constants, with the ratio approach being the most popular. However, oversimplification and lack of attention to detail result in inaccurate determination of stability constants, which in turn affects the results obtained using these sensors. Here, we present a new method in which the ratio signal is calibrated using the borderline values of the intensities at both wavelengths, instead of the borderline ratio values that generate errors in many studies. At the same time, the equation takes into account the cooperativity factor or fluorescence artifacts and therefore can be used to characterize systems with various stoichiometries and experimental conditions. Accurate determination of stability constants is demonstrated utilizing four known optical ratiometric probes and sensors, together with a discussion regarding other, currently used methods.

  8. Strategies for improving production performance of probiotic Pediococcus acidilactici viable cell by overcoming lactic acid inhibition.

    PubMed

    Othman, Majdiah; Ariff, Arbakariya B; Wasoh, Helmi; Kapri, Mohd Rizal; Halim, Murni

    2017-11-27

    Lactic acid bacteria are industrially important microorganisms recognized for their fermentative ability, mostly for their probiotic benefits as well as for lactic acid production for various applications. Fermentation conditions such as the initial glucose concentration in the culture, the concentration of lactic acid accumulated in the culture, the type of pH control strategy, the type of aeration mode and the agitation speed influenced the cultivation performance of batch fermentations of Pediococcus acidilactici. The maximum viable cell concentration obtained in constant fed-batch fermentation at a feeding rate of 0.015 L/h was 6.1 times higher, with a 1.6 times reduction in lactic acid accumulation, compared to batch fermentation. The anion exchange resin IRA 67 was found to have the highest selectivity towards lactic acid compared to the other components studied. Fed-batch fermentation of P. acidilactici coupled with a lactic acid removal system using IRA 67 resin showed 55.5 and 9.1 times improvement in maximum viable cell concentration compared to fermentation without resin for batch and fed-batch modes, respectively. The improvement of P. acidilactici growth in the constant fed-batch fermentation indicated that the use of minimal and simple process control equipment is an effective approach for reducing by-product inhibition. Further improvement in the cultivation performance of P. acidilactici in fed-batch fermentation with in situ addition of anion-exchange resin significantly helped to enhance the growth of P. acidilactici by reducing the inhibitory effect of lactic acid and thus increasing probiotic production.

  9. Viscous constraints on predator:food size ratios in microscale feeding

    NASA Astrophysics Data System (ADS)

    Jabbarzadeh, Mehdi; Fu, Henry

    2014-11-01

    Small organisms such as protists or copepods may try to capture food by manipulating it with cilia, limbs, or feeding appendages. At these small scales, viscous flow may complicate the ability of a feeding appendage to closely approach a food particle. As a simplified but tractable model of such a feeding approach, we consider the problem of two spheres approaching in a Stokes fluid. The first "feeding" sphere, which represents a body part or feeding appendage, is pushed with a constant force towards a force-free "food" sphere. When the feeding sphere reaches within a cutoff distance of the food sphere we assume that nonhydrodynamic interactions lead to capture. We examine the approach for a range of size ratios between the feeding and food spheres. To investigate the approach efficiency, we examine the time required for the feeding sphere to capture the food sphere, as well as how far the feeding sphere must move before it captures the food sphere. We also examine the effect of varying the cutoff distance for capture. We find that hydrodynamic interactions strongly affect the results when the sizes of the spheres are comparable. We describe what relative sizes of feeding sphere and food particle may be most effective for food capture.

  10. Analytical Prediction of Lower Leg Injury in a Vehicular Mine Blast Event

    DTIC Science & Technology

    2010-01-01

    the spring constant of the tibia is nearly arbitrary; the spring constant of the boot assumes a hard ethylene propylene diene monomer (EPDM) rubber ... the sole of the boot. The significantly lower spring constant of the EPDM rubber in the sole compared to the bone structures greatly diminished the

  11. A voxelwise approach to determine consensus regions-of-interest for the study of brain network plasticity.

    PubMed

    Rajtmajer, Sarah M; Roy, Arnab; Albert, Reka; Molenaar, Peter C M; Hillary, Frank G

    2015-01-01

    Despite exciting advances in the functional imaging of the brain, it remains a challenge to define regions of interest (ROIs) that do not require investigator supervision and permit examination of change in networks over time (or plasticity). Plasticity is most readily examined by maintaining ROIs constant via seed-based and anatomical-atlas based techniques, but these approaches are not data-driven, requiring definition based on prior experience (e.g., choice of seed-region, anatomical landmarks). These approaches are limiting especially when functional connectivity may evolve over time in areas that are finer than known anatomical landmarks or in areas outside predetermined seeded regions. An ideal method would permit investigators to study network plasticity due to learning, maturation effects, or clinical recovery via multiple time point data that can be compared to one another in the same ROI while also preserving the voxel-level data in those ROIs at each time point. Data-driven approaches (e.g., whole-brain voxelwise approaches) ameliorate concerns regarding investigator bias, but the fundamental problem of comparing the results between distinct data sets remains. In this paper we propose an approach, aggregate-initialized label propagation (AILP), which allows for data at separate time points to be compared for examining developmental processes resulting in network change (plasticity). To do so, we use a whole-brain modularity approach to parcellate the brain into anatomically constrained functional modules at separate time points and then apply the AILP algorithm to form a consensus set of ROIs for examining change over time. To demonstrate its utility, we make use of a known dataset of individuals with traumatic brain injury sampled at two time points during the first year of recovery and show how the AILP procedure can be applied to select regions of interest to be used in a graph theoretical analysis of plasticity.

  12. Inflation with a constant rate of roll

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi, E-mail: motohashi@kicp.uchicago.edu, E-mail: alstar@landau.ac.ru, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp

    2015-09-01

    We consider an inflationary scenario where the rate of inflaton roll defined by φ̈/(H φ̇) remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime.

  13. Strain Distribution in REBCO-Coated Conductors Bent With the Constant-Perimeter Geometry

    DOE PAGES

    Wang, Xiaorong; Arbelaez, Diego; Caspi, Shlomo; ...

    2017-10-24

    Here, cable and magnet applications require bending REBa2Cu3O7-δ (REBCO, RE = rare earth) tapes around a former to carry high current or generate specific magnetic fields. With a high aspect ratio, REBCO tapes favor bending along their broad surfaces (easy way) rather than their thin edges (hard way). The easy-way bending forms can be effectively determined by the constant-perimeter method that was developed in the 1970s to fabricate accelerator magnets with flat thin conductors. The method, however, does not consider the strain distribution in the REBCO layer that can result from bending. Therefore, the REBCO layer can be overstrained and damaged even if it is bent in an easy way as determined by the constant-perimeter method. To address this issue, we developed a numerical approach to determine the strain in the REBCO layer using the local curvatures of the tape neutral plane. Two orthogonal strain components are determined: the axial component along the tape length and the transverse component along the tape width. These two components can be used to determine the conductor critical current after bending. The approach is demonstrated with four examples relevant for applications: a helical form for cables, forms for canted cos θ dipole and quadrupole magnets, and a form for the coil end design. The approach allows us to optimize the design of REBCO cables and magnets based on the constant-perimeter geometry and to reduce the strain-induced critical current degradation.
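
    A minimal sketch of the strain bookkeeping described above: for a thin tape whose neutral plane is bent with local curvatures κ_axial and κ_transverse, the bending strain in the REBCO layer at a distance d from the neutral plane is approximately κ·d in each direction. The tape dimensions and curvature values below are placeholders, not those of the paper's examples.

```python
import numpy as np

d_rebco = 0.045e-3                            # m, assumed REBCO offset from the neutral plane

# Hypothetical local curvatures (1/m) sampled along a constant-perimeter bend
kappa_axial = np.linspace(0.0, 40.0, 5)       # along the tape length
kappa_transverse = np.linspace(0.0, 10.0, 5)  # across the tape width

eps_axial = kappa_axial * d_rebco             # bending strain components (dimensionless)
eps_transverse = kappa_transverse * d_rebco

for ka, ea, et in zip(kappa_axial, eps_axial, eps_transverse):
    print(f"kappa_axial = {ka:5.1f} 1/m  eps_axial = {ea*100:.3f}%  eps_transverse = {et*100:.3f}%")
```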

  14. Strain Distribution in REBCO-Coated Conductors Bent With the Constant-Perimeter Geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaorong; Arbelaez, Diego; Caspi, Shlomo

    Here, cable and magnet applications require bending REBa2Cu3O7-δ (REBCO, RE = rare earth) tapes around a former to carry high current or generate specific magnetic fields. With a high aspect ratio, REBCO tapes favor bending along their broad surfaces (easy way) rather than their thin edges (hard way). The easy-way bending forms can be effectively determined by the constant-perimeter method that was developed in the 1970s to fabricate accelerator magnets with flat thin conductors. The method, however, does not consider the strain distribution in the REBCO layer that can result from bending. Therefore, the REBCO layer can be overstrained and damaged even if it is bent in an easy way as determined by the constant-perimeter method. To address this issue, we developed a numerical approach to determine the strain in the REBCO layer using the local curvatures of the tape neutral plane. Two orthogonal strain components are determined: the axial component along the tape length and the transverse component along the tape width. These two components can be used to determine the conductor critical current after bending. The approach is demonstrated with four examples relevant for applications: a helical form for cables, forms for canted cos θ dipole and quadrupole magnets, and a form for the coil end design. The approach allows us to optimize the design of REBCO cables and magnets based on the constant-perimeter geometry and to reduce the strain-induced critical current degradation.

  15. Comparing otoacoustic emissions evoked by chirp transients with constant absorbed sound power and constant incident pressure magnitude

    PubMed Central

    Keefe, Douglas H.; Feeney, M. Patrick; Hunter, Lisa L.; Fitzpatrick, Denis F.

    2017-01-01

    Calibration procedures for transient acoustic stimuli in the human ear canal are contrasted that utilize measured ear-canal pressures in conjunction with measured acoustic pressure reflectance and admittance. These data are referenced to the tip of a probe snugly inserted into the ear canal. Promising procedures to calibrate across frequency include stimuli with controlled levels of incident pressure magnitude, absorbed sound power, and forward pressure magnitude. An equivalent pressure at the eardrum is calculated from these measured data using a transmission-line model of ear-canal acoustics parameterized by acoustically estimated ear-canal area at the probe tip and length between the probe tip and eardrum. Chirp stimuli with constant incident pressure magnitude and constant absorbed sound power across frequency were generated to elicit transient-evoked otoacoustic emissions (TEOAEs), which were measured in normal-hearing adult ears from 0.7 to 8 kHz. TEOAE stimuli had similar peak-to-peak equivalent sound pressure levels across calibration conditions. Frequency-domain TEOAEs were compared using signal level, signal-to-noise ratio (SNR), coherence synchrony modulus (CSM), group delay, and group spread. Time-domain TEOAEs were compared using SNR, CSM, instantaneous frequency and instantaneous bandwidth. Stimuli with constant incident pressure magnitude or constant absorbed sound power across frequency produce generally similar TEOAEs up to 8 kHz. PMID:28147608

  16. Piezo-optic tensor of crystals from quantum-mechanical calculations.

    PubMed

    Erba, A; Ruggiero, M T; Korter, T M; Dovesi, R

    2015-10-14

    An automated computational strategy is devised for the ab initio determination of the full fourth-rank piezo-optic tensor of crystals belonging to any space group of symmetry. Elastic stiffness and compliance constants are obtained as numerical first derivatives of analytical energy gradients with respect to the strain and photo-elastic constants as numerical derivatives of analytical dielectric tensor components, which are in turn computed through a Coupled-Perturbed-Hartree-Fock/Kohn-Sham approach, with respect to the strain. Both point and translation symmetries are exploited at all steps of the calculation, within the framework of periodic boundary conditions. The scheme is applied to the determination of the full set of ten symmetry-independent piezo-optic constants of calcium tungstate CaWO4, which have recently been experimentally reconstructed. Present calculations unambiguously determine the absolute sign (positive) of the π61 constant, confirm the reliability of 6 out of 10 experimentally determined constants and provide new, more accurate values for the remaining 4 constants.

  17. Piezo-optic tensor of crystals from quantum-mechanical calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erba, A., E-mail: alessandro.erba@unito.it; Dovesi, R.; Ruggiero, M. T.

    2015-10-14

    An automated computational strategy is devised for the ab initio determination of the full fourth-rank piezo-optic tensor of crystals belonging to any space group of symmetry. Elastic stiffness and compliance constants are obtained as numerical first derivatives of analytical energy gradients with respect to the strain and photo-elastic constants as numerical derivatives of analytical dielectric tensor components, which are in turn computed through a Coupled-Perturbed-Hartree-Fock/Kohn-Sham approach, with respect to the strain. Both point and translation symmetries are exploited at all steps of the calculation, within the framework of periodic boundary conditions. The scheme is applied to the determination of the full set of ten symmetry-independent piezo-optic constants of calcium tungstate CaWO4, which have recently been experimentally reconstructed. Present calculations unambiguously determine the absolute sign (positive) of the π61 constant, confirm the reliability of 6 out of 10 experimentally determined constants and provide new, more accurate values for the remaining 4 constants.

  18. A centroid molecular dynamics study of liquid para-hydrogen and ortho-deuterium.

    PubMed

    Hone, Tyler D; Voth, Gregory A

    2004-10-01

    Centroid molecular dynamics (CMD) is applied to the study of collective and single-particle dynamics in liquid para-hydrogen at two state points and liquid ortho-deuterium at one state point. The CMD results are compared with the results of classical molecular dynamics, quantum mode coupling theory, a maximum entropy analytic continuation approach, pair-product forward-backward semiclassical dynamics, and available experimental results. The self-diffusion constants are in excellent agreement with the experimental measurements for all systems studied. Furthermore, it is shown that the method is able to adequately describe both the single-particle and collective dynamics of quantum liquids. (c) 2004 American Institute of Physics

  19. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
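
    As an illustration of the extrapolative (time-series) approach chosen above, here is a small sketch of double exponential smoothing (Holt's linear method) producing a short-horizon forecast from a monthly count series; the series and the smoothing constants are made up, not Los Angeles data.

```python
def holt_forecast(series, alpha=0.4, beta=0.2, horizon=3):
    """Double exponential smoothing: level plus trend, extrapolated `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Hypothetical monthly reported-crime counts
counts = [410, 432, 425, 450, 466, 471, 490, 503]
print(holt_forecast(counts))
```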

  20. Which model based on fluorescence quenching is suitable to study the interaction between trans-resveratrol and BSA?

    NASA Astrophysics Data System (ADS)

    Wei, Xin Lin; Xiao, Jian Bo; Wang, Yuanfeng; Bai, Yalong

    2010-01-01

    There are several models based on the quenching of BSA fluorescence for determining binding parameters. The binding parameters obtained from different models are quite different from each other. Which model is suitable to study the interaction between trans-resveratrol and BSA? Herein, twelve models based on fluorescence quenching of BSA were compared. The fact that the number of binding sites increases with increasing binding constant for similar compounds binding to BSA may be one approach to resolving this question. For example, eleven flavonoids were tested here to illustrate that the double logarithm regression curve is suitable for studying the binding of polyphenols to BSA.
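
    A short sketch of the double-logarithm quenching regression favored above, log[(F0 - F)/F] = log Ka + n·log[Q], fitted by ordinary least squares; the quenching data below are synthetic.

```python
import numpy as np

F0 = 100.0                                              # unquenched BSA fluorescence
Q = np.array([2e-6, 5e-6, 1e-5, 2e-5, 5e-5])            # quencher (polyphenol) concentration, M
F = np.array([92.0, 82.5, 70.3, 54.8, 33.9])            # synthetic quenched intensities

x = np.log10(Q)
y = np.log10((F0 - F) / F)
n, logKa = np.polyfit(x, y, 1)                          # slope = binding sites, intercept = log Ka
print(f"n = {n:.2f}, Ka = {10**logKa:.2e} M^-1")
```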

  1. A full set of langatate high-temperature acoustic wave constants: elastic, piezoelectric, dielectric constants up to 900°C.

    PubMed

    Davulis, Peter M; da Cunha, Mauricio Pereira

    2013-04-01

    A full set of langatate (LGT) elastic, dielectric, and piezoelectric constants with their respective temperature coefficients up to 900°C is presented, and the relevance of the dielectric and piezoelectric constants and temperature coefficients is discussed with respect to predicted and measured high-temperature SAW propagation properties. The set of constants allows for high-temperature acoustic wave (AW) propagation studies and device design. The dielectric constants and the polarization and conductive losses were extracted by impedance spectroscopy of parallel-plate capacitors. The measured dielectric constants at high temperatures were combined with previously measured LGT expansion coefficients and used to determine the elastic and piezoelectric constants from resonant ultrasound spectroscopy (RUS) measurements at temperatures up to 900°C. The extracted LGT piezoelectric constants and temperature coefficients show that e11 and e14 change by up to 62% and 77%, respectively, over the entire 25°C to 900°C range when compared with their room-temperature values. The LGT high-temperature constants and temperature coefficients were verified by comparing measured and predicted phase velocities (vp) and temperature coefficients of delay (TCD) of SAW delay lines fabricated along 6 orientations in the LGT plane (90°, 23°, Ψ) up to 900°C. For the 6 tested orientations, the predicted SAW vp agree within 0.2% of the measured vp on average, and the calculated TCD is within 9.6 ppm/°C of the measured value on average over the temperature range of 25°C to 900°C. By including the temperature dependence of both the dielectric and piezoelectric constants, the discrepancies between predicted and measured SAW properties were reduced, on average, by 77% for vp, 13% for TCD, and 63% for the turn-over temperatures analyzed.

  2. Synthesis of Optimal Constant-Gain Positive-Real Controllers for Passive Systems

    NASA Technical Reports Server (NTRS)

    Mao, Y.; Kelkar, A. G.; Joshi, S. M.

    1999-01-01

    This paper presents synthesis methods for the design of constant-gain positive real controllers for passive systems. The results presented in this paper, in conjunction with the previous work by the authors on passification of non-passive systems, offer a useful synthesis tool for the design of passivity-based robust controllers for non-passive systems as well. Two synthesis approaches are given for minimizing an LQ-type performance index, resulting in optimal controller gains. Two separate algorithms, one for each of these approaches, are given. The synthesis techniques are demonstrated using two numerical examples: control of a flexible structure and longitudinal control of a fighter aircraft.

  3. Spectral editing of weakly coupled spins using variable flip angles in PRESS constant echo time difference spectroscopy: Application to GABA

    NASA Astrophysics Data System (ADS)

    Snyder, Jeff; Hanstock, Chris C.; Wilman, Alan H.

    2009-10-01

    A general in vivo magnetic resonance spectroscopy editing technique is presented to detect weakly coupled spin systems through subtraction, while preserving singlets through addition, and is applied to the specific brain metabolite γ-aminobutyric acid (GABA) at 4.7 T. The new method uses double spin echo localization (PRESS) and is based on a constant echo time difference spectroscopy approach employing subtraction of two asymmetric echo timings, which is normally only applicable to strongly coupled spin systems. By utilizing flip angle reduction of one of the two refocusing pulses in the PRESS sequence, we demonstrate that this difference method may be extended to weakly coupled systems, thereby providing a very simple yet effective editing process. The difference method is first illustrated analytically using a simple two spin weakly coupled spin system. The technique was then demonstrated for the 3.01 ppm resonance of GABA, which is obscured by the strong singlet peak of creatine in vivo. Full numerical simulations, as well as phantom and in vivo experiments were performed. The difference method used two asymmetric PRESS timings with a constant total echo time of 131 ms and a reduced 120° final pulse, providing 25% GABA yield upon subtraction compared to two short echo standard PRESS experiments. Phantom and in vivo results from human brain demonstrate efficacy of this method in agreement with numerical simulations.

  4. Characteristics of time-varying intracranial pressure on blood flow through cerebral artery: A fluid-structure interaction approach.

    PubMed

    Syed, Hasson; Unnikrishnan, Vinu U; Olcmen, Semih

    2016-02-01

    Elevated intracranial pressure is a major contributor to morbidity and mortality in severe head injuries. Wall shear stresses in the artery can be affected by increased intracranial pressures and may lead to the formation of cerebral aneurysms. Earlier research on cerebral arteries and aneurysms has relied on constant mean intracranial pressure values. Recent advancements in intracranial pressure monitoring techniques have led to measurement of the intracranial pressure waveform. Incorporating a time-varying intracranial pressure waveform in place of constant intracranial pressures in the analysis of cerebral arteries helps in understanding its effects on arterial deformation and wall shear stress. To date, such a robust computational study on the effect of increasing intracranial pressures on the cerebral arterial wall has not been attempted, to the best of our knowledge. In this work, fully coupled fluid-structure interaction simulations are carried out to investigate the effect of the variation in intracranial pressure waveforms on the cerebral arterial wall. Three different time-varying intracranial pressure waveforms and three constant intracranial pressure profiles acting on the cerebral arterial wall are analyzed and compared with specified inlet velocity and outlet pressure conditions. It has been found that the arterial wall experiences deformation depending on the time-varying intracranial pressure waveforms, while the wall shear stress changes at peak systole for all the intracranial pressure profiles. © IMechE 2015.

  5. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward compatible HDR image/video compression, it is a general approach to reconstruct HDR from the compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piece 2nd order polynomial has better mapping accuracy than a single high-order polynomial or a 2-piece linear fit, but it is also the most time-consuming method because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piece 2nd order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix then amounts to looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while reducing computation time by a factor of 60 compared to the traditional exhaustive search in 2-piece 2nd order polynomial inverse tone mapping with a continuity constraint.
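
    A minimal sketch of the look-up-table idea, assuming independently fitted segments (the continuity constraint mentioned above is omitted for brevity); function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def optimal_two_piece_fit(ldr, hdr, n_codewords=256):
    """Find the pivot splitting the LDR range into two 2nd-order polynomial
    segments with minimum total squared error, using per-codeword prefix-sum
    tables so each candidate pivot is evaluated in O(1)."""
    x = np.asarray(ldr, dtype=np.float64)
    y = np.asarray(hdr, dtype=np.float64)
    idx = np.asarray(ldr, dtype=np.int64)

    # Per-codeword sums of the "basic terms" in the normal equations:
    # x^k (k = 0..4), x^k * y (k = 0..2), and y^2.
    sx = np.array([np.bincount(idx, weights=x**k, minlength=n_codewords) for k in range(5)])
    sxy = np.array([np.bincount(idx, weights=(x**k) * y, minlength=n_codewords) for k in range(3)])
    syy = np.bincount(idx, weights=y * y, minlength=n_codewords)

    # Prefix sums: any codeword interval reduces to one subtraction per term.
    cx, cxy, cyy = np.cumsum(sx, axis=1), np.cumsum(sxy, axis=1), np.cumsum(syy)
    seg = lambda tab, lo, hi: tab[..., hi] - (tab[..., lo - 1] if lo > 0 else 0.0)

    def segment_sse(lo, hi):
        if seg(cx, lo, hi)[0] < 3:                 # too few pixels for a quadratic fit
            return 0.0, np.zeros(3)
        A = np.array([[seg(cx, lo, hi)[i + j] for j in range(3)] for i in range(3)])
        b = seg(cxy, lo, hi)
        c = np.linalg.solve(A + 1e-9 * np.eye(3), b)   # small ridge for stability
        return seg(cyy, lo, hi) - 2 * c @ b + c @ A @ c, c

    best = (np.inf, None, None, None)
    for pivot in range(2, n_codewords - 2):            # exhaustive pivot search, O(1) each
        e_lo, c_lo = segment_sse(0, pivot)
        e_hi, c_hi = segment_sse(pivot + 1, n_codewords - 1)
        if e_lo + e_hi < best[0]:
            best = (e_lo + e_hi, pivot, c_lo, c_hi)
    return best   # (total SSE, pivot, low-segment coeffs, high-segment coeffs)

# Example with synthetic 8-bit LDR codewords and a noisy HDR target.
rng = np.random.default_rng(0)
ldr_px = rng.integers(0, 256, size=100_000)
hdr_px = 0.01 * ldr_px.astype(float) ** 2 + rng.normal(scale=5.0, size=ldr_px.size)
sse, pivot, c_low, c_high = optimal_two_piece_fit(ldr_px, hdr_px)
```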

  6. Examination of Effective Dielectric Constants Derived from Non-Spherical Melting Hydrometeor

    NASA Astrophysics Data System (ADS)

    Liao, L.; Meneghini, R.

    2009-04-01

    The bright band, a layer of enhanced radar echo associated with melting hydrometeors, is often observed in stratiform rain. Understanding the microphysical properties of melting hydrometeors and their scattering and propagation effects is of great importance in accurately estimating parameters of the precipitation from spaceborne radar and radiometers. However, one of the impediments in the study of the radar signature of the melting layer is the determination of effective dielectric constants of melting hydrometeors. Although a number of mixing formulas are available to compute the effective dielectric constants, their results vary to a great extent when water is a component of the mixture, such as in the case of melting snow. It is also physically unclear as to how to select among these various formulas. Furthermore, the question remains as to whether these mixing formulas can be applied to computations of radar polarimetric parameters from non-spherical melting particles. Recently, several approaches using numerical methods have been developed to derive the effective dielectric constants of melting hydrometeors, i.e., mixtures consisting of air, ice and water, based on more realistic melting models of particles, in which the composition of the melting hydrometeor is divided into a number of identical cells. Each of these cells is then assigned in a probabilistic way to be water, ice or air according to the distribution of fractional water contents for a particular particle. While the derived effective dielectric constants have been extensively tested at various wavelengths over a range of particle sizes, these numerical experiments have been restricted to the co-polarized scattering parameters from spherical particles. As polarimetric radar has been increasingly used in the study of microphysical properties of hydrometeors, an extension of the theory to polarimetric variables should provide additional information on melting processes. To account for polarimetric radar measurements from melting hydrometeors, it is necessary to move away from the restriction that the melting particles are spherical. In this study, our primary focus is on the derivation of the effective dielectric constants of non-spherical particles that are mixtures of ice and water. The computational model for the ice-water particle is described by a collection of 128x128x128 cubic cells of identical size. Because of the use of such a high-resolution model, the particles can be described accurately not only with regard to shape but with respect to structure as well. The Cartesian components of the mean internal electric field of particles, which are used to infer the effective dielectric constants, are calculated at each cell by the use of the Conjugate Gradient-Fast Fourier Transform (CG-FFT) numerical method. In this work we first check the validity of derived effective dielectric constant from a non-spherical mixed phase particle by comparing the polarimetric scattering parameters of an ice-water spheroid obtained from the CGFFT to those computed from the T-matrix for a homogeneous particle with the same geometry as that of the mixed phase particle (such as size, shape and orientation) and with an effective dielectric constant derived from the internal field of the mixed-phase particle. The accuracy of the effective dielectric constant can be judged by whether the scattering parameters of interest can accurately reproduce those of the exact solution, i.e., the T-matrix results. 
The purpose of defining an effective dielectric constant is to reduce the complexity of the scattering calculations in the sense that the effective dielectric constant, once obtained, may be applicable to a range of particle sizes, shapes and orientations. Conversely, if a different effective dielectric constant is needed for each particle size or shape, then its utility would be marginal. Having verified that the effective dielectric constant defined for a particular particle with a fixed shape, size, and orientation is valid, a check is performed to see if this effective dielectric constant can be used to characterize a class of particle types (with arbitrary sizes, shapes and orientations) if the fractional ice-water contents of melting particles remain the same. Among the scattering and polarimetric parameters used for examination of the effective dielectric constant in this study are the radar backscattering, extinction and scattering coefficients, asymmetry factor, differential reflectivity factor (ZDR), phase shift and linear polarization ratio (LDR). The goal is to determine whether the effective dielectric constant approach provides a means to compute accurately the radar polarimetric scattering parameters and radiometer brightness temperature quantities from the melting layer in a relatively simple and efficient way.

  7. Constant amplitude and post-overload fatigue crack growth behavior in PM aluminum alloy AA 8009

    NASA Technical Reports Server (NTRS)

    Reynolds, A. P.

    1992-01-01

    A recently developed, rapidly solidified, powder metallurgy, dispersion strengthened aluminum alloy, AA 8009, was fatigue tested at room temperature in lab air. Constant amplitude/constant delta kappa and single spike overload conditions were examined. High fatigue crack growth rates and low crack closure levels compared to typical ingot metallurgy aluminum alloys were observed. It was proposed that minimal crack roughness, crack path deflection, and limited slip reversibility, resulting from ultra-fine microstructure, were responsible for the relatively poor da/dN-delta kappa performance of AA 8009 as compared to that of typical IM aluminum alloys.

  8. Constant amplitude and post-overload fatigue crack growth behavior in PM aluminum alloy AA 8009

    NASA Technical Reports Server (NTRS)

    Reynolds, A. P.

    1991-01-01

    A recently developed, rapidly solidified, powder metallurgy, dispersion strengthened aluminum alloy, AA 8009, was fatigue tested at room temperature in lab air. Constant amplitude/constant delta kappa and single spike overload conditions were examined. High fatigue crack growth rates and low crack closure levels compared to typical ingot metallurgy aluminum alloys were observed. It was proposed that minimal crack roughness, crack path deflection, and limited slip reversibility, resulting from ultra-fine microstructure, were responsible for the relatively poor da/dN-delta kappa performance of AA 8009 as compared to that of typical IM aluminum alloys.

  9. A Variational Statistical-Field Theory for Polar Liquid Mixtures

    NASA Astrophysics Data System (ADS)

    Zhuang, Bilin; Wang, Zhen-Gang

    Using a variational field-theoretic approach, we derive a molecularly-based theory for polar liquid mixtures. The resulting theory consists of simple algebraic expressions for the free energy of mixing and the dielectric constant as functions of mixture composition. Using only the dielectric constants and the molar volumes of the pure liquid constituents, the theory evaluates the mixture dielectric constants in good agreement with the experimental values for a wide range of liquid mixtures, without using adjustable parameters. In addition, the theory predicts that liquids with similar dielectric constants and molar volumes dissolve well in each other, while sufficient disparity in these parameters results in phase separation. The calculated miscibility map on the dielectric constant-molar volume axes agrees well with known experimental observations for a large number of liquid pairs. Thus the theory provides a quantification for the well-known empirical ``like-dissolves-like'' rule. Bz acknowledges the A-STAR fellowship for the financial support.

  10. Wormholes and the cosmological constant problem.

    NASA Astrophysics Data System (ADS)

    Klebanov, I.

    The author reviews the cosmological constant problem and the recently proposed wormhole mechanism for its solution. Summation over wormholes in the Euclidean path integral for gravity turns all the coupling parameters into dynamical variables, sampled from a probability distribution. A formal saddle point analysis results in a distribution with a sharp peak at the cosmological constant equal to zero, which appears to solve the cosmological constant problem. He discusses the instabilities of the gravitational Euclidean path integral and the difficulties with its interpretation. He presents an alternate formalism for baby universes, based on the "third quantization" of the Wheeler-De Witt equation. This approach is analyzed in a minisuperspace model for quantum gravity, where it reduces to simple quantum mechanics. Once again, the coupling parameters become dynamical. Unfortunately, the a priori probability distribution for the cosmological constant and other parameters is typically a smooth function, with no sharp peaks.

  11. The Effects of Varied versus Constant High-, Medium-, and Low-Preference Stimuli on Performance

    ERIC Educational Resources Information Center

    Wine, Byron; Wilder, David A.

    2009-01-01

    The purpose of the current study was to compare the delivery of varied versus constant high-, medium-, and low-preference stimuli on performance of 2 adults on a computer-based task in an analogue employment setting. For both participants, constant delivery of the high-preference stimulus produced the greatest increases in performance over…

  12. High-level theoretical study of the reaction between hydroxyl and ammonia: Accurate rate constants from 200 to 2500 K

    NASA Astrophysics Data System (ADS)

    Nguyen, Thanh Lam; Stanton, John F.

    2017-10-01

    Hydrogen abstraction from NH3 by OH to produce H2O and NH2—an important reaction in combustion of NH3 fuel—was studied with a theoretical approach that combines high level quantum chemistry and advanced chemical kinetics methods. Thermal rate constants calculated from first principles agree well (within 5%-20%) with available experimental data over a temperature range that extends from 200 to 2500 K. Quantum mechanical tunneling effects were found to be important; they lead to a decided curvature and non-Arrhenius behavior for the rate constant.

  13. High-level theoretical study of the reaction between hydroxyl and ammonia: Accurate rate constants from 200 to 2500 K.

    PubMed

    Nguyen, Thanh Lam; Stanton, John F

    2017-10-21

    Hydrogen abstraction from NH3 by OH to produce H2O and NH2, an important reaction in combustion of NH3 fuel, was studied with a theoretical approach that combines high level quantum chemistry and advanced chemical kinetics methods. Thermal rate constants calculated from first principles agree well (within 5%-20%) with available experimental data over a temperature range that extends from 200 to 2500 K. Quantum mechanical tunneling effects were found to be important; they lead to a decided curvature and non-Arrhenius behavior for the rate constant.

  14. Variation of fundamental constants on sub- and super-Hubble scales: From the equivalence principle to the multiverse

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2013-02-01

    Fundamental constants play a central role in many modern developments in gravitation and cosmology. Most extensions of general relativity lead to the conclusion that dimensionless constants are actually dynamical fields. Any detection of their variation on sub-Hubble scales would signal a violation of the Einstein equivalence principle and hence point to gravity beyond general relativity. On super-Hubble scales, or perhaps we should say on super-universe scales, such variations are invoked as a solution to the fine-tuning problem, in connection with an anthropic approach.

  15. Determination of calibration constants for the hole-drilling residual stress measurement technique applied to orthotropic composites. I - Theoretical considerations

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Prabhakaran, R.; Tompkins, S.

    1987-01-01

    The hole-drilling technique for the measurement of residual stresses using electrical resistance strain gages has been widely used for isotropic materials and has been adopted by the ASTM as a standard method. For thin isotropic plates, with a hole drilled through the thickness, the idealized hole-drilling calibration constants are obtained by making use of the well-known Kirsch's solution. In this paper, an analogous attempt is made to theoretically determine the three idealized hole-drilling calibration constants for thin orthotropic materials by employing Savin's (1961) complex stress function approach.

  16. Scale/Analytical Analyses of Freezing and Convective Melting with Internal Heat Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali S. Siahpush; John Crepeau; Piyush Sabharwall

    2013-07-01

    Using a scale/analytical analysis approach, we model phase change (melting) of pure materials with constant internal heat generation for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. The analysis also considers a constant heat flux condition (in a cylindrical geometry). We show the time scales in which conduction and convection heat transfer dominate.

  17. Microtextured Surfaces for Turbine Blade Impingement Cooling

    NASA Technical Reports Server (NTRS)

    Fryer, Jack

    2014-01-01

    Gas turbine engine technology is constantly challenged to operate at higher combustor outlet temperatures. In a modern gas turbine engine, these temperatures can exceed the blade and disk material limits by 600 F or more, necessitating both internal and film cooling schemes in addition to the use of thermal barrier coatings. Internal convective cooling is inadequate in many blade locations, and both internal and film cooling approaches can lead to significant performance penalties in the engine. Micro Cooling Concepts, Inc., has developed a turbine blade cooling concept that provides enhanced internal impingement cooling effectiveness via the use of microstructured impingement surfaces. These surfaces significantly increase the cooling capability of the impinging flow, as compared to a conventional untextured surface. This approach can be combined with microchannel cooling and external film cooling to tailor the cooling capability per the external heating profile. The cooling system then can be optimized to minimize impact on engine performance.

  18. Practical characterization of quantum devices without tomography

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Flammia, Steven; Silva, Marcus; Liu, Yi-Kai; Poulin, David

    2012-02-01

    Quantum tomography is the main method used to assess the quality of quantum information processing devices, but its complexity presents a major obstacle for the characterization of even moderately large systems. Part of the reason for this complexity is that tomography generates much more information than is usually sought. Taking a more targeted approach, we develop schemes that enable (i) estimating the fidelity of an experiment to a theoretical ideal description, (ii) learning which description within a reduced subset best matches the experimental data. Both these approaches yield a significant reduction in resources compared to tomography. In particular, we show how to estimate the fidelity between a predicted pure state and an arbitrary experimental state using only a constant number of Pauli expectation values selected at random according to an importance-weighting rule. In addition, we propose methods for certifying quantum circuits and learning continuous-time quantum dynamics that are described by local Hamiltonians or Lindbladians.
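
    A minimal sketch of the importance-weighted Pauli sampling described above, assuming a small simulated system so the "measured" expectation values can be computed directly from a density matrix; the sample count and all names are illustrative:

```python
import itertools
import numpy as np

PAULIS = {"I": np.eye(2, dtype=complex),
          "X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
          "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def pauli_op(label):
    """Build the n-qubit Pauli operator for a label such as 'XZ'."""
    op = np.array([[1.0 + 0j]])
    for ch in label:
        op = np.kron(op, PAULIS[ch])
    return op

def estimate_fidelity(psi, rho, n_samples=500, seed=0):
    """Sample Pauli operators with probability proportional to the ideal pure
    state's squared characteristic function, then average the ratio of
    'experimental' to ideal Pauli expectation values."""
    rng = np.random.default_rng(seed)
    d = len(psi)
    n = int(np.log2(d))
    labels = ["".join(p) for p in itertools.product("IXYZ", repeat=n)]
    ops = [pauli_op(l) for l in labels]

    chi_ideal = np.array([np.real(np.conj(psi) @ P @ psi) / np.sqrt(d) for P in ops])
    prob = chi_ideal**2
    prob /= prob.sum()                  # sums to 1 for a pure target state

    picks = rng.choice(len(ops), size=n_samples, p=prob)
    ratios = [np.real(np.trace(rho @ ops[k])) / np.sqrt(d) / chi_ideal[k] for k in picks]
    return float(np.mean(ratios))

# Example: 2-qubit Bell state vs. a slightly depolarized preparation of it.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = 0.9 * np.outer(bell, bell.conj()) + 0.1 * np.eye(4) / 4
print(estimate_fidelity(bell, rho))     # close to the true fidelity, 0.925
```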

  19. PHOTONICS AND NANOTECHNOLOGY Microscopic theory of optical properties of composite media with chaotically distributed nanoparticles

    NASA Astrophysics Data System (ADS)

    Shalin, A. S.

    2010-12-01

    The boundary problem of light reflection and transmission by a film with chaotically distributed nanoinclusions is considered. Based on the proposed microscopic approach, analytic expressions are derived for distributions inside and outside the nanocomposite medium. Good agreement of the results with exact calculations and (at low concentrations of nanoparticles) with the integral Maxwell-Garnett effective-medium theory is demonstrated. It is shown that at high nanoparticle concentrations, averaging the dielectric constant in volume as is done within the framework of the effective-medium theory yields overestimated values of the optical film density compared to the values yielded by the proposed microscopic approach. We also studied the dependence of the reflectivity of a system of gold nanoparticles on their size, the size dependence of the plasmon resonance position along the wavelength scale, and demonstrated a good agreement with experimental data.

  20. Magnetic Resonance Fingerprinting

    PubMed Central

    Ma, Dan; Gulani, Vikas; Seiberlich, Nicole; Liu, Kecheng; Sunshine, Jeffrey L.; Duerk, Jeffrey L.; Griswold, Mark A.

    2013-01-01

    Summary Magnetic Resonance (MR) is an exceptionally powerful and versatile measurement technique. The basic structure of an MR experiment has remained nearly constant for almost 50 years. Here we introduce a novel paradigm, Magnetic Resonance Fingerprinting (MRF) that permits the non-invasive quantification of multiple important properties of a material or tissue simultaneously through a new approach to data acquisition, post-processing and visualization. MRF provides a new mechanism to quantitatively detect and analyze complex changes that can represent physical alterations of a substance or early indicators of disease. MRF can also be used to specifically identify the presence of a target material or tissue, which will increase the sensitivity, specificity, and speed of an MR study, and potentially lead to new diagnostic testing methodologies. When paired with an appropriate pattern recognition algorithm, MRF inherently suppresses measurement errors and thus can improve accuracy compared to previous approaches. PMID:23486058
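
    The pattern-recognition step mentioned above is commonly realized as dictionary matching by maximum normalized inner product; the sketch below assumes a precomputed dictionary of simulated signal evolutions and uses placeholder names (it is not the authors' implementation):

```python
import numpy as np

def match_fingerprint(signal, dictionary, params):
    """Return the parameter set whose simulated evolution best matches the
    measured signal, judged by the magnitude of the normalized inner product.

    signal     : complex array, shape (timepoints,)         -- measured evolution
    dictionary : complex array, shape (entries, timepoints) -- simulated evolutions
    params     : list of parameter tuples, e.g. (T1, T2), one per dictionary row
    """
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    scores = np.abs(d.conj() @ s)
    best = int(np.argmax(scores))
    return params[best], float(scores[best])

# Toy usage with a random "dictionary" of 3 entries and 10 timepoints.
rng = np.random.default_rng(0)
D = rng.standard_normal((3, 10)) + 1j * rng.standard_normal((3, 10))
labels = [("T1=600", "T2=50"), ("T1=900", "T2=80"), ("T1=1200", "T2=110")]
print(match_fingerprint(D[1] * 2.0, D, labels))   # picks the second entry
```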

  1. Dielectric constant extraction of graphene nanostructured on SiC substrates from spectroscopy ellipsometry measurement using Gauss–Newton inversion method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maulina, Hervin; Santoso, Iman, E-mail: iman.santoso@ugm.ac.id; Subama, Emmistasega

    2016-04-19

    The extraction of the dielectric constant of nanostructured graphene on SiC substrates from spectroscopic ellipsometry measurements using the Gauss-Newton inversion (GNI) method has been performed. This study aims to calculate the dielectric constant and refractive index of graphene by extracting the values of ψ and Δ from the spectroscopic ellipsometry measurement using the GNI method and comparing them with the previous result extracted using the Drude-Lorentz (DL) model. The results show that the GNI method can be used to calculate the dielectric constant and refractive index of nanostructured graphene on SiC substrates faster than the DL model. Moreover, the imaginary part of the dielectric constant and the extinction coefficient increase drastically at 4.5 eV, similar to the values extracted using the known DL fitting. The increase is attributed to interband transitions and the interaction between electrons and electron-holes at the M-points in the Brillouin zone of graphene.
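
    A generic Gauss-Newton least-squares loop of the kind such an inversion relies on can be sketched as follows; the toy residual stands in for the ellipsometric (ψ, Δ) film-stack model, and every name here is illustrative rather than the authors' code:

```python
import numpy as np

def gauss_newton(residual_fn, x0, n_iter=50, tol=1e-10):
    """Iterate x <- x + step, where step solves J * step = -r in the least
    squares sense and J is a forward-difference Jacobian of the residuals."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(x)
        J = np.empty((r.size, x.size))
        h = 1e-6 * np.maximum(np.abs(x), 1.0)
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h[j]
            J[:, j] = (residual_fn(xp) - r) / h[j]
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy usage: recover a complex refractive index (n, k) from synthetic data.
def model(x):
    n, k = x
    return np.array([n**2 - k**2, 2 * n * k])   # real/imag parts of (n + i*k)^2

measured = model(np.array([2.6, 1.3]))          # stand-in for inverted psi/Delta data
fit = gauss_newton(lambda x: model(x) - measured, x0=np.array([2.0, 1.0]))
print(fit)                                      # converges to (2.6, 1.3)
```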

  2. From Planck Constant to Isomorphicity Through Justice Paradox

    NASA Astrophysics Data System (ADS)

    Hidajatullah-Maksoed, Widastra

    2015-05-01

    Robert E. Scott in his ``Chaos theory and the Justice Paradox'', William & Mary Law Review, v 35, I 1, 329 (1993) wrote: ``...As we approach the 21-st Century, the signs of social disarray are everywhere. Social critics observe the breakdown of core structure - the nuclear family, schools, neighborhoods & political groups''. For completions for ``soliton'' first coined by Morikazu TODA, comparing the ``Soliton on Scott-Russell aqueduct on the Union Canal near Heriot-WATT University, July 12, 1995 to Michael Stock works: ``a Fine WATT-Balance: Determination of Planck constant & Redefinition of Kilogram'', January 2011, we can conclude the inherencies between `chaos' & `soliton'. Further through ``string theory'' from Michio KAKU sought statements from Peter Mayr: ``Stringy world brane & Exponential hierarchy'', JHEP 11 (2000): ``if the 5-brane is embedded in flat 10-D space time, the 6-D Planck mass on the brane is infinite'' who also describes the relation of isomorphicity & ``string theory'', from whom denotes the smart city. Incredible acknowledgments to HE. Mr. Drs. P. SWANTORO & HE. Mr. Dr-HC Jakob OETAMA.

  3. Improved Accuracy of Low Affinity Protein-Ligand Equilibrium Dissociation Constants Directly Determined by Electrospray Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Jaquillard, Lucie; Saab, Fabienne; Schoentgen, Françoise; Cadene, Martine

    2012-05-01

    There is continued interest in the determination by ESI-MS of equilibrium dissociation constants (KD) that accurately reflect the affinity of a protein-ligand complex in solution. Issues in the measurement of KD are compounded in the case of low affinity complexes. Here we present a KD measurement method and corresponding mathematical model dealing with both gas-phase dissociation (GPD) and aggregation. To this end, a rational mathematical correction of GPD (fsat) is combined with the development of an experimental protocol to deal with gas-phase aggregation. A guide to apply the method to noncovalent protein-ligand systems according to their kinetic behavior is provided. The approach is validated by comparing the KD values determined by this method with in-solution KD literature values. The influence of the type of molecular interactions and instrumental setup on fsat is examined as a first step towards a fine dissection of factors affecting GPD. The method can be reliably applied to a wide array of low affinity systems without the need for a reference ligand or protein.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dang, Liem X.; Vo, Quynh N.; Nilsson, Mikael

    We report one of the first simulations using a classical rate theory approach to predict the mechanism of the exchange process between water and aqueous uranyl ions. Using our water and ion-water polarizable force fields and molecular dynamics techniques, we computed the potentials of mean force for the uranyl ion-water pair as a function of pressure at ambient temperature. Subsequently, these simulated potentials of mean force were used to calculate rate constants using transition rate theory; the time-dependent transmission coefficients were also examined using the reactive flux method and Grote-Hynes treatments of the dynamic response of the solvent. The computed activation volumes using transition rate theory and the corrected rate constants are positive, thus the mechanism of this particular water-exchange is a dissociative process. We discuss our rate theory results and compare them with previous studies in which non-polarizable force fields were used. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences. The calculations were carried out using computer resources provided by the Office of Basic Energy Sciences.

  5. Resonant vibrational-excitation cross sections and rate constants for low-energy electron scattering by molecular oxygen

    NASA Astrophysics Data System (ADS)

    Laporta, V.; Celiberto, R.; Tennyson, J.

    2013-04-01

    Resonant vibrational-excitation cross sections and rate constants for electron scattering by molecular oxygen are presented. Transitions between all 42 vibrational levels of O2(X 3Σg-) are considered. Molecular rotations are parametrized by the rotational quantum number J, which is considered in the range 1-151. The lowest four resonant states of O2-, 2Πg, 2Πu, 4Σu- and 2Σu-, are taken into account. The calculations are performed using the fixed-nuclei R-matrix approach to determine the resonance positions and widths, and the boomerang model to characterize the nuclear motion. Two energy regions below and above 4 eV are investigated: the first one is characterized by sharp structures in the cross section and the second by a broad resonance peaked at 10 eV. The computed cross sections are compared with theoretical and experimental results available in the literature for both energy regions, and are made available for use by modelers. The effect of including rotational motion is found to be non-negligible.

  6. Structural phase transition of BeTe: an ab initio molecular dynamics study.

    PubMed

    Alptekin, Sebahaddin

    2017-08-11

    Beryllium telluride (BeTe) with the cubic zinc-blende (ZB) structure was studied under high pressure using an ab initio constant-pressure method. The ab initio constant-pressure molecular dynamics (MD) approach shows that a first-order phase transition occurs from the ZB structure to the nickel arsenide (NiAs) structure. The MD simulation predicts a transition pressure PT higher than the values obtained from static enthalpy calculations and experimental data. The MD simulation reveals a structural pathway of cubic → tetragonal → orthorhombic → monoclinic → orthorhombic → hexagonal, leading from the ZB to the NiAs phase. The phase transformation is accompanied by a 10% volume drop; it occurs at 80 GPa in the simulation, whereas the experimental value is likely to be around 35 GPa. The obtained values can be compared with the experimental and theoretical results. Graphical abstract: The energy-volume relation and ZB phase for BeTe.

  7. Non-musicians also have a piano in the head: evidence for spatial-musical associations from line bisection tracking.

    PubMed

    Hartmann, Matthias

    2017-02-01

    The spatial representation of ordinal sequences (numbers, time, tones) seems to be a fundamental cognitive property. While an automatic association between horizontal space and pitch height (left-low pitch, right-high pitch) is consistently reported in musicians, the evidence for such an association in non-musicians is mixed. In this study, 20 non-musicians performed a line bisection task while listening to irrelevant high- and low-pitched tones and white noise (control condition). While pitch height had no influence on the final bisection point, participants' movement trajectories showed systematic biases: When approaching the line and touching the line for the first time (initial bisection point), the mouse cursor was directed more rightward for high-pitched tones compared to low-pitched tones and noise. These results show that non-musicians also have a subtle but nevertheless automatic association between pitch height and horizontal space. This suggests that spatial-musical associations do not necessarily depend on constant sensorimotor experiences (as is the case for musicians) but rather reflect the seemingly inescapable tendency to represent ordinal information on a horizontal line.

  8. A new model for impregnation mechanisms in different GF/PP commingled yarns

    NASA Astrophysics Data System (ADS)

    Klinkmüller, V.; Um, M.-K.; Steffens, M.; Friedrich, K.; Kim, B.-S.

    1994-09-01

    Impregnation mechanisms of different kinds of GF/PP commingled yarns have been studied. As the reinforcing fibres were always the same, a global description has been worked out. Two different mathematical approaches for fibre bed permeability (Kozeny-Carman and Gutowski) were compared. The constants of the applied mathematical models have to stay the same if the fibre reinforcement and the fibre arrangement are the same. Neither the kind of matrix nor the fibre volume content may change these constants. Differences in the degree of impregnation after the same process conditions can only be due to different sizes of fibre agglomerations, and thus to the initial distribution of reinforcing fibres and matrix. For an exact determination of impregnation times and conditions, the exact distribution of fibres in the intermediate material and after processing has to be known. This distribution is determined by SEM microscopy and data given by the material supplier. The importance of different process parameters, such as temperature, pressure, and processing time, is weighted by determining the density and mechanical properties of the specimens.

  9. Effect of bottom electrode on dielectric property of sputtered-(Ba,Sr)TiO{sub 3} films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, Shinichi; Yamada, Tomoaki; Takahashi, Kenji

    2009-03-15

    (Ba{sub 0.5}Sr{sub 0.5})TiO{sub 3} (BST) films were deposited on (111)Pt/TiO{sub 2}/SiO{sub 2}/Al{sub 2}O{sub 3} substrates by rf sputtering. By inserting a thin layer of SrRuO{sub 3} in between BST film and (111)Pt electrode, the BST films grew fully (111)-oriented without any other orientations. In addition, it enables us to reduce the growth temperature of BST films while keeping the dielectric constant and tunability as high as those of BST films directly deposited on Pt at higher temperatures. The dielectric loss of the films on SrRuO{sub 3}-top substrates was comparable to that on Pt-top substrates for the same level of dielectric constant. The results suggest that the SrRuO{sub 3} thin layer on (111)Pt electrode is an effective approach to growing highly crystalline BST films with (111) orientation at lower deposition temperatures.

  10. Airloads Correlation of the UH-60A Rotor Inside the 40- by 80-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Chang, I-Chung; Norman, Thomas R.; Romander, Ethan A.

    2013-01-01

    The presented research validates the capability of a loosely-coupled computational fluid dynamics (CFD) and comprehensive rotorcraft analysis (CRA) code to calculate the flowfield around a rotor and test stand mounted inside a wind tunnel. The CFD/CRA predictions for the full-scale UH-60A Airloads Rotor inside the National Full-Scale Aerodynamics Complex (NFAC) 40- by 80-Foot Wind Tunnel at NASA Ames Research Center are compared with the latest measured airloads and performance data. The studied conditions include a speed sweep at constant lift up to an advance ratio of 0.4 and a thrust sweep at constant speed up to and including stall. For the speed sweep, wind tunnel modeling becomes important at advance ratios greater than 0.37 and test stand modeling becomes increasingly important as the advance ratio increases. For the thrust sweep, both the wind tunnel and test stand modeling become important as the rotor approaches stall. Despite the beneficial effects of modeling the wind tunnel and test stand, the new models do not completely resolve the current airload discrepancies between prediction and experiment.

  11. REVIEW ARTICLE: Phonons and magnetoelectric interactions in Ni3V2O8

    NASA Astrophysics Data System (ADS)

    Yildirim, T.; Vergara, L. I.; Íñiguez, Jorge; Musfeldt, J. L.; Harris, A. B.; Rogado, N.; Cava, R. J.; Yen, F.; Chaudhury, R. P.; Lorenz, B.

    2008-10-01

    We present a detailed study of the zone-center phonons and magnetoelectric interactions in Ni3V2O8. Using combined neutron scattering, polarized infrared (IR) measurements and first-principles LDA+U calculations, we successfully assigned all IR-active modes, including eleven B2u phonons which can induce the observed spontaneous electric polarization. We also calculated the Born-effective charges and the IR intensities which are in surprisingly good agreement with the experimental data. Among the eleven B2u phonons, we find that only a few of them can actually induce a significant dipole moment. The exchange interactions up to a cutoff of 6.5 Å are also calculated within the LDA+U approach with different values of U for Ni, V and O atoms. We find that LSDA (i.e. U = 0) gives excellent results concerning the optimized atomic positions, bandgap and phonon energies. However, the magnitudes of the exchange constants are too large compared to the experimental Curie-Weiss constant, Θ. Including U for Ni corrects the magnitude of the superexchange constants but opens a too large electronic bandgap. We observe that including correlation at the O site is very important to get simultaneously the correct phonon energies, bandgap and exchange constants. In particular, the nearest and next-nearest exchange constants along the Ni-spine sites result in incommensurate spin ordering with a wavevector that is consistent with the experimental data. Our results also explain how the antiferromagnetic coupling between sublattices in the b and c directions is consistent with the relatively small observed value of Θ. We also find that, regardless of the values of U used, we always get the same five exchange constants that are significantly larger than the rest. Finally, we discuss how the B2u phonons and the spin structure combine to yield the observed spontaneous polarization. We present a simple phenomenological model which shows how trilinear (and quartic) couplings of one (or two) phonons to two spin operators perturbatively affects the magnon (i.e. electromagnon) and phonon energies.

  12. Preload-Based Starling-Like Control for Rotary Blood Pumps: Numerical Comparison with Pulsatility Control and Constant Speed Operation

    PubMed Central

    Mansouri, Mahdi; Salamonsen, Robert F.; Lim, Einly; Akmeliawati, Rini; Lovell, Nigel H.

    2015-01-01

    In this study, we evaluate a preload-based Starling-like controller for implantable rotary blood pumps (IRBPs) using left ventricular end-diastolic pressure (PLVED) as the feedback variable. Simulations are conducted using a validated mathematical model. The controller emulates the response of the natural left ventricle (LV) to changes in PLVED. We report the performance of the preload-based Starling-like controller in comparison with our recently designed pulsatility controller and constant speed operation. In handling the transition from a baseline state to test states, which include vigorous exercise, blood loss and a major reduction in the LV contractility (LVC), the preload controller outperformed pulsatility control and constant speed operation in all three test scenarios. In exercise, preload-control achieved an increase of 54% in mean pump flow (QP-) with minimum loading on the LV, while pulsatility control achieved only a 5% increase in flow and a decrease in mean pump speed. In a hemorrhage scenario, the preload control maintained the greatest safety margin against LV suction. PLVED for the preload controller was 4.9 mmHg, compared with 0.4 mmHg for the pulsatility controller and 0.2 mmHg for the constant speed mode. This was associated with an adequate mean arterial pressure (MAP) of 84 mmHg. In transition to low LVC, QP- for preload control remained constant at 5.22 L/min with a PLVED of 8.0 mmHg. With regards to pulsatility control, QP- fell to the nonviable level of 2.4 L/min with an associated PLVED of 16 mmHg and a MAP of 55 mmHg. Consequently, pulsatility control was deemed inferior to constant speed mode with a PLVED of 11 mmHg and a QP- of 5.13 L/min in low LVC scenario. We conclude that pulsatility control imposes a danger to the patient in the severely reduced LVC scenario, which can be overcome by using a preload-based Starling-like control approach. PMID:25849979
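
    As a rough illustration of the preload-based control idea (not the published controller or its parameters), a Starling-like target-flow curve over PLVED can drive a simple speed update; curve shape, gains, and limits below are placeholders:

```python
import math

def starling_like_speed_update(plved_mmHg, pump_flow_lpm, speed_rpm,
                               gain_rpm_per_lpm_s=50.0, dt_s=0.01):
    """One control step: read a target flow off an illustrative Starling-like
    curve of left-ventricular end-diastolic pressure (PLVED), then nudge the
    pump speed toward that flow with simple integral action and hard limits."""
    # Flow demand rises with preload and saturates (placeholder curve).
    target_flow_lpm = 2.0 + 4.0 * (1.0 - math.exp(-max(plved_mmHg, 0.0) / 6.0))
    speed_rpm += gain_rpm_per_lpm_s * (target_flow_lpm - pump_flow_lpm) * dt_s
    return min(max(speed_rpm, 1800.0), 4000.0), target_flow_lpm

# Example step: preload of 8 mmHg, current flow 4.5 L/min, speed 2600 rpm.
print(starling_like_speed_update(8.0, 4.5, 2600.0))
```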

  13. Molecular dynamics simulations of classical sound absorption in a monatomic gas

    NASA Astrophysics Data System (ADS)

    Ayub, M.; Zander, A. C.; Huang, D. M.; Cazzolato, B. S.; Howard, C. Q.

    2018-05-01

    Sound wave propagation in argon gas is simulated using molecular dynamics (MD) in order to determine the attenuation of acoustic energy due to classical (viscous and thermal) losses at high frequencies. In addition, a method is described to estimate attenuation of acoustic energy using the thermodynamic concept of exergy. The results are compared against standing wave theory and the predictions of the theory of continuum mechanics. Acoustic energy losses are studied by evaluating various attenuation parameters and by comparing the changes in behavior at three different frequencies. This study demonstrates acoustic absorption effects in a gas simulated in a thermostatted molecular simulation and quantifies the classical losses in terms of the sound attenuation constant. The approach can be extended to further understanding of acoustic loss mechanisms in the presence of nanoscale porous materials in the simulation domain.

  14. Acidity in DMSO from the embedded cluster integral equation quantum solvation model.

    PubMed

    Heil, Jochen; Tomazic, Daniel; Egbers, Simon; Kast, Stefan M

    2014-04-01

    The embedded cluster reference interaction site model (EC-RISM) is applied to the prediction of acidity constants of organic molecules in dimethyl sulfoxide (DMSO) solution. EC-RISM is based on a self-consistent treatment of the solute's electronic structure and the solvent's structure by coupling quantum-chemical calculations with three-dimensional (3D) RISM integral equation theory. We compare available DMSO force fields with reference calculations obtained using the polarizable continuum model (PCM). The results are evaluated statistically using two different approaches to eliminating the proton contribution: a linear regression model and an analysis of pK(a) shifts for compound pairs. Suitable levels of theory for the integral equation methodology are benchmarked. The results are further analyzed and illustrated by visualizing solvent site distribution functions and comparing them with an aqueous environment.
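
    The two ways of eliminating the proton contribution mentioned above can be sketched as follows; variable names and the RT·ln10 value at 298 K are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def pka_linear_regression(delta_g_calc, pka_exp):
    """Regress experimental pKa against computed deprotonation free energies;
    the poorly known proton term is absorbed into slope and intercept."""
    slope, intercept = np.polyfit(delta_g_calc, pka_exp, 1)
    return slope, intercept, (lambda dg: slope * np.asarray(dg) + intercept)

def pka_shift(delta_g_a, delta_g_b, rt_ln10_kcal=1.364):
    """pKa shift between a compound pair; the proton contribution cancels
    exactly in the free-energy difference (RT*ln10 at 298 K assumed)."""
    return (delta_g_a - delta_g_b) / rt_ln10_kcal

# Toy numbers in kcal/mol, not results from the study.
dG = np.array([300.0, 305.5, 311.0, 316.4])
pka = np.array([10.0, 14.0, 18.0, 22.0])
slope, intercept, predict = pka_linear_regression(dG, pka)
print(predict(308.0), pka_shift(305.5, 300.0))
```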

  15. Cosmological measure with volume averaging and the vacuum energy problem

    NASA Astrophysics Data System (ADS)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  16. Nonlinear bulging factor based on R-curve data

    NASA Technical Reports Server (NTRS)

    Jeong, David Y.; Tong, Pin

    1994-01-01

    In this paper, a nonlinear bulging factor is derived using a strain energy approach combined with dimensional analysis. The functional form of the bulging factor contains an empirical constant that is determined using R-curve data from unstiffened flat and curved panel tests. The determination of this empirical constant is based on the assumption that the R-curve is the same for both flat and curved panels.

  17. Lattice dynamics and thermal conductivity of lithium fluoride via first-principles calculations

    NASA Astrophysics Data System (ADS)

    Liang, Ting; Chen, Wen-Qi; Hu, Cui-E.; Chen, Xiang-Rong; Chen, Qi-Feng

    2018-04-01

    The lattice thermal conductivity of lithium fluoride (LiF) is accurately computed from a first-principles approach based on an iterative solution of the Boltzmann transport equation. A real-space finite-difference supercell approach is employed to generate the second- and third-order interatomic force constants. The related physical quantities of LiF are calculated from the second- and third-order potential interactions at 30 K-1000 K. The calculated lattice thermal conductivity of 13.89 W/(m K) for LiF at room temperature agrees well with the experimental value, demonstrating that the parameter-free approach can furnish precise descriptions of the lattice thermal conductivity for this material. Besides, the Born effective charges, dielectric constants and phonon spectrum of LiF accord well with the existing data. The lattice thermal conductivities from the iterative solution of the BTE are also presented.

  18. Experimental determination of the effective strong coupling constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alexandre Deur; Volker Burkert; Jian-Ping Chen

    2007-07-01

    We extract an effective strong coupling constant from low Q{sup 2} data on the Bjorken sum. Using sum rules, we establish its Q{sup 2}-behavior over the complete Q{sup 2}-range. The result is compared to effective coupling constants extracted from different processes and to calculations based on Schwinger-Dyson equations, hadron spectroscopy or lattice QCD. Although the connection between the experimentally extracted effective coupling constant and the calculations is not clear, the results agree surprisingly well.
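
    For concreteness, a common way to define such an effective coupling from the Bjorken sum is through its leading-order sum-rule form, Gamma_1^{p-n}(Q^2) = (g_A/6)(1 - alpha_g1(Q^2)/pi); a minimal worked example under that assumption (illustrative inputs, not the measured data):

```python
import math

def alpha_g1(gamma1_pn, g_axial=1.2762):
    """Effective coupling defined through the leading-order Bjorken sum rule:
    Gamma_1^{p-n} = (g_A / 6) * (1 - alpha_g1 / pi).  The axial-charge value
    used here is an assumed input, not taken from the paper."""
    return math.pi * (1.0 - 6.0 * gamma1_pn / g_axial)

# Illustrative value of the Bjorken sum at some low Q^2 (not real data).
print(alpha_g1(0.15))   # ~0.93, i.e. a large effective coupling at low Q^2
```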

  19. Just-in-Time Teaching, Just-in-Need Learning: Designing towards Optimized Pedagogical Outcomes

    ERIC Educational Resources Information Center

    Killi, Steinar; Morrison, Andrew

    2015-01-01

    Teaching methods are constantly being changed, new ones are developed and old methods have undergone a renaissance. Two main approaches to teaching prevail: a) lecture-based and project-based and b) an argumentative approach to known knowledge or learning by exploration. Today, there is a balance between these two approaches, and they are more…

  20. Determination of mass density, dielectric, elastic, and piezoelectric constants of bulk GaN crystal.

    PubMed

    Soluch, Waldemar; Brzozowski, Ernest; Lysakowska, Magdalena; Sadura, Jolanta

    2011-11-01

    Mass density, dielectric, elastic, and piezoelectric constants of bulk GaN crystal were determined. Mass density was obtained from the measured ratio of mass to volume of a cuboid. The dielectric constants were determined from the measured capacitances of an interdigital transducer (IDT) deposited on a Z-cut plate and from a parallel plate capacitor fabricated from this plate. The elastic and piezoelectric constants were determined by comparing the measured and calculated SAW velocities and electromechanical coupling coefficients on the Z- and X-cut plates. The following new constants were obtained: mass density ρ = 5986 kg/m(3); relative dielectric constants (at constant strain S) ε(S)(11)/ε(0) = 8.6 and ε(S)(33)/ε(0) = 10.5, where ε(0) is the dielectric constant of free space; elastic constants (at constant electric field E) C(E)(11) = 349.7, C(E)(12) = 128.1, C(E)(13) = 129.4, C(E)(33) = 430.3, and C(E)(44) = 96.5 GPa; and piezoelectric constants e(33) = 0.84, e(31) = -0.47, and e(15) = -0.41 C/m(2).

  1. Mass balance approaches for estimating the intestinal absorption and metabolism of peptides and analogues: theoretical development and applications

    NASA Technical Reports Server (NTRS)

    Sinko, P. J.; Leesman, G. D.; Amidon, G. L.

    1993-01-01

    A theoretical analysis for estimating the extent of intestinal peptide and peptide analogue absorption was developed on the basis of a mass balance approach that incorporates convection, permeability, and reaction. The macroscopic mass balance analysis (MMBA) was extended to include chemical and enzymatic degradation. A microscopic mass balance analysis, a numerical approach, was also developed and the results compared to the MMBA. The mass balance equations for the fraction of a drug absorbed and reacted in the tube were derived from the general steady state mass balance in a tube: [formula: see text] where M is mass, z is the length of the tube, R is the tube radius, Pw is the intestinal wall permeability, kr is the reaction rate constant, C is the concentration of drug in the volume element over which the mass balance is taken, VL is the volume of the tube, and vz is the axial velocity of drug. The theory was first applied to the oral absorption of two tripeptide analogues, cefaclor (CCL) and cefatrizine (CZN), which degrade and dimerize in the intestine. Simulations using the mass balance equations, the experimental absorption parameters, and the literature stability rate constants yielded a mean estimated extent of CCL (250-mg dose) and CZN (1000-mg dose) absorption of 89 and 51%, respectively, which was similar to the mean extent of absorption reported in humans (90 and 50%). It was proposed previously that 15% of the CCL dose spontaneously degraded systemically; however, our simulations suggest that significant CCL degradation occurs (8 to 17%) presystemically in the intestinal lumen.(ABSTRACT TRUNCATED AT 250 WORDS).
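
    A hedged worked example consistent with the tube balance described above, not the paper's exact equations: assuming first-order wall permeation and first-order luminal degradation in plug flow, the concentration decays exponentially along the tube and the outflow deficit splits between absorption and reaction in proportion to the two rate terms. All numbers are illustrative:

```python
import math

def fraction_absorbed_and_reacted(Pw, kr, R, L, vz):
    """With first-order wall permeation and first-order luminal degradation,
    C(z) = C0 * exp(-(2*Pw/(R*vz) + kr/vz) * z); split the total loss between
    absorption and degradation in proportion to their rate terms."""
    ka = 2.0 * Pw / R                             # wall-permeation rate term
    ktot = ka + kr
    f_lost = 1.0 - math.exp(-ktot * L / vz)       # fraction not exiting intact
    return f_lost * ka / ktot, f_lost * kr / ktot # (absorbed, degraded)

# Illustrative parameters only (cm, cm/s, 1/s), not CCL/CZN values.
print(fraction_absorbed_and_reacted(Pw=1e-4, kr=1e-3, R=1.0, L=300.0, vz=0.1))
```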

  2. Evolution of circadian rhythms in Drosophila melanogaster populations reared in constant light and dark regimes for over 330 generations.

    PubMed

    Shindey, Radhika; Varma, Vishwanath; Nikhil, K L; Sharma, Vijay Kumar

    2017-01-01

    Organisms are believed to have evolved circadian clocks as adaptations to deal with cyclic environmental changes, and therefore it has been hypothesized that evolution in constant environments would lead to regression of such clocks. However, previous studies have yielded mixed results, and evolution of circadian clocks under constant conditions has remained an unsettled topic of debate in circadian biology. In continuation of our previous studies, which reported persistence of circadian rhythms in Drosophila melanogaster populations evolving under constant light, here we intended to examine whether circadian clocks and the associated properties evolve differently under constant light and constant darkness. In this regard, we assayed activity-rest, adult emergence and oviposition rhythms of D. melanogaster populations which have been maintained for over 19 years (~330 generations) under three different light regimes - constant light (LL), light-dark cycles of 12:12 h (LD) and constant darkness (DD). We observed that while circadian rhythms in all the three behaviors persist in both LL and DD stocks with no differences in circadian period, they differed in certain aspects of the entrained rhythms when compared to controls reared in rhythmic environment (LD). Interestingly, we also observed that DD stocks have evolved significantly higher robustness or power of free-running activity-rest and adult emergence rhythms compared to LL stocks. Thus, our study, in addition to corroborating previous results of circadian clock evolution in constant light, also highlights that, contrary to the expected regression of circadian clocks, rearing in constant darkness leads to the evolution of more robust circadian clocks which may be attributed to an intrinsic adaptive advantage of circadian clocks and/or pleiotropic functions of clock genes in other traits.

  3. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, where each term of the diamagnetic and paramagnetic contribution to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with regard to the magnetic field B and the nuclear magnetic moment 𝝁. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed shell ground state reveal a deviation from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling with (2M)^3 (M: number of basis functions).

  4. Practical Algorithms for the Longest Common Extension Problem

    NASA Astrophysics Data System (ADS)

    Ilie, Lucian; Tinta, Liviu

    The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on the average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
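
    A minimal sketch consistent with the first, preprocessing-free approach described above (direct character comparison per query; names are illustrative):

```python
def lce_direct(s, queries):
    """Answer each Longest Common Extension query (i, j) by scanning forward
    from both positions until the characters differ; no preprocessing, fast
    on average although worst-case linear per query."""
    n = len(s)
    answers = []
    for i, j in queries:
        k = 0
        while i + k < n and j + k < n and s[i + k] == s[j + k]:
            k += 1
        answers.append(k)
    return answers

print(lce_direct("abracadabra", [(0, 7), (1, 8), (0, 3)]))   # [4, 3, 1]
```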

  5. Time-dependent 31P saturation transfer in the phosphoglucomutase reaction. Characterization of the spin system for the Cd(II) enzyme and evaluation of rate constants for the transfer process.

    PubMed

    Post, C B; Ray, W J; Gorenstein, D G

    1989-01-24

    Time-dependent 31P saturation-transfer studies were conducted with the Cd2+-activated form of muscle phosphoglucomutase to probe the origin of the 100-fold difference between its catalytic efficiency (in terms of kcat) and that of the more efficient Mg2+-activated enzyme. The present paper describes the equilibrium mixture of phosphoglucomutase and its substrate/product pair when the concentration of the Cd2+ enzyme approaches that of the substrate and how the nine-spin 31P NMR system provided by this mixture was treated. It shows that the presence of abortive complexes is not a significant factor in the reduced activity of the Cd2+ enzyme since the complex of the dephosphoenzyme and glucose 1,6-bisphosphate, which accounts for a large majority of the enzyme present at equilibrium, is catalytically competent. It also shows that rate constants for saturation transfer obtained at three different ratios of enzyme to free substrate are mutually compatible. These constants, which were measured at chemical equilibrium, can be used to provide a quantitative kinetic rationale for the reduced steady-state activity elicited by Cd2+ relative to Mg2+ [cf. Ray, W.J., Post, C.B., & Puvathingal, J.M. (1989) Biochemistry (following paper in this issue)]. They also provide minimal estimates of 350 and 150 s-1 for the rate constants describing (PO3-) transfer from the Cd2+ phosphoenzyme to the 6-position of bound glucose 1-phosphate and to the 1-position of bound glucose 6-phosphate, respectively. These minimal estimates are compared with analogous estimates for the Mg2+ and Li+ forms of the enzyme in the accompanying paper.

  6. Fluoro-polymer functionalized graphene for flexible ferroelectric polymer-based high-k nanocomposites with suppressed dielectric loss and low percolation threshold.

    PubMed

    Yang, Ke; Huang, Xingyi; Fang, Lijun; He, Jinliang; Jiang, Pingkai

    2014-12-21

    Flexible nanodielectric materials with high dielectric constant and low dielectric loss have huge potential applications in the modern electronic and electric industry. Graphene sheets (GS) and reduced-graphene oxide (RGO) are promising fillers for preparing flexible polymer-based nanodielectric materials because of their unique two-dimensional structure and excellent electrical and mechanical properties. However, the easy aggregation of GS/RGO significantly limits the potential of graphene in enhancing the dielectric constant of polymer composites. In addition, the poor filler/matrix nanoscale interfacial adhesion also causes difficulties in suppressing the dielectric loss of the composites. In this work, using a facile and environmentally friendly approach, polydopamine coated RGO (PDA-RGO) and fluoro-polymer functionalized RGO (PF-PDA-RGO) were prepared. Compared with the RGO prepared by the conventional methods [i.e. hydrazine reduced-graphene oxide (H-RGO)] and PDA-RGO, the resulting PF-PDA-RGO nanosheets exhibit excellent dispersion in the ferroelectric polymer matrix [i.e. poly(vinylidene fluoride-co-hexafluoro propylene), P(VDF-HFP)] and strong interfacial adhesion with the matrix, leading to a low percolation threshold (fc = 1.06 vol%) and excellent flexibility for the corresponding nanocomposites. Among the three nanocomposites, the P(VDF-HFP)/PF-PDA-RGO nanocomposites exhibited the optimum performance (i.e. simultaneously having high dielectric constant and low dielectric loss). For instance, at 1000 Hz, the P(VDF-HFP) nanocomposite sample with 1.0 vol% PF-PDA-RGO has a dielectric constant of 107.9 and a dielectric loss of 0.070, showing good potential for dielectric applications. Our strategy provides a new pathway to prepare high performance flexible nanodielectric materials.

  7. Potentiometric and spectrophotometric study of the stability of magnesium carbonate and bicarbonate ion pairs to 150 °C and aqueous inorganic carbon speciation and magnesite solubility

    NASA Astrophysics Data System (ADS)

    Stefánsson, Andri; Bénézeth, Pascale; Schott, Jacques

    2014-08-01

    The formation constants of magnesium bicarbonate and carbonate ion pairs have been experimentally determined in dilute hydrothermal solutions to 150 °C. Two experimental approaches were applied, potentiometric acid-base titrations at 10-60 °C and spectrophotometric pH measurements using two pH indicators, 2-naphthol and 4-nitrophenol, at 25 and 80-150 °C. At a given temperature, the first and second ionization constants of carbonic acid (K1, K2) and the ion pair formation constants for MgHCO3+(aq) (KMgHCO3+) and MgCO3(aq) (KMgCO3) were simultaneously fitted to the data. Results of this study compare well with previously determined values of K1 and K2. The formation constants of MgHCO3+(aq) and MgCO3(aq) ion pairs increased significantly with increasing temperature, with values of logKMgHCO3+ = 1.14 and 1.75 and of logKMgCO3 = 2.86 and 3.48 at 10 °C and 100 °C, respectively. These ion pairs are important aqueous species under neutral to alkaline conditions in moderately dilute to concentrated Mg-containing solutions, with MgCO3(aq) predominating over CO32-(aq) in solutions at pH >8. The predominance of magnesium carbonate over carbonate is dependent on the concentration of dissolved magnesium and the ratio of magnesium over carbonate. With increasing temperature and at alkaline pH, brucite solubility further reduced the magnesium concentration to levels below 1 mmol kg-1, thus limiting availability of Mg2+(aq) for magnesite precipitation.
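    A small numeric sketch of what the reported ion-pair constants imply for speciation: MgCO3(aq) outweighs free CO32-(aq) whenever KMgCO3·[Mg2+] > 1. The logK values below are those quoted in the abstract; the free-Mg2+ concentration is an assumed, purely illustrative value.

```python
# Sketch: ratio of MgCO3(aq) to free CO3(2-) from the ion-pair formation constant.
# K_MgCO3 = [MgCO3] / ([Mg2+][CO3 2-])  =>  [MgCO3]/[CO3 2-] = K_MgCO3 * [Mg2+]
log_K_MgCO3 = {10: 2.86, 100: 3.48}   # values quoted in the abstract (temperature in deg C)
mg_free = 0.01                        # assumed free Mg2+ concentration, mol/kg (illustrative)

for T, logK in log_K_MgCO3.items():
    ratio = 10**logK * mg_free
    # ratio >> 1 means the ion pair dominates over free carbonate
    print(f"{T:3d} C: [MgCO3(aq)]/[CO3^2-] ~ {ratio:.0f}")
```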

  8. Double proton transfer in the complex of acetic acid with methanol: Theory versus experiment

    NASA Astrophysics Data System (ADS)

    Fernández-Ramos, Antonio; Smedarchina, Zorka; Rodríguez-Otero, Jesús

    2001-01-01

    To test the approximate instanton approach to intermolecular proton-transfer dynamics, we report multidimensional ab initio bimolecular rate constants of HH, HD, and DD exchange in the complex of acetic acid with methanol in tetrahydrofuran-d8, and compare them with the NMR (nuclear magnetic resonance) experiments of Gerritzen and Limbach. The bimolecular rate constants are evaluated as products of the exchange rates and the equilibrium rate constants of complex formation in solution. The two molecules form hydrogen-bond bridges and the exchange occurs via concerted transfer of two protons. The dynamics of this transfer is evaluated in the complete space of 36 vibrational degrees of freedom. The geometries of the two isolated molecules, the complex, and the transition states corresponding to double proton transfer are fully optimized at QCISD (quadratic configuration interaction including single and double substitutions) level of theory, and the normal-mode frequencies are calculated at MP2 (Møller-Plesset perturbation theory of second order) level with the 6-31G (d,p) basis set. The presence of the solvent is taken into account via single-point calculations over the gas phase geometries with the PCM (polarized continuum model). The proton exchange rate constants, calculated with the instanton method, show the effect of the structure and strength of the hydrogen bonds, reflected in the coupling between the tunneling motion and the other vibrations of the complex. Comparison with experiment, which shows substantial kinetic isotopic effects (KIE), indicates that tunneling prevails over classic exchange for the whole temperature range of observation. The unusual behavior of the experimental KIE upon single and double deuterium substitution is well reproduced and is related to the synchronicity of two-atom tunneling.

  9. Analysis of cholera toxin-ganglioside interactions by flow cytometry.

    PubMed

    Lauer, Sabine; Goldstein, Byron; Nolan, Rhiannon L; Nolan, John P

    2002-02-12

    Cholera toxin entry into mammalian cells is mediated by binding of the pentameric B subunit (CTB) to ganglioside GM1 in the cell membrane. We used flow cytometry to quantitatively measure in real time the interactions of fluorescently labeled pentameric cholera toxin B-subunit (FITC-CTB) with its ganglioside receptor on microsphere-supported phospholipid membranes. A model that describes the multiple steps of this mode of recognition was developed to guide our flow cytometric experiments and extract relevant equilibrium and kinetic rate constants. In contrast to previous studies, our approach takes into account receptor cross-linking, an important feature for multivalent interactions. From equilibrium measurements, we determined an equilibrium binding constant for a single subunit of FITC-CTB binding monovalently to GM1 presented in bilayers of approximately 8 × 10⁷ M⁻¹ while that for binding to soluble GM1-pentasaccharide was found to be approximately 4 × 10⁶ M⁻¹. From kinetic measurements, we determined the rate constant for dissociation of a single site of FITC-CTB from microsphere-supported bilayers to be (3.21 ± 0.03) × 10⁻³ s⁻¹, and the rate of association of a site on FITC-CTB in solution to a GM1 in the bilayer to be (2.8 ± 0.4) × 10⁴ M⁻¹ s⁻¹. These values yield a lower estimate for the equilibrium binding constant of approximately 1 × 10⁷ M⁻¹. We determined the equilibrium surface cross-linking constant [(1.1 ± 0.1) × 10⁻¹² cm²] and from this value and the value for the rate constant for dissociation derived a value of approximately 3.5 × 10⁻¹⁵ cm² s⁻¹ for the forward rate constant for cross-linking. We also compared the interaction of the receptor binding B-subunit with that of the whole toxin (A- and B-subunits). Our results show that the whole toxin binds with approximately 100-fold higher avidity than the pentameric B-subunit alone which is most likely due to the additional interaction of the A2-subunit with the membrane surface. Interaction of cholera toxin B-subunit and whole cholera toxin with gangliosides other than GM1 revealed specific binding only to GD1b and asialo-GM1. These interactions, however, are marked by low avidity and require high receptor concentrations to be observed.
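    The quoted kinetic and equilibrium numbers are internally consistent; a quick check of the lower-bound equilibrium constant as the ratio of the reported on- and off-rates:

```python
# Consistency check: equilibrium constant from the reported kinetic rate constants.
k_off = 3.21e-3        # s^-1, dissociation of a single CTB site from the bilayer
k_on = 2.8e4           # M^-1 s^-1, association of a CTB site with membrane GM1
K_eq = k_on / k_off    # M^-1
print(f"K_eq ~ {K_eq:.1e} M^-1")   # ~8.7e6 M^-1, i.e. roughly the 1e7 M^-1 lower estimate quoted
```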

  10. Comparative Effectiveness Research in Oncology

    PubMed Central

    2013-01-01

    Although randomized controlled trials represent the gold standard for comparative effectiveness research (CER), a number of additional methods are available when randomized controlled trials are lacking or inconclusive because of the limitations of such trials. In addition to more relevant, efficient, and generalizable trials, there is a need for additional approaches utilizing rigorous methodology while fully recognizing their inherent limitations. CER is an important construct for defining and summarizing evidence on effectiveness and safety and comparing the value of competing strategies so that patients, providers, and policymakers can be offered appropriate recommendations for optimal patient care. Nevertheless, methodological as well as political and social challenges for CER remain. CER requires constant and sophisticated methodological oversight of study design and analysis similar to that required for randomized trials to reduce the potential for bias. At the same time, if appropriately conducted, CER offers an opportunity to identify the most effective and safe approach to patient care. Despite rising and unsustainable increases in health care costs, an even greater challenge to the implementation of CER arises from the social and political environment questioning the very motives and goals of CER. Oncologists and oncology professional societies are uniquely positioned to provide informed clinical and methodological expertise to steer the appropriate application of CER toward critical discussions related to health care costs, cost-effectiveness, and the comparative value of the available options for appropriate care of patients with cancer. PMID:23697601

  11. Environmental and historical imprints on beta diversity: insights from variation in rates of species turnover along gradients

    PubMed Central

    Fitzpatrick, Matthew C.; Sanders, Nathan J.; Normand, Signe; Svenning, Jens-Christian; Ferrier, Simon; Gove, Aaron D.; Dunn, Robert R.

    2013-01-01

    A common approach for analysing geographical variation in biodiversity involves using linear models to determine the rate at which species similarity declines with geographical or environmental distance and comparing this rate among regions, taxa or communities. Implicit in this approach are weakly justified assumptions that the rate of species turnover remains constant along gradients and that this rate can therefore serve as a means to compare ecological systems. We use generalized dissimilarity modelling, a novel method that accommodates variation in rates of species turnover along gradients and between different gradients, to compare environmental and spatial controls on the floras of two regions with contrasting evolutionary and climatic histories: southwest Australia and northern Europe. We find stronger signals of climate history in the northern European flora and demonstrate that variation in rates of species turnover is persistent across regions, taxa and different gradients. Such variation may represent an important but often overlooked component of biodiversity that complicates comparisons of distance–decay relationships and underscores the importance of using methods that accommodate the curvilinear relationships expected when modelling beta diversity. Determining how rates of species turnover vary along and between gradients is relevant to understanding the sensitivity of ecological systems to environmental change. PMID:23926147

  12. Theoretical microwave spectral constants for C2N, C2N/+/, and C3H

    NASA Technical Reports Server (NTRS)

    Green, S.

    1980-01-01

    Theoretical microwave spectral constants have been computed for C2N, C3H, and C2N(+). For C2N these are compared with values obtained from optical data. Calculated hyperfine constants are also presented for HNC, DNC, and HCNH(+). The possibility of observing these species in dense interstellar clouds is discussed.

  13. New constraints on time-dependent variations of fundamental constants using Planck data

    NASA Astrophysics Data System (ADS)

    Hart, Luke; Chluba, Jens

    2018-02-01

    Observations of the cosmic microwave background (CMB) today allow us to answer detailed questions about the properties of our Universe, targeting both standard and non-standard physics. In this paper, we study the effects of varying fundamental constants (i.e. the fine-structure constant, αEM, and electron rest mass, me) around last scattering using the recombination codes COSMOREC and RECFAST++. We approach the problem in a pedagogical manner, illustrating the importance of various effects on the free electron fraction, Thomson visibility function and CMB power spectra, highlighting various degeneracies. We demonstrate that the simpler RECFAST++ treatment (based on a three-level atom approach) can be used to accurately represent the full computation of COSMOREC. We also include explicit time-dependent variations using a phenomenological power-law description. We reproduce previous Planck 2013 results in our analysis. Assuming constant variations relative to the standard values, we find the improved constraints αEM/αEM, 0 = 0.9993 ± 0.0025 (CMB only) and me/me, 0 = 1.0039 ± 0.0074 (including BAO) using Planck 2015 data. For a redshift-dependent variation, αEM(z) = αEM(z0) [(1 + z)/1100]p with αEM(z0) ≡ αEM, 0 at z0 = 1100, we obtain p = 0.0008 ± 0.0025. Allowing simultaneous variations of αEM(z0) and p yields αEM(z0)/αEM, 0 = 0.9998 ± 0.0036 and p = 0.0006 ± 0.0036. We also discuss combined limits on αEM and me. Our analysis shows that existing data are not only sensitive to the value of the fundamental constants around recombination but also its first time derivative. This suggests that a wider class of varying fundamental constant models can be probed using the CMB.
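    The phenomenological redshift dependence quoted above is a one-line formula; a minimal sketch evaluating it, following the abstract's expression and its best-fit power-law index:

```python
# alpha_EM(z) = alpha_EM(z0) * [(1 + z) / 1100]^p, with the pivot z0 = 1100 (as in the abstract).
def alpha_ratio(z, p):
    """Return alpha_EM(z) / alpha_EM(z0) for the power-law variation model."""
    return ((1.0 + z) / 1100.0)**p

# With the quoted best-fit p = 0.0008 the variation around recombination is tiny:
for z in (200, 1100, 3000):
    print(z, alpha_ratio(z, p=0.0008))
```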

  14. Quantification aspects of constant pressure (ultra) high pressure liquid chromatography using mass-sensitive detectors with a nebulizing interface.

    PubMed

    Verstraeten, M; Broeckhoven, K; Lynen, F; Choikhet, K; Landt, K; Dittmann, M; Witt, K; Sandra, P; Desmet, G

    2013-01-25

    The present contribution investigates the quantitation aspects of mass-sensitive detectors with nebulizing interface (ESI-MSD, ELSD, CAD) in the constant pressure gradient elution mode. In this operation mode, the pressure is controlled and maintained at a set value and the liquid flow rate will vary according to the inverse mobile phase viscosity. As the pressure is continuously kept at the allowable maximum during the entire gradient run, the average liquid flow rate is higher compared to that in the conventional constant flow rate operation mode, thus shortening the analysis time. The following three mass-sensitive detectors were investigated: mass spectrometry detector (MS), evaporative light scattering detector (ELSD) and charged aerosol detector (CAD), and a wide variety of samples (phenones, polyaromatic hydrocarbons, wine, cocoa butter) has been considered. It was found that the nebulizing efficiency of the LC-interfaces of the three detectors under consideration changes with the increasing liquid flow rate. For the MS, the increasing flow rate leads to a lower peak area whereas for the ELSD the peak area increases compared to the constant flow rate mode. The peak area obtained with a CAD is rather insensitive to the liquid flow rate. The reproducibility of the peak area remains similar in both modes, although variation in system permeability compromises the 'long-term' reproducibility. This problem can however be overcome by running a flow rate program with an optimized flow rate and composition profile obtained from the constant pressure mode. In this case, the quantification remains reproducible despite any occurring variations of the system permeability. Furthermore, the same fragmentation pattern (MS) has been found in the constant pressure mode compared to the customary constant flow rate mode. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Comparing 2-nt 3' overhangs against blunt-ended siRNAs: a systems biology based study.

    PubMed

    Ghosh, Preetam; Dullea, Robert; Fischer, James E; Turi, Tom G; Sarver, Ronald W; Zhang, Chaoyang; Basu, Kalyan; Das, Sajal K; Poland, Bradley W

    2009-07-07

    In this study, we formulate a computational reaction model following a chemical kinetic theory approach to predict the binding rate constant for the siRNA-RISC complex formation reaction. The model allowed us to study the potency difference between 2-nt 3' overhangs against blunt-ended siRNA molecules in an RNA interference (RNAi) system. The rate constant predicted by this model was fed into a stochastic simulation of the RNAi system (using the Gillespie stochastic simulator) to study the overall potency effect. We observed that the stochasticity in the transcription/translation machinery has no observable effects in the RNAi pathway. Sustained gene silencing using siRNAs can be achieved only if there is a way to replenish the dsRNA molecules in the cell. Initial findings show about 1.5 times more blunt-ended molecules will be required to keep the mRNA at the same reduced level compared to the 2-nt overhang siRNAs. However, the mRNA levels jump back to saturation after a longer time when blunt-ended siRNAs are used. We found that the siRNA-RISC complex formation reaction rate was 2 times slower when blunt-ended molecules were used pointing to the fact that the presence of the 2-nt overhangs has a greater effect on the reaction in which the bound RISC complex cleaves the mRNA.
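    The study feeds its predicted binding rate constant into a Gillespie stochastic simulation of the RNAi pathway. A minimal, self-contained Gillespie sketch for a single bimolecular binding step is shown below; the species counts and the stochastic rate constant are illustrative placeholders, not the paper's values or its full reaction network.

```python
import math
import random

# Minimal Gillespie SSA for one bimolecular step: siRNA + RISC -> siRNA:RISC complex.
# Counts and the stochastic rate constant are illustrative placeholders.
def gillespie_binding(n_sirna=100, n_risc=50, c_bind=1e-3, t_end=100.0, seed=1):
    random.seed(seed)
    t, complexes = 0.0, 0
    while t < t_end and n_sirna > 0 and n_risc > 0:
        a = c_bind * n_sirna * n_risc                  # propensity of the binding reaction
        t += -math.log(1.0 - random.random()) / a      # exponentially distributed waiting time
        if t > t_end:
            break
        n_sirna, n_risc, complexes = n_sirna - 1, n_risc - 1, complexes + 1
    return complexes

print(gillespie_binding())   # number of siRNA-RISC complexes formed by t_end
```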

  16. Comparing 2-nt 3' overhangs against blunt-ended siRNAs: a systems biology based study

    PubMed Central

    Ghosh, Preetam; Dullea, Robert; Fischer, James E; Turi, Tom G; Sarver, Ronald W; Zhang, Chaoyang; Basu, Kalyan; Das, Sajal K; Poland, Bradley W

    2009-01-01

    In this study, we formulate a computational reaction model following a chemical kinetic theory approach to predict the binding rate constant for the siRNA-RISC complex formation reaction. The model allowed us to study the potency difference between 2-nt 3' overhangs against blunt-ended siRNA molecules in an RNA interference (RNAi) system. The rate constant predicted by this model was fed into a stochastic simulation of the RNAi system (using the Gillespie stochastic simulator) to study the overall potency effect. We observed that the stochasticity in the transcription/translation machinery has no observable effects in the RNAi pathway. Sustained gene silencing using siRNAs can be achieved only if there is a way to replenish the dsRNA molecules in the cell. Initial findings show about 1.5 times more blunt-ended molecules will be required to keep the mRNA at the same reduced level compared to the 2-nt overhang siRNAs. However, the mRNA levels jump back to saturation after a longer time when blunt-ended siRNAs are used. We found that the siRNA-RISC complex formation reaction rate was 2 times slower when blunt-ended molecules were used pointing to the fact that the presence of the 2-nt overhangs has a greater effect on the reaction in which the bound RISC complex cleaves the mRNA. PMID:19594876

  17. Diverse trends of electron correlation effects for properties with different radial and angular factors in an atomic system: a case study in Ca+

    NASA Astrophysics Data System (ADS)

    Kumar, Pradeep; Li, Cheng-Bin; Sahoo, B. K.

    2018-03-01

    Dependencies of electron correlation effects on the rank and radial behavior of spectroscopic properties are analyzed in the singly charged calcium ion (Ca+). To demonstrate these trends, we have determined field shift constants, magnetic dipole and electric quadrupole hyperfine structure constants, Landé gJ factors, and electric quadrupole moments that are described by electronic operators with different radial and angular factors. Radial dependencies are investigated by comparing correlation trends among the properties that have similar angular factors and vice versa. To highlight these observations, we present results from the mean-field approach to all orders along with intermediate contributions. Contributions from higher relativistic corrections are also given. These findings suggest that sometimes lower-order approximations can give results agreeing with the experimental results, but inclusion of some higher-order correlation effects can cause large disagreement with the experimental values. Therefore, the validity of a method for accurate evaluation of atomic properties can be tested by performing calculations of several properties simultaneously that have diverse dependencies on the angular and radial factors and comparing with the available experimental results. Nevertheless, it is imperative to include full triple and quadruple excitations in the all-order many-body methods for high-precision calculations, which are yet to be developed adopting a spherical coordinate system for atomic studies.

  18. On Atwood's Machine with a Nonzero Mass String

    NASA Astrophysics Data System (ADS)

    Tarnopolski, Mariusz

    2015-11-01

    Let us consider a classical high school exercise concerning two weights on a pulley and a string, illustrated in Fig. 1(a). A system like this is called an Atwood's machine and was invented by George Atwood in 1784 as a laboratory experiment to verify the mechanical laws of motion with constant acceleration. Nowadays, Atwood's machine is used for didactic purposes to demonstrate uniformly accelerated motion with acceleration arbitrarily smaller than the gravitational acceleration g. The simplest case is with a massless and frictionless pulley and a massless string. With little effort one can include the mass of the pulley in calculations. The mass of a string has been incorporated previously in some considerations and experiments. These include treatments focusing on friction, justifying the assumption of a massless string, incorporating variations in Earth's gravitational field, comparing the calculated value of g based on a simple experiment, taking the mass of the string into account in such a way that the resulting acceleration is constant, or in one exception solely focusing on a heavy string, but with a slightly different approach. Here we wish to provide i) a derivation of the acceleration and position dependence on the weights' masses based purely on basic dynamical reasoning similar to the conventional version of the exercise, and ii) focus on the influence of the string's linear density, or equivalently its mass, on the outcome compared to a massless string case.
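    A minimal sketch of the dynamics with a massive string and an ideal (massless, frictionless) pulley, neglecting the arc of string over the pulley: with x the length of string hanging on the side of m1 and λ the linear density, the whole system obeys (m1 + m2 + λL)·a = (m1 − m2 + λ(2x − L))·g, so the acceleration becomes position dependent. This is a standard textbook result stated here for illustration, not the paper's own derivation.

```python
g = 9.81  # m/s^2

def atwood_massive_string(m1, m2, lam, L, x):
    """Acceleration of m1 (positive downward) for an Atwood machine with a string of
    linear density lam and total length L, of which x hangs on the m1 side.
    Assumes an ideal pulley and a negligible arc of string over it."""
    return (m1 - m2 + lam * (2.0 * x - L)) * g / (m1 + m2 + lam * L)

# Massless-string limit reproduces the textbook result (m1 - m2) g / (m1 + m2):
print(atwood_massive_string(2.0, 1.0, 0.0, 1.0, 0.5))       # ~3.27 m/s^2
# With a heavy string the acceleration grows as m1 descends (x increases):
for x in (0.2, 0.5, 0.8):
    print(x, atwood_massive_string(2.0, 1.0, 0.5, 1.0, x))
```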

  19. Calculation of nuclear spin-spin coupling constants using frozen density embedding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Götz, Andreas W., E-mail: agoetz@sdsc.edu; Autschbach, Jochen; Visscher, Lucas, E-mail: visscher@chem.vu.nl

    2014-03-14

    We present a method for a subsystem-based calculation of indirect nuclear spin-spin coupling tensors within the framework of current-spin-density-functional theory. Our approach is based on the frozen-density embedding scheme within density-functional theory and extends a previously reported subsystem-based approach for the calculation of nuclear magnetic resonance shielding tensors to magnetic fields which couple not only to orbital but also spin degrees of freedom. This leads to a formulation in which the electron density, the induced paramagnetic current, and the induced spin-magnetization density are calculated separately for the individual subsystems. This is particularly useful for the inclusion of environmental effects in the calculation of nuclear spin-spin coupling constants. Neglecting the induced paramagnetic current and spin-magnetization density in the environment due to the magnetic moments of the coupled nuclei leads to a very efficient method in which the computationally expensive response calculation has to be performed only for the subsystem of interest. We show that this approach leads to very good results for the calculation of solvent-induced shifts of nuclear spin-spin coupling constants in hydrogen-bonded systems. Also for systems with stronger interactions, frozen-density embedding performs remarkably well, given the approximate nature of currently available functionals for the non-additive kinetic energy. As an example we show results for methylmercury halides which exhibit an exceptionally large shift of the one-bond coupling constants between ¹⁹⁹Hg and ¹³C upon coordination of dimethylsulfoxide solvent molecules.

  20. New analysis strategies for micro aspheric lens metrology

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon Abebe

    Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach for estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely, scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and compare the individual contributions to the overall uncertainty.
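    A minimal sketch of the best-fit-conic step under the usual plano-convex model, in which the sag of a conic surface of vertex radius R and conic constant k is z(r) = r² / (R·(1 + sqrt(1 − (1 + k)·r²/R²))). scipy's general least-squares routine stands in here for the dissertation's own fitting and Monte Carlo machinery, and all numbers are synthetic and illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def conic_sag(r, R, k):
    """Sag of a conic surface with vertex radius R and conic constant k."""
    return r**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + k) * r**2 / R**2)))

# Synthetic "measurement": a conic lens profile plus noise (illustrative values, in mm).
rng = np.random.default_rng(0)
r = np.linspace(0.0, 0.2, 200)
z_meas = conic_sag(r, R=0.35, k=-0.8) + rng.normal(0.0, 20e-6, r.size)

# Best-fit conic by least squares; the surface error is the residual after the fit.
(R_fit, k_fit), cov = curve_fit(conic_sag, r, z_meas, p0=(0.3, -1.0))
surface_error = z_meas - conic_sag(r, R_fit, k_fit)
print(R_fit, k_fit, surface_error.std())
```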

  1. Validating Whole-Airway CFD Predictions of DPI Aerosol Deposition at Multiple Flow Rates.

    PubMed

    Longest, P Worth; Tian, Geng; Khajeh-Hosseini-Dalasm, Navvab; Hindle, Michael

    2016-12-01

    The objective of this study was to compare aerosol deposition predictions of a new whole-airway CFD model with available in vivo data for a dry powder inhaler (DPI) considered across multiple inhalation waveforms, which affect both the particle size distribution (PSD) and particle deposition. The Novolizer DPI with a budesonide formulation was selected based on the availability of 2D gamma scintigraphy data in humans for three different well-defined inhalation waveforms. Initial in vitro cascade impaction experiments were conducted at multiple constant (square-wave) particle sizing flow rates to characterize PSDs. The whole-airway CFD modeling approach implemented the experimentally determined PSDs at the point of aerosol formation in the inhaler. Complete characteristic airway geometries for an adult were evaluated through the lobar bronchi, followed by stochastic individual pathway (SIP) approximations through the tracheobronchial region and new acinar moving wall models of the alveolar region. It was determined that the PSD used for each inhalation waveform should be based on a constant particle sizing flow rate equal to the average of the inhalation waveform's peak inspiratory flow rate (PIFR) and mean flow rate [i.e., AVG(PIFR, Mean)]. Using this technique, agreement with the in vivo data was acceptable with <15% relative differences averaged across the three regions considered for all inhalation waveforms. Defining a peripheral to central deposition ratio (P/C) based on alveolar and tracheobronchial compartments, respectively, large flow-rate-dependent differences were observed, which were not evident in the original 2D in vivo data. The agreement between the CFD predictions and in vivo data was dependent on accurate initial estimates of the PSD, emphasizing the need for a combination in vitro-in silico approach. Furthermore, use of the AVG(PIFR, Mean) value was identified as a potentially useful method for characterizing a DPI aerosol at a constant flow rate.

  2. Validating Whole-Airway CFD Predictions of DPI Aerosol Deposition at Multiple Flow Rates

    PubMed Central

    Tian, Geng; Khajeh-Hosseini-Dalasm, Navvab; Hindle, Michael

    2016-01-01

    Abstract Background: The objective of this study was to compare aerosol deposition predictions of a new whole-airway CFD model with available in vivo data for a dry powder inhaler (DPI) considered across multiple inhalation waveforms, which affect both the particle size distribution (PSD) and particle deposition. Methods: The Novolizer DPI with a budesonide formulation was selected based on the availability of 2D gamma scintigraphy data in humans for three different well-defined inhalation waveforms. Initial in vitro cascade impaction experiments were conducted at multiple constant (square-wave) particle sizing flow rates to characterize PSDs. The whole-airway CFD modeling approach implemented the experimentally determined PSDs at the point of aerosol formation in the inhaler. Complete characteristic airway geometries for an adult were evaluated through the lobar bronchi, followed by stochastic individual pathway (SIP) approximations through the tracheobronchial region and new acinar moving wall models of the alveolar region. Results: It was determined that the PSD used for each inhalation waveform should be based on a constant particle sizing flow rate equal to the average of the inhalation waveform's peak inspiratory flow rate (PIFR) and mean flow rate [i.e., AVG(PIFR, Mean)]. Using this technique, agreement with the in vivo data was acceptable with <15% relative differences averaged across the three regions considered for all inhalation waveforms. Defining a peripheral to central deposition ratio (P/C) based on alveolar and tracheobronchial compartments, respectively, large flow-rate-dependent differences were observed, which were not evident in the original 2D in vivo data. Conclusions: The agreement between the CFD predictions and in vivo data was dependent on accurate initial estimates of the PSD, emphasizing the need for a combination in vitro–in silico approach. Furthermore, use of the AVG(PIFR, Mean) value was identified as a potentially useful method for characterizing a DPI aerosol at a constant flow rate. PMID:27082824

  3. Biomolecular Interaction Analysis Using an Optical Surface Plasmon Resonance Biosensor: The Marquardt Algorithm vs Newton Iteration Algorithm

    PubMed Central

    Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan

    2015-01-01

    Kinetic analysis of biomolecular interactions is widely used to quantify the binding kinetic constants that determine how much complex is formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach to the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interactions. The binding affinity of the antibody to the antigen (or of the receptor to the ligand) reflects the biological activity of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor to the ligand are essential parameters for studying signal transmission between cells. Experimental data may lead to complicated real-time curves that do not fit well to the kinetic model. This paper presents an analysis approach to biomolecular interactions based on the Marquardt algorithm. The algorithm was implemented in a homemade bioanalyzer to perform nonlinear curve-fitting of the association and dissociation process of the receptor to the ligand. Compared with the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial value, avoiding divergence, but also greatly reduces the number of regression iterations. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally determined to be 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹ and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of an HBsAg solution with a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly using the data obtained from the curve-fitting results. PMID:26147997
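    A minimal sketch of the curve-fitting step: scipy.optimize.curve_fit uses a Levenberg-Marquardt solver for unbounded problems, and the 1:1 pseudo-first-order association model R(t) = Rmax·(ka·C/kobs)·(1 − exp(−kobs·t)) with kobs = ka·C + kd is the standard SPR kinetic model. The concentrations, constants and noise level below are illustrative, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit   # default (unbounded) method 'lm' is Levenberg-Marquardt

def association(t, ka, kd, C=1.0e-9, Rmax=100.0):
    """1:1 pseudo-first-order SPR association phase at analyte concentration C (M)."""
    kobs = ka * C + kd
    return Rmax * (ka * C / kobs) * (1.0 - np.exp(-kobs * t))

# Synthetic sensorgram with noise (illustrative parameters, not the paper's data).
rng = np.random.default_rng(42)
t = np.linspace(0.0, 600.0, 300)                      # s
y = association(t, ka=7e5, kd=7e-4) + rng.normal(0.0, 0.5, t.size)

(ka_fit, kd_fit), _ = curve_fit(association, t, y, p0=(1e5, 1e-3))
print(f"ka ~ {ka_fit:.2e} M^-1 s^-1, kd ~ {kd_fit:.2e} s^-1, KD ~ {kd_fit/ka_fit:.2e} M")
```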

  4. Organic solar cells based on high dielectric constant materials: An approach to increase efficiency

    NASA Astrophysics Data System (ADS)

    Hamam, Khalil Jumah Tawfiq

    The efficiency of organic solar cells still lags behind that of inorganic solar cells because of their low dielectric constant, which results in a weakly screened coulombic attraction between the photogenerated electron and hole, so the probability of charge separation is low. An organic material with a high dielectric constant could therefore yield separated charges or at least weakly bound electron-hole pairs. Accordingly, high dielectric constant materials have been investigated by measuring modified metal-phthalocyanine (MePc) and polyaniline in pellets and thin films. The dielectric constant was studied as a function of temperature and frequency in the range of 20 Hz to 1 MHz. For MePc we found that the high dielectric constant was an extrinsic property caused by water absorption and the formation of hydronium ions allowed by the ionization of functional groups such as sulphonated and carboxylic groups. The dielectric constant was high at low frequencies and decreased as the frequency increased. The investigated materials were applied in fabricated bilayer heterojunction organic solar cells. These devices showed significant stability under ambient conditions rather than an improvement in efficiency.
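    A rough numeric illustration of why the dielectric constant matters, using a standard point-charge estimate (not taken from the dissertation): the Coulomb binding energy of an electron-hole pair at a given separation scales as 1/εr, so raising εr weakens the pair. The separation and permittivity values below are assumed for illustration.

```python
from scipy.constants import e, epsilon_0, pi

def coulomb_binding_eV(eps_r, r_nm=1.0):
    """Coulomb attraction energy (eV) of an electron-hole pair separated by r_nm
    nanometres in a medium of relative permittivity eps_r (point-charge estimate)."""
    r = r_nm * 1e-9
    return e / (4.0 * pi * epsilon_0 * eps_r * r)   # e^2/(4 pi eps0 eps_r r), expressed in eV

for eps_r in (3.5, 10.0, 30.0):   # typical organic value vs. illustrative "high-k" values
    print(eps_r, round(coulomb_binding_eV(eps_r), 3), "eV")
```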

  5. PEDOT-CNT coated electrodes stimulate retinal neurons at low voltage amplitudes and low charge densities

    NASA Astrophysics Data System (ADS)

    Samba, R.; Herrmann, T.; Zeck, G.

    2015-02-01

    Objective. The aim of this study was to compare two different microelectrode materials, the conductive polymer composite poly-3,4-ethylenedioxythiophene (PEDOT)-carbon nanotube (CNT) and titanium nitride (TiN), at activating spikes in retinal ganglion cells in whole-mount rat retina through stimulation of the local retinal network. Stimulation efficacy of the microelectrodes was analyzed by comparing voltage, current and transferred charge at stimulation threshold. Approach. Retinal ganglion cell spikes were recorded by a central electrode (30 μm diameter) in the planar grid of an electrode array. Extracellular stimulation (monophasic, cathodic, 0.1-1.0 ms) of the retinal network was performed using constant voltage pulses applied to the eight surrounding electrodes. The stimulation electrodes were equally spaced on the four sides of a square (400 × 400 μm). Threshold voltage was determined as the pulse amplitude required to evoke network-mediated ganglion cell spiking in a defined post-stimulus time window in 50% of identical stimulus repetitions. For the two electrode materials threshold voltage, transferred charge at threshold, maximum current and the residual current at the end of the pulse were compared. Main results. Stimulation of retinal interneurons using PEDOT-CNT electrodes is achieved with lower stimulation voltage and requires lower charge transfer as compared to TiN. The key parameter for effective stimulation is a constant current over at least 0.5 ms, which is obtained by PEDOT-CNT electrodes at lower stimulation voltage due to their faradaic charge transfer mechanism. Significance. In neuroprosthetic implants, PEDOT-CNT may allow for smaller electrodes, effective stimulation in a safe voltage regime and lower energy consumption. Our study also indicates that the charge transferred at threshold, or the charge injection capacity per se, does not determine stimulation efficacy.

  6. Energy absorption by a magnetic nanoparticle suspension in a rotating field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raikher, Yu. L.; Stepanov, V. I., E-mail: stepanov@icmm.ru

    Heat generation by viscous dissipation in a dilute suspension of single-domain ferromagnetic particles in a rotating magnetic field is analyzed by assuming that the suspended particles have a high magnetic rigidity. The problem is solved by using a kinetic approach based on a rotational diffusion equation. Behavior of specific loss power (SLP) as a function of field strength H and frequency ω is examined at constant temperature. SLP increases as either of these parameters squared when the other is constant, eventually approaching a saturation value. The function SLP(H, ω) can be used to determine optimal and admissible ranges of magnetically induced heating.

  7. Nomadic concepts in the history of biology.

    PubMed

    Surman, Jan; Stráner, Katalin; Haslinger, Peter

    2014-12-01

    The history of scientific concepts has firmly settled among the instruments of historical inquiry. In our section we approach concepts from the perspective of nomadic concepts (Isabelle Stengers). Instead of following the evolution of concepts within one disciplinary network, we see them as subject to constant reification and change while crossing and turning across disciplines and non-scientific domains. This introduction argues that understanding modern biology is not possible without taking into account the constant transfers and translations that affected concepts. We argue that this approach does not only engage with nomadism between disciplines and non-scientific domains, but reflects on and involves the metaphoric value of concepts as well. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Mechanics of Constriction during Cell Division: A Variational Approach

    PubMed Central

    Almendro-Vedia, Victor G.; Monroy, Francisco; Cao, Francisco J.

    2013-01-01

    During symmetric division cells undergo large constriction deformations at a stable midcell site. Using a variational approach, we investigate the mechanical route for symmetric constriction by computing the bending energy of deformed vesicles with rotational symmetry. Forces required for constriction are explicitly computed at constant area and constant volume, and their values are found to be determined by cell size and bending modulus. For cell-sized vesicles, considering typical bending modulus of , we calculate constriction forces in the range . The instability of symmetrical constriction is shown and quantified with a characteristic coefficient of the order of , thus evidencing that cells need a robust mechanism to stabilize constriction at midcell. PMID:23990888
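    The bending energy minimized in such variational treatments of vesicle shapes is typically of Helfrich type; as a reference point, a minimal form of that functional is written out below, assuming zero spontaneous curvature and with the constant-area and constant-volume constraints enforced by Lagrange multipliers. The paper's exact functional and boundary conditions may differ.

```latex
% Helfrich-type bending energy of a closed membrane surface S with mean curvature H
% and bending modulus \kappa; \sigma and p are Lagrange multipliers enforcing the
% constant-area (A) and constant-volume (V) constraints:
E[S] = \frac{\kappa}{2}\oint_{S} (2H)^{2}\,\mathrm{d}A \;+\; \sigma A \;+\; p V
```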

  9. [Perception of approaching and withdrawing sound sources following exposure to broadband noise. The effect of spatial domain].

    PubMed

    Malinina, E S

    2014-01-01

    The spatial specificity of the auditory aftereffect was studied after a short adaptation (5 s) to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses with constant amplitude; test stimuli had either constant or changing amplitude: an increase in the amplitude of impulses in a sequence was perceived by listeners as approach of the sound source, while a decrease was perceived as its withdrawal. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located at a distance of 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The data showed that perception of the simulated movement of the sound source in both spatial domains had common characteristic features that manifested themselves both under control conditions without adaptation and after adaptation to noise. In the absence of adaptation, for both distances, an asymmetry of the psychophysical curves was observed: the listeners more often judged the test stimuli as approaching. This overestimation of test stimuli as approaching was more pronounced when they were presented from the distance of 1.1 m, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both spatial domains: they were observed only when the adapting and test stimuli coincided spatially and were absent when they were separated. The aftereffects observed in the two spatial domains were similar in direction and magnitude: the listeners more often judged the test stimuli as withdrawing compared to the control. The result of this aftereffect was restoration of the symmetry of the psychometric curves and of the equiprobable estimation of the direction of movement of the test signals.

  10. Measurement of Absolute Concentrations of Individual Compounds in Metabolite Mixtures by Gradient-Selective Time-Zero 1H-13C HSQC (gsHSQC0) with Two Concentration References and Fast Maximum Likelihood Reconstruction Analysis

    PubMed Central

    Hu, Kaifeng; Ellinger, James J.; Chylla, Roger A.; Markley, John L.

    2011-01-01

    Time-zero 2D 13C HSQC (HSQC0) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC0 spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero 1H-13C HSQC0 in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant time mode. Semi-automatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semi-automated gsHSQC0 with those obtained by the original manual phase-cycled HSQC0 approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture. PMID:22029275
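    A minimal sketch of the HSQC0 idea as described above: peak volumes from spectra acquired with 1, 2, 3 repetitions of the HSQC block decay geometrically, so a linear fit of log(volume) versus repetition number extrapolates to the time-zero volume, which the two internal references then convert to an absolute concentration. All volumes and reference concentrations below are illustrative.

```python
import numpy as np

def hsqc0_volume(n_reps, volumes):
    """Extrapolate HSQC peak volumes measured at n_reps = 1, 2, 3, ... repetitions
    of the HSQC block back to zero repetitions (log-linear extrapolation)."""
    slope, intercept = np.polyfit(n_reps, np.log(volumes), 1)
    return np.exp(intercept)              # volume at n = 0

n = np.array([1, 2, 3])
v_dss = np.array([9.0, 8.1, 7.3])         # reference 1 (known concentration, low end)
v_mes = np.array([90.0, 81.0, 73.0])      # reference 2 (known concentration, high end)
v_x   = np.array([30.0, 27.0, 24.3])      # analyte peak (illustrative volumes)

v0 = {name: hsqc0_volume(n, v) for name, v in
      {"DSS": v_dss, "MES": v_mes, "X": v_x}.items()}

# Two-point linear calibration anchored on the DSS and MES reference concentrations (mM).
c_dss, c_mes = 0.5, 50.0
a = (c_mes - c_dss) / (v0["MES"] - v0["DSS"])
b = c_dss - a * v0["DSS"]
print("estimated [X] (mM):", a * v0["X"] + b)
```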

  11. NMR shielding and spin–rotation constants of ¹⁷⁵LuX (X = ¹⁹F, ³⁵Cl, ⁷⁹Br, ¹²⁷I) molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demissie, Taye B.

    2015-12-31

    This presentation demonstrates the relativistic effects on the spin-rotation constants, absolute nuclear magnetic resonance (NMR) shielding constants and shielding spans of ¹⁷⁵LuX (X = ¹⁹F, ³⁵Cl, ⁷⁹Br, ¹²⁷I) molecules. The results are obtained from calculations performed using density functional theory (non-relativistic and four-component relativistic) and coupled-cluster calculations. The spin-rotation constants are compared with available experimental values. In most of the molecules studied, relativistic effects make an order of magnitude difference on the NMR absolute shielding constants.

  12. Pseudo-extravasation rate constant of dynamic susceptibility contrast-MRI determined from pharmacokinetic first principles.

    PubMed

    Li, Xin; Varallyay, Csanad G; Gahramanov, Seymur; Fu, Rongwei; Rooney, William D; Neuwelt, Edward A

    2017-11-01

    Dynamic susceptibility contrast-magnetic resonance imaging (DSC-MRI) is widely used to obtain informative perfusion imaging biomarkers, such as the relative cerebral blood volume (rCBV). The related post-processing software packages for DSC-MRI are available from major MRI instrument manufacturers and third-party vendors. One unique aspect of DSC-MRI with low-molecular-weight gadolinium (Gd)-based contrast reagent (CR) is that CR molecules leak into the interstitial space and therefore confound the DSC signal detected. Several approaches to correct this leakage effect have been proposed throughout the years. Amongst the most popular is the Boxerman-Schmainda-Weisskoff (BSW) K2 leakage correction approach, in which the K2 pseudo-first-order rate constant quantifies the leakage. In this work, we propose a new method for the BSW leakage correction approach. Based on the pharmacokinetic interpretation of the data, the commonly adopted R2* expression accounting for contributions from both intravascular and extravasating CR components is transformed using a method mathematically similar to Gjedde-Patlak linearization. Then, the leakage rate constant (KL) can be determined as the slope of the linear portion of a plot of the transformed data. Using the DSC data of high-molecular-weight (~750 kDa), iron-based, intravascular Ferumoxytol (FeO), the pharmacokinetic interpretation of the new paradigm is empirically validated. The primary objective of this work is to empirically demonstrate that a linear portion often exists in the graph of the transformed data. This linear portion provides a clear definition of the Gd CR pseudo-leakage rate constant, which equals the slope derived from the linear segment. A secondary objective is to demonstrate that transformed points from the initial transient period during the CR wash-in often deviate from the linear trend of the linearized graph. The inclusion of these points will have a negative impact on the accuracy of the leakage rate constant, and even make it time dependent. Copyright © 2017 John Wiley & Sons, Ltd.
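    A generic sketch of the Patlak-style step described above: once the data have been transformed, the pseudo-leakage rate constant is read off as the slope of the late, linear portion of the plot, excluding the early wash-in transient. The paper's specific ΔR2*-based transformation is not reproduced here; the arrays below are placeholders.

```python
import numpy as np

def late_slope(x, y, frac=0.5):
    """Fit a straight line to the late portion (last `frac` of the points) of a
    Patlak-style plot and return its slope, standing in here for K_L."""
    n0 = int(len(x) * (1.0 - frac))
    slope, intercept = np.polyfit(x[n0:], y[n0:], 1)
    return slope

# Placeholder transformed data: an early transient followed by a linear regime of slope 0.05.
x = np.linspace(0.0, 10.0, 100)
y = 0.05 * x + 0.2 * (1.0 - np.exp(-2.0 * x))
print("estimated pseudo-leakage rate constant:", late_slope(x, y))
```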

  13. Stress-stress fluctuation formula for elastic constants in the NPT ensemble

    NASA Astrophysics Data System (ADS)

    Lips, Dominik; Maass, Philipp

    2018-05-01

    Several fluctuation formulas are available for calculating elastic constants from equilibrium correlation functions in computer simulations, but the ones available for simulations at constant pressure exhibit slow convergence properties and cannot be used for the determination of local elastic constants. To overcome these drawbacks, we derive a stress-stress fluctuation formula in the NPT ensemble based on known expressions in the NVT ensemble. We validate the formula in the NPT ensemble by calculating elastic constants for the simple nearest-neighbor Lennard-Jones crystal and by comparing the results with those obtained in the NVT ensemble. For both local and bulk elastic constants we find an excellent agreement between the simulated data in the two ensembles. To demonstrate the usefulness of the formula, we apply it to determine the elastic constants of a simulated lipid bilayer.
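    A minimal sketch of the fluctuation ingredient common to such formulas: the covariance of two instantaneous stress components, scaled by V/(kB·T), estimated from a simulation time series. The Born and kinetic contributions, and the NPT-specific corrections derived in the paper, are not included; the stress series below is a synthetic placeholder.

```python
import numpy as np

def stress_fluctuation_term(sigma_ij, sigma_kl, volume, kT):
    """Fluctuation contribution -V/(k_B T) * ( <s_ij s_kl> - <s_ij><s_kl> ) to an
    elastic constant, from time series of two instantaneous stress components."""
    cov = np.mean(sigma_ij * sigma_kl) - np.mean(sigma_ij) * np.mean(sigma_kl)
    return -volume / kT * cov

# Placeholder stress time series (e.g. sampled from an MD run), in consistent reduced units.
rng = np.random.default_rng(3)
s_xx = rng.normal(0.0, 0.01, 10_000)
print(stress_fluctuation_term(s_xx, s_xx, volume=1000.0, kT=1.0))
```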

  14. The oceanic boundary layer driven by wave breaking with stochastic variability. Part 1. Direct numerical simulations

    NASA Astrophysics Data System (ADS)

    Sullivan, Peter P.; McWilliams, James C.; Melville, W. Kendall

    2004-05-01

    We devise a stochastic model for the effects of breaking waves and fit its distribution functions to laboratory and field data. This is used to represent the space-time structure of momentum and energy forcing of the oceanic boundary layer in turbulence-resolving simulations. The aptness of this breaker model is evaluated in a direct numerical simulation (DNS) of an otherwise quiescent fluid driven by an isolated breaking wave, and the results are in good agreement with laboratory measurements. The breaker model faithfully reproduces the bulk features of a breaking event: the mean kinetic energy decays at a rate approaching t⁻¹, and a long-lived vortex (eddy) is generated close to the water surface. The long lifetime of this vortex (more than 50 wave periods) makes it effective in energizing the surface region of oceanic boundary layers. Next, a comparison of several different DNS of idealized oceanic boundary layers driven by different surface forcing (i.e. constant current (as in Couette flow), constant stress, or a mixture of constant stress plus stochastic breakers) elucidates the importance of intermittent stress transmission to the underlying currents. A small amount of active breaking, about 1.6% of the total water surface area at any instant in time, significantly alters the instantaneous flow patterns as well as the ensemble statistics. Near the water surface a vigorous downwelling/upwelling pattern develops at the head and tail of each three-dimensional breaker. This enhances the vertical velocity variance and generates both negative- and positive-signed vertical momentum flux. Analysis of the mean velocity and scalar profiles shows that breaking effectively increases the surface roughness z_o by more than a factor of 30; for our simulations z_o/λ ≈ 0.04 to 0.06, where λ is the wavelength of the breaking wave. Compared to a flow driven by a constant current, the extra mixing from breakers increases the mean eddy viscosity by more than a factor of 10 near the water surface. Breaking waves alter the usual balance of production and dissipation in the turbulent kinetic energy (TKE) budget; turbulent and pressure transports and breaker work are important sources and sinks in the budget. We also show that turbulent boundary layers driven by constant current and constant stress (i.e. with no breaking) differ in fundamental ways. The additional freedom provided by a constant-stress boundary condition permits finite velocity variances at the water surface, so that flows driven by constant stress mimic flows with weakly and statistically homogeneous breaking waves.

  15. A Unique Technique to get Kaprekar Iteration in Linear Programming Problem

    NASA Astrophysics Data System (ADS)

    Sumathi, P.; Preethy, V.

    2018-04-01

    This paper explores a frivolous number popularly known as the Kaprekar constant, together with the Kaprekar numbers. A large number of courses and differing classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. An approach for obtaining, through linear programming techniques, the minimum and the maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is also presented.
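    For reference, the Kaprekar routine for four-digit numbers (digits not all equal) always converges to the constant 6174; a minimal sketch that counts the iterations whose minimum and maximum the abstract's optimisation targets:

```python
def kaprekar_steps(n):
    """Number of Kaprekar iterations needed for a four-digit number n
    (digits not all equal) to reach the Kaprekar constant 6174."""
    steps = 0
    while n != 6174:
        digits = f"{n:04d}"
        n = int("".join(sorted(digits, reverse=True))) - int("".join(sorted(digits)))
        steps += 1
    return steps

print(kaprekar_steps(3524))   # 3 iterations: 3087 -> 8352 -> 6174
# Maximum over all eligible four-digit numbers is 7:
print(max(kaprekar_steps(n) for n in range(1000, 10000) if len(set(f"{n:04d}")) > 1))
```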

  16. Formation of nitric acid hydrates - A chemical equilibrium approach

    NASA Technical Reports Server (NTRS)

    Smith, Roland H.

    1990-01-01

    Published data are used to calculate equilibrium constants for reactions of the formation of nitric acid hydrates over the temperature range 190 to 205 K. Standard enthalpies of formation and standard entropies are calculated for the tri- and mono-hydrates. These are shown to be in reasonable agreement with earlier calorimetric measurements. The formation of nitric acid trihydrate in the polar stratosphere is discussed in terms of these equilibrium constants.
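    The thermodynamic step described above follows directly from the van 't Hoff relation ln K = −ΔH°/(RT) + ΔS°/R, so a linear fit of ln K against 1/T yields the standard enthalpy and entropy. A minimal sketch with illustrative equilibrium constants (not the published values):

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

def vant_hoff_fit(T, K):
    """Linear fit of ln K vs 1/T: slope = -dH/R, intercept = dS/R."""
    slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
    return -slope * R, intercept * R      # (dH in J/mol, dS in J/(mol K))

# Illustrative K(T) values over 190-205 K (placeholders, not the paper's data).
T = np.array([190.0, 195.0, 200.0, 205.0])
K = np.array([3.2e8, 9.5e7, 3.0e7, 1.0e7])

dH, dS = vant_hoff_fit(T, K)
print(f"dH ~ {dH/1000:.1f} kJ/mol, dS ~ {dS:.1f} J/(mol K)")
```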

  17. Theory of relativistic Brownian motion in the presence of electromagnetic field in (1+1) dimension

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Annesh; Bandyopadhyay, M.; Bhamidipati, C.

    2018-04-01

    In this work, we consider the relativistic generalization of the theory of Brownian motion for the (1+1)-dimensional case, which is consistent with Einstein's special theory of relativity and reduces to standard Brownian motion in the Newtonian limit. All generalizations are made taking the special theory of relativity into account. The particle under consideration has a velocity close to the speed of light and is a free Brownian particle suspended in a heat bath. With this generalization, the velocity probability density functions are obtained using the Ito (pre-point), Stratonovich (mid-point) and Hanggi-Klimontovich (post-point) discretization rules. Subsequently, we obtain the relativistic Langevin equations in the presence of an electromagnetic field. Finally, for the special case of a constant vector potential and a constant electric field, the Langevin equations are solved for the momentum and subsequently the velocity of the particle. Using a similar approach with the Fokker-Planck equations of motion, the velocity distributions are also obtained in the presence of a constant vector potential and are plotted; they show essential deviations from the distribution obtained without a potential. Our constant-potential model can be realized in an optical potential.
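    A minimal numerical sketch of one common form of a (1+1)-dimensional relativistic Langevin equation with a constant electric field, integrated with the Euler-Maruyama (Ito, pre-point) rule. The momentum-proportional friction term, all parameter values and the choice of pre-point rule are illustrative assumptions, not the paper's specific equations; the paper itself compares the Ito, Stratonovich and Hanggi-Klimontovich discretizations.

```python
import math
import random

# dp = (-gamma*p + q*E) dt + sqrt(2 D) dW,  with  v = p c^2 / sqrt(p^2 c^2 + m^2 c^4),
# which keeps |v| < c at all times. Parameters are in arbitrary illustrative units.
def simulate(m=1.0, c=1.0, gamma=1.0, D=0.5, q=1.0, E=0.2, dt=1e-3, steps=20_000, seed=7):
    random.seed(seed)
    x, p = 0.0, 0.0
    for _ in range(steps):
        v = p * c**2 / math.sqrt(p**2 * c**2 + m**2 * c**4)
        x += v * dt
        p += (-gamma * p + q * E) * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
    return x, p, v

print(simulate())
```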

  18. Electric Machine with Boosted Inductance to Stabilize Current Control

    NASA Technical Reports Server (NTRS)

    Abel, Steve

    2013-01-01

    High-powered motors typically have very low resistance and inductance (R and L) in their windings. This makes the pulse-width modulated (PWM) control of the current very difficult, especially when the bus voltage (V) is high. These R and L values are dictated by the motor size, torque (Kt), and back-emf (Kb) constants. These constants are in turn set by the voltage and the actuation torque-speed requirements. This problem is often addressed by placing inductive chokes within the controller. This approach is undesirable in that space is taken and heat is added to the controller. By keeping the same motor frame, reducing the wire size, and placing a correspondingly larger number of turns in each slot, the resistance, inductance, torque constant, and back-emf constant are all increased. The increased inductance aids the current control but ruins the Kt and Kb selections. If, however, a fraction of the turns is moved from their "correct slot" to an "incorrect slot," the increased R and L values are retained, but the Kt and Kb values are restored to the desired values. This approach assumes that increased resistance is acceptable to a degree. In effect, the heat allocated to the added inductance has been moved from the controller to the motor body, which in some cases is preferred.

  19. Mechanical and Thermal Properties of Praseodymium Monopnictides: AN Ultrasonic Study

    NASA Astrophysics Data System (ADS)

    Bhalla, Vyoma; Kumar, Raj; Tripathy, Chinmayee; Singh, Devraj

    2013-09-01

    We have computed the ultrasonic attenuation, acoustic coupling constants and ultrasonic velocities of the praseodymium monopnictides PrX (X = N, P, As, Sb and Bi) along the <100>, <110> and <111> directions in the temperature range 100-500 K using higher-order elastic constants. The higher-order elastic constants are evaluated using Coulomb and Born-Mayer potentials with two basic parameters, viz. the nearest-neighbor distance and the hardness parameter, in the temperature range 0-500 K. Several other mechanical and thermal parameters, such as the bulk modulus, shear modulus, Young's modulus, Poisson ratio, anisotropy ratio, tetragonal moduli, Breazeale's nonlinearity parameter and Debye temperature, are also calculated. In the present study, the bulk-to-shear modulus (B/G) ratio is less than 1.75, which implies that the PrX compounds are brittle in nature at room temperature. The chosen materials fulfil the Born criterion of mechanical stability. We also found a deviation from Cauchy's relation at higher temperatures. PrN is the most stable material, as it has the highest higher-order elastic constants as well as the highest ultrasonic velocity. Further, the lattice thermal conductivity is determined at room temperature using the modified approach of Slack and Berman. The ultrasonic attenuation due to phonon-phonon interaction and thermoelastic relaxation mechanisms has been computed using a modified Mason's approach. These results, together with other well-known physical properties, are useful for industrial applications.
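    The brittleness call in the abstract rests on Pugh's empirical criterion: a bulk-to-shear modulus ratio B/G below about 1.75 indicates brittle behaviour, above it ductile. A one-line check with illustrative moduli (placeholders, not the computed PrX values):

```python
def pugh_classification(B, G, threshold=1.75):
    """Classify a material as brittle or ductile from its bulk (B) and shear (G)
    moduli using Pugh's empirical B/G criterion."""
    ratio = B / G
    return ratio, ("ductile" if ratio > threshold else "brittle")

# Illustrative moduli in GPa.
print(pugh_classification(B=150.0, G=95.0))    # ratio ~1.58 -> brittle
```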

  20. Role of endothelium sensitivity to shear stress in noradrenaline-induced constriction of feline femoral arterial bed under constant flow and constant pressure perfusions.

    PubMed

    Kartamyshev, Sergey P; Balashov, Sergey A; Melkumyants, Arthur M

    2007-01-01

    The effect of shear stress at the endothelium in the attenuation of the noradrenaline-induced constriction of the femoral vascular bed perfused at a constant blood flow was investigated in 16 anesthetized cats. It is known that the adrenergic vasoconstriction of the femoral vascular bed is considerably greater at a constant pressure perfusion than at a constant blood flow. This difference may depend on the ability of the endothelium to relax smooth muscle in response to an increase in wall shear stress. Since the shear stress is directly related to the blood flow and inversely related to the third power of vessel diameter, vasoconstriction at a constant blood flow increases the wall shear stress that is the stimulus for smooth muscle relaxation opposing constriction. On the other hand, at a constant perfusion pressure, vasoconstriction is accompanied by a decrease in flow rate, which prevents a wall shear stress increase. To reveal the effect of endothelial sensitivity to shear stress, we compared noradrenaline-induced changes in total and proximal arterial resistances during perfusion of the hind limb at a constant blood flow and at a constant pressure in vessels with intact and injured endothelium. We found that in the endothelium-intact bed the same concentration of noradrenaline at a constant flow caused an increase in overall vascular peripheral resistance that was half as large as at a constant perfusion pressure. This difference is mainly confined to the proximal arterial vessels (arteries and large arterioles) whose resistance at a constant flow increased only 0.19 +/- 0.03 times compared to that at a constant pressure. The removal of the endothelium only slightly increased constrictor responses at the perfusion under a constant pressure (noradrenaline-induced increases of both overall and proximal arterial resistance augmented by 12%), while the responses of the proximal vessels at a constant flow became 4.7 +/- 0.4 times greater than in the endothelium-intact bed. A selective blockage of endothelium sensitivity to shear stress using a glutaraldehyde dimer augmented the constrictor responses of the proximal vessels at a constant flow 4.6-fold (+/-0.3), but had no significant effect on the responses at a constant pressure. These results are consistent with the conclusion that the difference in constrictor responses at constant flow and pressure perfusions depends mainly on the smooth muscle relaxation caused by increased wall shear stress. Copyright (c) 2007 S. Karger AG, Basel.
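    The inverse-cube dependence invoked above is the Poiseuille wall shear stress, τ = 32 μ Q / (π d³). A minimal sketch (with illustrative, not measured, values) contrasting the two perfusion modes: at constant flow a constriction sharply raises τ, whereas at constant pressure the Poiseuille flow falls roughly as d⁴ and τ does not rise.

```python
from math import pi

def wall_shear_stress(Q, d, mu=3.5e-3):
    """Poiseuille wall shear stress (Pa) for volumetric flow Q (m^3/s) through a
    vessel of diameter d (m) with dynamic viscosity mu (Pa s)."""
    return 32.0 * mu * Q / (pi * d**3)

d0, Q0 = 1.0e-3, 5.0e-8            # illustrative baseline: 1 mm vessel, ~3 ml/min
d1 = 0.8 * d0                      # 20 % constriction

tau_baseline = wall_shear_stress(Q0, d0)
tau_const_flow = wall_shear_stress(Q0, d1)                    # flow held constant
tau_const_pressure = wall_shear_stress(Q0 * (d1/d0)**4, d1)   # Q scales as d^4 at fixed pressure
print(tau_baseline, tau_const_flow, tau_const_pressure)
```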

  1. Comparative assessment of smallholder sustainability using an agricultural sustainability framework and a yield based index insurance: A case study

    NASA Astrophysics Data System (ADS)

    Moshtaghi, Mehrdad; Adla, Soham; Pande, Saket; Disse, Markus; Savenije, Hubert

    2017-04-01

    The concept of sustainability is central to smallholder agriculture, as subsistence farming is constantly impacted by livelihood insecurity and is constrained by access to capital, water technology and alternative employment opportunities. This study compares two approaches which aim at quantifying smallholder sustainability but differ in their underlying principles, methodologies for assessment and reporting, and applications. Yield-index-based insurance can protect smallholder agriculture and move it towards greater economic sustainability, because the income of smallholders depends on selling crops and this insurance scheme is based on crop yields. In this research, the trigger of the insurance is set on the basis of yields in previous years. The crop yields are calculated every year through socio-hydrological modelling, and the smallholder receives an indemnity when the crop yield is lower than the average of the previous five years (a crop failure). The FAO Sustainability Assessment of Food and Agriculture (SAFA) is an inclusive and comprehensive framework for sustainability assessment in the food and agricultural sector. It follows the UN definition of the 4 dimensions of sustainability (good governance, environmental integrity, economic resilience and social well-being) and includes 21 themes and 58 sub-themes with a multi-indicator approach. The direct sustainability corresponding to the FAO SAFA economic resilience dimension is compared with the indirect notion of sustainability derived from the yield-based index insurance. A semi-synthetic comparison is conducted to understand the differences in the underlying principles, methodologies and application of the two approaches. Both approaches are applied to data from smallholder regions of Marathwada in Maharashtra (India), which experienced a severe rise in farmer suicides in the 2000s that has been attributed to a combination of socio-hydrological factors.
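
    A minimal sketch of the yield-index trigger logic described above (a payout when the simulated yield falls below the mean of the previous five years); the strike fraction and payout rate are illustrative parameters, not values from the study:

```python
import numpy as np

def yield_index_payouts(yields, strike_fraction=1.0, payout_per_unit=1.0):
    """Return a payout for each year after the first five.

    The trigger is the mean yield of the previous five years; an indemnity is
    paid in proportion to the shortfall below that trigger (a 'crop failure')."""
    payouts = []
    for t in range(5, len(yields)):
        trigger = strike_fraction * np.mean(yields[t - 5:t])
        shortfall = max(trigger - yields[t], 0.0)
        payouts.append(payout_per_unit * shortfall)
    return payouts

# Hypothetical yields (t/ha): the sixth year is a failure year
print(yield_index_payouts([2.1, 1.9, 2.3, 2.0, 1.8, 1.2, 2.2]))
```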

  2. Evaluation of ultrasound based sterilization approaches in terms of shelf life and quality parameters of fruit and vegetable juices.

    PubMed

    Khandpur, Paramjeet; Gogate, Parag R

    2016-03-01

    The present work evaluates the performance of ultrasound-based sterilization approaches for processing different fruit and vegetable juices in terms of microbial growth and changes in the quality parameters during storage. A comparison with conventional thermal processing is also presented. A novel approach based on the combination of ultrasound with ultraviolet irradiation and a crude extract of essential oil from orange peels has been used for the first time. The microbial growth (total bacteria and yeast content) in the juices during subsequent storage, the safety for human consumption and the changes in the quality parameters (Brix, titratable acidity, pH, ORP, salt, conductivity, TSS and TDS) have been investigated in detail. The optimized ultrasound parameters for juice sterilization were established as an ultrasound power of 100 W and a treatment time of 15 min for constant-frequency operation (20 kHz). It has been established that more than a 5 log reduction was achieved using the novel combined approaches based on ultrasound. The juices treated using the different ultrasound-based approaches also showed lower microbial growth and improved quality characteristics compared to the thermally processed juice. Scale-up studies were also performed using spinach juice as the test sample, with processing at 5 L volume for the first time. The ultrasound-treated juice satisfied the microbiological and physiochemical safety limits under refrigerated storage conditions for 20 days for the large-scale processing. Overall, the present work conclusively establishes the usefulness of combined treatment approaches based on ultrasound for maintaining the microbiological safety of beverages with enhanced shelf life and excellent quality parameters compared to untreated and thermally processed juices. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. A new look at the Feynman ‘hodograph’ approach to the Kepler first law

    NASA Astrophysics Data System (ADS)

    Cariñena, José F.; Rañada, Manuel F.; Santander, Mariano

    2016-03-01

    Hodographs for the Kepler problem are circles. This fact, known for almost two centuries, still provides the simplest path to derive the Kepler first law. Through Feynman’s ‘lost lecture’, this derivation has now reached a wider audience. Here we look again at Feynman’s approach to this problem, as well as the recently suggested modification by van Haandel and Heckman (vHH), with two aims in mind, both of which extend the scope of the approach. First we review the geometric constructions of the Feynman and vHH approaches (which prove the existence of elliptic orbits without making use of integral calculus or differential equations) and then extend the geometric approach to also cover the hyperbolic orbits (corresponding to E > 0). In the second part we analyse the properties of the director circles of the conics, which are used to simplify the approach, and relate them to the properties of the hodographs and to the Laplace-Runge-Lenz vector, the constant of motion specific to the Kepler problem. Finally, we briefly discuss the generalisation of the geometric method to the Kepler problem in configuration spaces of constant curvature, i.e. in the sphere and the hyperbolic plane.

  4. A novel method linking neural connectivity to behavioral fluctuations: Behavior-regressed connectivity.

    PubMed

    Passaro, Antony D; Vettel, Jean M; McDaniel, Jonathan; Lawhern, Vernon; Franaszczuk, Piotr J; Gordon, Stephen M

    2017-03-01

    During an experimental session, behavioral performance fluctuates, yet most neuroimaging analyses of functional connectivity derive a single connectivity pattern. These conventional connectivity approaches assume that since the underlying behavior of the task remains constant, the connectivity pattern is also constant. We introduce a novel method, behavior-regressed connectivity (BRC), to directly examine behavioral fluctuations within an experimental session and capture their relationship to changes in functional connectivity. This method employs the weighted phase lag index (WPLI) applied to a window of trials with a weighting function. Using two datasets, the BRC results are compared to conventional connectivity results during two time windows: the one second before stimulus onset to identify predictive relationships, and the one second after onset to capture task-dependent relationships. In both tasks, we replicate the expected results for the conventional connectivity analysis, and extend our understanding of the brain-behavior relationship using the BRC analysis, demonstrating subject-specific BRC maps that correspond to both positive and negative relationships with behavior. Comparison with Existing Method(s): Conventional connectivity analyses assume a consistent relationship between behaviors and functional connectivity, but the BRC method examines performance variability within an experimental session to understand dynamic connectivity and transient behavior. The BRC approach examines connectivity as it covaries with behavior to complement the knowledge of underlying neural activity derived from conventional connectivity analyses. Within this framework, BRC may be implemented for the purpose of understanding performance variability both within and between participants. Published by Elsevier B.V.
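
    For illustration, a minimal sketch of a behaviour-weighted phase lag index computed across a window of trials (this is a generic WPLI implementation, not the authors' code; the weighting function here is simply a per-trial behavioural weight supplied by the caller):

```python
import numpy as np

def weighted_wpli(x, y, weights):
    """Weighted phase lag index across trials.

    x, y    : arrays of shape (n_trials, n_samples), band-limited signals
    weights : non-negative per-trial weights (e.g. derived from behaviour)
    Returns one WPLI value per frequency bin."""
    Xf = np.fft.rfft(x, axis=1)
    Yf = np.fft.rfft(y, axis=1)
    imag_cross = np.imag(Xf * np.conj(Yf))        # Im of the cross-spectrum
    w = np.asarray(weights, dtype=float)[:, None]
    numerator = np.abs(np.sum(w * imag_cross, axis=0))
    denominator = np.sum(w * np.abs(imag_cross), axis=0) + 1e-12
    return numerator / denominator

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 256))
y = rng.standard_normal((20, 256))
print(weighted_wpli(x, y, rng.random(20))[:5])
```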

  5. Human health risk assessment: models for predicting the effective exposure duration of on-site receptors exposed to contaminated groundwater.

    PubMed

    Baciocchi, Renato; Berardi, Simona; Verginelli, Iason

    2010-09-15

    Clean-up of contaminated sites is usually based on a risk-based approach for the definition of the remediation goals, which relies on the well-known ASTM-RBCA standard procedure. In this procedure, migration of contaminants is described through simple analytical models and the source contaminants' concentration is assumed to be constant throughout the entire exposure period, i.e. 25-30 years. The latter assumption may often be over-protective of human health, leading to unrealistically low remediation goals. The aim of this work is to propose an alternative model taking into account source depletion, while keeping the original simplicity and analytical form of the ASTM-RBCA approach. The results obtained by the application of this model are compared with those provided by the traditional ASTM-RBCA approach, by a model based on the source depletion algorithm of the RBCA ToolKit software and by a numerical model, allowing its feasibility for inclusion in risk analysis procedures to be assessed. The results discussed in this work are limited to on-site exposure to contaminated water by ingestion, but the approach proposed can be extended to other exposure pathways. Copyright 2010 Elsevier B.V. All rights reserved.
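
    The contrast between the constant-source assumption and a depleting source can be sketched with a simple first-order decay (an illustrative stand-in, not the paper's or the RBCA ToolKit's actual algorithm):

```python
import numpy as np

C0 = 1.0            # initial source concentration (arbitrary units)
k = 0.15            # hypothetical first-order depletion rate, 1/year
years = 25          # ASTM-RBCA default exposure duration

t = np.linspace(0.0, years, 251)
constant_source = np.full_like(t, C0)
depleting_source = C0 * np.exp(-k * t)

# The time-averaged concentration drives the chronic risk estimate, so the
# constant-source assumption is the more conservative of the two.
print("constant source mean :", constant_source.mean())
print("depleting source mean:", np.trapz(depleting_source, t) / years)
```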

  6. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation and pension funding. Some well-known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table and the Japan Mortality Table. For actuarial applications, tables are constructed for different settings such as single decrement, double decrement and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article aims to discuss the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and the constant force (exponential) assumption. Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE); however, the complete mortality data are not used in the moment estimation method, whereas maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and are solved using numerical methods. The article focuses on single decrement estimation using moment and maximum likelihood estimation. An extension to double decrement is also introduced. A simple dataset is used to illustrate the mortality estimation and the resulting mortality table.
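
    A minimal sketch of the two single-decrement estimators discussed above, under the constant-force (exponential) assumption and a simple moment-style (actuarial exposure) estimate; the data are hypothetical and the article's exact formulations may differ:

```python
import math

def q_constant_force(deaths: int, central_exposure_years: float) -> float:
    """MLE under constant force: mu_hat = D / central exposure,
    q_hat = 1 - exp(-mu_hat)."""
    mu_hat = deaths / central_exposure_years
    return 1.0 - math.exp(-mu_hat)

def q_moment(deaths: int, initial_exposure: float) -> float:
    """Simple moment-style estimate: q_hat = D / initial exposure."""
    return deaths / initial_exposure

# Hypothetical data for one age interval
print(q_constant_force(deaths=12, central_exposure_years=980.5))
print(q_moment(deaths=12, initial_exposure=1000.0))
```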

  7. Power flow analysis of two coupled plates with arbitrary characteristics

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1990-01-01

    In the last progress report (Feb. 1988) some results were presented for a parametric analysis on the vibrational power flow between two coupled plate structures using the mobility power flow approach. The results reported then were for changes in the structural parameters of the two plates, but with the two plates identical in their structural characteristics. Herein, this limitation is removed. The vibrational power input and output are evaluated for different values of the structural damping loss factor for the source and receiver plates. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. The results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits derived from using the mobility power flow approach are examined.

  8. Hotspot detection using image pattern recognition based on higher-order local auto-correlation

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki

    2011-04-01

    Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stages of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs are printed on silicon without any error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repetitive redesign by this approach may result in missing the market window. This paper proposes a fast hotspot detection support method using flexible and intelligent image pattern recognition based on higher-order local autocorrelation (HLAC). Our method learns the geometrical properties of defect-free design data as normal patterns, and automatically detects design patterns with hotspots in the test data as abnormal patterns. The HLAC method extracts features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design pattern polygons. This approach can reduce turnaround time (TAT) dramatically on a single CPU compared with the conventional simulation-based approach, and with distributed processing it has proven to deliver linear scalability with each additional CPU.
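
    A minimal sketch of higher-order local autocorrelation (HLAC) feature extraction on a binarised layout image; only a few of the standard 3x3 masks are shown, and edge wrap-around is ignored for brevity (illustrative code, not the authors' implementation):

```python
import numpy as np

def hlac_features(img, masks):
    """Each mask is a list of (dy, dx) offsets including (0, 0); the feature is
    the image-wide sum of the product of the correspondingly shifted images,
    so the cost is O(pixels) regardless of the number of layout polygons."""
    feats = []
    for offsets in masks:
        prod = np.ones_like(img, dtype=float)
        for dy, dx in offsets:
            prod *= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        feats.append(prod.sum())
    return np.array(feats)

masks = [
    [(0, 0)],                     # 0th order
    [(0, 0), (0, 1)],             # 1st order, horizontal neighbour
    [(0, 0), (1, 0)],             # 1st order, vertical neighbour
    [(0, 0), (0, -1), (0, 1)],    # one 2nd-order example
]
layout = (np.random.default_rng(1).random((64, 64)) > 0.7).astype(float)
print(hlac_features(layout, masks))
```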

  9. Quasi-Classical Asymptotics for the Pauli Operator

    NASA Astrophysics Data System (ADS)

    Sobolev, Alexander V.

    We study the behaviour of the sums of the eigenvalues of the Pauli operator in a magnetic field and an electric field V(x) as the Planck constant ħ tends to zero and the magnetic field strength μ tends to infinity. We show that the sum obeys a natural Weyl-type formula with σ = (d - 2)/2 + γ and an explicit constant C_{γ,d}. If the field B has a constant direction, then this formula is uniform in μ >= 0. The method is based on Colin de Verdière's approach proposed in his work on "magnetic bottles" (Commun. Math. Phys. 105, 327-335 (1986)).

  10. High-level theoretical study of the reaction between hydroxyl and ammonia: Accurate rate constants from 200 to 2500 K

    DOE PAGES

    Nguyen, Thanh Lam; Stanton, John F.

    2017-06-02

    Hydrogen abstraction from NH3 by OH to produce H2O and NH2, an important reaction in the combustion of NH3 fuel, was studied with a theoretical approach that combines high-level quantum chemistry and advanced chemical kinetics methods. Thermal rate constants calculated from first principles agree well (within 5 to 20%) with available experimental data over a temperature range that extends from 200 to 2500 K. Here, quantum mechanical tunneling effects were found to be important; they lead to a decided curvature and non-Arrhenius behavior for the rate constant.
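
    Non-Arrhenius curvature of this kind is commonly captured with a modified Arrhenius form k(T) = A * T^n * exp(-Ea/(R*T)). A small fitting sketch with made-up rate constants (not the paper's computed values):

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314462618e-3  # kJ mol^-1 K^-1

def log_modified_arrhenius(T, logA, n, Ea_kJ):
    """ln k = ln A + n*ln T - Ea/(R*T); the T^n term captures the curvature."""
    return logA + n * np.log(T) - Ea_kJ / (R * T)

# Hypothetical rate constants (cm^3 molecule^-1 s^-1), NOT the paper's values
T = np.array([200.0, 300.0, 500.0, 1000.0, 1500.0, 2500.0])
k = np.array([1e-14, 5e-14, 4e-13, 6e-12, 3e-11, 2e-10])

popt, _ = curve_fit(log_modified_arrhenius, T, np.log(k), p0=(-35.0, 2.0, 5.0))
print("ln A, n, Ea (kJ/mol) =", popt)
```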

  11. Spatially resolved quantitative mapping of thermomechanical properties and phase transition temperatures using scanning probe microscopy

    DOEpatents

    Jesse, Stephen; Kalinin, Sergei V; Nikiforov, Maxim P

    2013-07-09

    An approach for the thermomechanical characterization of phase transitions in polymeric materials (polyethyleneterephthalate) by band excitation acoustic force microscopy is developed. This methodology allows the independent measurement of resonance frequency, Q factor, and oscillation amplitude of a tip-surface contact area as a function of tip temperature, from which the thermal evolution of tip-surface spring constant and mechanical dissipation can be extracted. A heating protocol maintained a constant tip-surface contact area and constant contact force, thereby allowing for reproducible measurements and quantitative extraction of material properties including temperature dependence of indentation-based elastic and loss moduli.

  12. Gladstone-Dale constant for CF4. [experimental design

    NASA Technical Reports Server (NTRS)

    Burner, A. W., Jr.; Goad, W. K.

    1980-01-01

    The Gladstone-Dale constant, which relates the refractive index to density, was measured for CF4 by counting fringes of a two-beam interferometer, one beam of which passes through a cell containing the test gas. The experimental approach and sources of systematic and imprecision errors are discussed. The constant for CF4 was measured at several wavelengths in the visible region of the spectrum. A value of 0.122 cu cm/g with an uncertainty of plus or minus 0.001 cu cm/g was determined for use in the visible region. A procedure for noting the departure of the gas density from the ideal-gas law is discussed.
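
    The Gladstone-Dale relation is n - 1 = K * rho, and in a two-beam interferometer filling the cell shifts the fringe count by N = (n - 1) L / lambda. A small sketch with hypothetical numbers (not the reported measurement data):

```python
def gladstone_dale_constant(fringes, wavelength_cm, cell_length_cm, density_g_cm3):
    """K = (n - 1) / rho with (n - 1) = N * lambda / L from fringe counting."""
    n_minus_1 = fringes * wavelength_cm / cell_length_cm
    return n_minus_1 / density_g_cm3

# Illustrative only: 100 fringes at 632.8 nm over a 10 cm cell, CF4-like density
print(gladstone_dale_constant(100, 632.8e-7, 10.0, 3.93e-3), "cm^3/g")
```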

  13. Estimation of thermodynamic acidity constants of some penicillinase-resistant penicillins.

    PubMed

    Demiralay, Ebru Çubuk; Üstün, Zehra; Daldal, Y Doğan

    2014-03-01

    In this work, thermodynamic acidity constants (pKa values in the hydro-organic medium) of methicillin, oxacillin, nafcillin, cloxacillin and dicloxacillin were determined with a reversed-phase liquid chromatography (RPLC) method, taking into account the effect of the activity coefficients in hydro-organic water-acetonitrile binary mixtures. From these values, the thermodynamic aqueous acidity constants of these drugs were calculated by different approaches. The linear relationships established between the retention factors of the species and the polarity parameter of the mobile phase (ET(N)) were shown to predict retention in LC accurately as a function of the acetonitrile content (38%, 40% and 42%, v/v). Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Theoretical lower bounds for parallel pipelined shift-and-add constant multiplications with n-input arithmetic operators

    NASA Astrophysics Data System (ADS)

    Cruz Jiménez, Miriam Guadalupe; Meyer Baese, Uwe; Jovanovic Dolecek, Gordana

    2017-12-01

    New theoretical lower bounds for the number of operators needed in fixed-point constant multiplication blocks are presented. The multipliers are constructed with the shift-and-add approach, where every arithmetic operation is pipelined, and with the generalization that n-input pipelined additions/subtractions are allowed, along with pure pipelining registers. These lower bounds, tighter than the state-of-the-art theoretical limits, are particularly useful in early design stages for a quick assessment in the hardware utilization of low-cost constant multiplication blocks implemented in the newest families of field programmable gate array (FPGA) integrated circuits.
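
    The shift-and-add decomposition that these bounds constrain can be sketched with a canonical signed-digit (CSD/NAF) recoding of the constant; the operator count printed below is only the naive upper bound for two-input adders, not the paper's n-input pipelined bound:

```python
def csd_digits(c: int):
    """Canonical signed-digit (non-adjacent form) recoding of a positive
    integer; digits in {-1, 0, +1}, least-significant first."""
    digits = []
    while c:
        if c & 1:
            d = 2 - (c & 3)      # +1 if c % 4 == 1, -1 if c % 4 == 3
            digits.append(d)
            c -= d
        else:
            digits.append(0)
        c >>= 1
    return digits

def shift_add_terms(c: int):
    """x*c as a sum/difference of shifted copies of x: list of (sign, shift)."""
    return [(d, shift) for shift, d in enumerate(csd_digits(c)) if d]

terms = shift_add_terms(2859)    # hypothetical filter coefficient
print(terms, "->", len(terms) - 1, "two-input adders/subtractors (naive bound)")
```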

  15. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

    A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming that independent measurements of force and separation are subject to error. The comparisons demonstrate that not only is the proposed formula easily implemented, it is also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Simulations of four-dimensional simplicial quantum gravity as dynamical triangulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agishtein, M.E.; Migdal, A.A.

    1992-04-20

    In this paper, Four-Dimensional Simplicial Quantum Gravity is simulated using the dynamical triangulation approach. The authors studied simplicial manifolds of spherical topology and found the critical line for the cosmological constant as a function of the gravitational one, separating the phases of open and closed Universe. When the bare cosmological constant approaches this line from above, the four-volume grows: the authors reached about 5 × 10^4 simplexes, which proved to be sufficient for the statistical limit of infinite volume. However, for the genuine continuum theory of gravity, the parameters of the lattice model should be further adjusted to reach the second-order phase transition point, where the correlation length grows to infinity. The authors varied the gravitational constant, and they found a first-order phase transition, similar to the one found in the three-dimensional model, except that in 4D the fluctuations are rather large at the transition point, so that it is close to a second-order phase transition. The average curvature in cutoff units is large and positive in one phase (gravity), and small and negative in the other (antigravity). The authors studied the fractal geometry of both phases, using the heavy particle propagator to define the geodesic map, as well as with the old approach using the shortest lattice paths.

  17. Touchless attitude correction for satellite with constant magnetic moment

    NASA Astrophysics Data System (ADS)

    Ao, Hou-jun; Yang, Le-ping; Zhu, Yan-wei; Zhang, Yuan-wen; Huang, Huan

    2017-09-01

    Rescue of a satellite with an attitude fault is of great value. A satellite with an improper injection attitude may lose contact with the ground as the antenna points in the wrong direction, or encounter energy problems as the solar arrays do not face the sun. An improper uploaded command may put the attitude out of control, as exemplified by the Japanese Hitomi spacecraft. In engineering practice, traditional physical-contact approaches have been applied, yet with a potential risk of collision and a lack of versatility, since the mechanical systems are mission-specific. This paper puts forward a touchless attitude correction approach in which three satellites are considered, one having a constant dipole and two having magnetic coils to control the attitude of the first. Particular correction configurations are designed and analyzed to maintain the target's orbit during the attitude correction process. A reference coordinate system is introduced to simplify the control process and avoid the singular-value problem of Euler angles. Based on basic spherical-triangle relations, the accurate, varying geomagnetic field is considered in the attitude dynamics model. A sliding mode control method is utilized to design the correction law. Finally, numerical simulation is conducted to verify the theoretical derivation. It can be safely concluded that the no-contact attitude correction approach for a satellite with a uniaxial constant magnetic moment is feasible and potentially applicable to on-orbit operations.

  18. Noninvasive determination of optical lever sensitivity in atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Higgins, M. J.; Proksch, R.; Sader, J. E.; Polcik, M.; Mc Endoo, S.; Cleveland, J. P.; Jarvis, S. P.

    2006-01-01

    Atomic force microscopes typically require knowledge of the cantilever spring constant and optical lever sensitivity in order to accurately determine the force from the cantilever deflection. In this study, we investigate a technique to calibrate the optical lever sensitivity of rectangular cantilevers that does not require contact to be made with a surface. This noncontact approach utilizes the method of Sader et al. [Rev. Sci. Instrum. 70, 3967 (1999)] to calibrate the spring constant of the cantilever in combination with the equipartition theorem [J. L. Hutter and J. Bechhoefer, Rev. Sci. Instrum. 64, 1868 (1993)] to determine the optical lever sensitivity. A comparison is presented between sensitivity values obtained from conventional static mode force curves and those derived using this noncontact approach for a range of different cantilevers in air and liquid. These measurements indicate that the method offers a quick, alternative approach for the calibration of the optical lever sensitivity.
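
    The core of the noncontact calibration can be sketched as follows: a Sader-type spring constant plus the equipartition theorem, k * <z^2> = kB * T, converts the measured thermal deflection variance (in volts) into a sensitivity in length per volt. This is only the basic idea with hypothetical numbers; the mode-shape and other correction factors used in practice are omitted:

```python
import math

kB = 1.380649e-23  # J/K

def optical_lever_sensitivity(k_N_per_m, thermal_variance_V2, temperature_K=300.0):
    """Sensitivity in m/V from <z^2> = kB*T/k and <V^2> measured thermally."""
    z2 = kB * temperature_K / k_N_per_m          # expected deflection variance, m^2
    return math.sqrt(z2 / thermal_variance_V2)   # m per volt

# Hypothetical soft cantilever: k = 0.1 N/m, thermal voltage variance 1e-6 V^2
print(optical_lever_sensitivity(0.1, 1e-6) * 1e9, "nm/V")
```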

  19. Computational Approaches to the Chemical Equilibrium Constant in Protein-ligand Binding.

    PubMed

    Montalvo-Acosta, Joel José; Cecchini, Marco

    2016-12-01

    The physiological role played by protein-ligand recognition has motivated the development of several computational approaches to the ligand binding affinity. Some of them, termed rigorous, have a strong theoretical foundation but involve too much computation to be generally useful. Some others alleviate the computational burden by introducing strong approximations and/or empirical calibrations, which also limit their general use. Most importantly, there is no straightforward correlation between the predictive power and the level of approximation introduced. Here, we present a general framework for the quantitative interpretation of protein-ligand binding based on statistical mechanics. Within this framework, we re-derive self-consistently the fundamental equations of some popular approaches to the binding constant and pinpoint the inherent approximations. Our analysis represents a first step towards the development of variants with optimum accuracy/efficiency ratio for each stage of the drug discovery pipeline. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A novel modeling approach to the mixing process in twin-screw extruders

    NASA Astrophysics Data System (ADS)

    Kennedy, Amedu Osaighe; Penlington, Roger; Busawon, Krishna; Morgan, Andy

    2014-05-01

    In this paper, a theoretical model for the mixing process in a self-wiping co-rotating twin-screw extruder is proposed, combining statistical techniques and mechanistic modelling. The approach was to examine the mixing process in the local zones via the residence time distribution and the flow dynamics, from which predictive models of the mean residence time and mean time delay were determined. An increase in feed rate at constant screw speed was found to narrow the residence time distribution curve, reduce the mean residence time and time delay, and increase the degree of fill. An increase in screw speed at constant feed rate was found to narrow the residence time distribution curve and decrease the degree of fill in the extruder, and thus increase the time delay. An experimental investigation was also performed to validate the modelling approach.

  1. Clinical decision-making and therapeutic approaches in osteopathy - a qualitative grounded theory study.

    PubMed

    Thomson, Oliver P; Petty, Nicola J; Moore, Ann P

    2014-02-01

    There is limited understanding of how osteopaths make decisions in relation to clinical practice. The aim of this research was to construct an explanatory theory of the clinical decision-making and therapeutic approaches of experienced osteopaths in the UK. Twelve UK registered osteopaths participated in this constructivist grounded theory qualitative study. Purposive and theoretical sampling was used to select participants. Data was collected using semi-structured interviews which were audio-recorded and transcribed. As the study approached theoretical sufficiency, participants were observed and video-recorded during a patient appointment, which was followed by a video-prompted interview. Constant comparative analysis was used to analyse and code data. Data analysis resulted in the construction of three qualitatively different therapeutic approaches which characterised participants and their clinical practice, termed; Treater, Communicator and Educator. Participants' therapeutic approach influenced their approach to clinical decision-making, the level of patient involvement, their interaction with patients, and therapeutic goals. Participants' overall conception of practice lay on a continuum ranging from technical rationality to professional artistry, and contributed to their therapeutic approach. A range of factors were identified which influenced participants' conception of practice. The findings indicate that there is variation in osteopaths' therapeutic approaches to practice and clinical decision-making, which are influenced by their overall conception of practice. This study provides the first explanatory theory of the clinical decision-making and therapeutic approaches of osteopaths. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. The Science ELF: Assessing the enquiry levels framework as a heuristic for professional development

    NASA Astrophysics Data System (ADS)

    Wheeler, Lindsay B.; Bell, Randy L.; Whitworth, Brooke A.; Maeng, Jennifer L.

    2015-01-01

    This study utilized an explanatory sequential mixed methods approach to explore randomly assigned treatment and control participants' frequency of inquiry instruction in secondary science classrooms. Eleven treatment participants received professional development (PD) that emphasized a structured approach to inquiry instruction, while 10 control participants received no PD. Two representative treatment participants were interviewed and observed to provide an in-depth understanding of inquiry instruction and factors affecting implementation. Paired t-tests were used to analyze quantitative data from observation forms, and a constant comparative approach was used to analyze qualitative data from surveys, interviews, purposeful observations and artifacts. Results indicated that treatment participants implemented inquiry significantly more frequently than control participants (p < .01). Two treatment participants' instruction revealed that both used a similar structure of inquiry but employed different types of interactions and emphasized different scientific practices. These differences may be explained by the participants' understandings of and beliefs about inquiry and structuring inquiry. The present study has the potential to inform how methods of structuring inquiry instruction and teaching scientific practices are addressed in teacher preparation.

  3. Evaluation of automated sample preparation, retention time locked gas chromatography-mass spectrometry and data analysis methods for the metabolomic study of Arabidopsis species.

    PubMed

    Gu, Qun; David, Frank; Lynen, Frédéric; Rumpel, Klaus; Dugardeyn, Jasper; Van Der Straeten, Dominique; Xu, Guowang; Sandra, Pat

    2011-05-27

    In this paper, automated sample preparation, retention time locked gas chromatography-mass spectrometry (GC-MS) and data analysis methods for metabolomics studies were evaluated. A miniaturized and automated derivatisation method using sequential oximation and silylation was applied to a polar extract of 4 types (2 types × 2 ages) of Arabidopsis thaliana, a popular model organism often used in plant sciences and genetics. Automation of the derivatisation process offers excellent repeatability, and the time between sample preparation and analysis was short and constant, reducing artifact formation. Retention time locked (RTL) gas chromatography-mass spectrometry was used, resulting in reproducible retention times and GC-MS profiles. Two approaches were used for data analysis. XCMS followed by principal component analysis (approach 1) and AMDIS deconvolution combined with a commercially available program (Mass Profiler Professional) followed by principal component analysis (approach 2) were compared. Several features that were up- or down-regulated in the different types were detected. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. The long reads ahead: de novo genome assembly using the MinION

    PubMed Central

    de Lannoy, Carlos; de Ridder, Dick; Risse, Judith

    2017-01-01

    Nanopore technology provides a novel approach to DNA sequencing that yields long, label-free reads of constant quality. The first commercial implementation of this approach, the MinION, has shown promise in various sequencing applications. This review gives an up-to-date overview of the MinION's utility as a de novo sequencing device. It is argued that the MinION may allow for portable and affordable de novo sequencing of even complex genomes in the near future, despite the currently error-prone nature of its reads. Through continuous updates to the MinION hardware and the development of new assembly pipelines, both sequencing accuracy and assembly quality have already risen rapidly. However, this fast pace of development has also led to a lack of overview of the expanding landscape of analysis tools, as performance evaluations quickly become outdated. As the MinION is approaching a state of maturity, its user community would benefit from a thorough comparative benchmarking effort of de novo assembly pipelines in the near future. An earlier version of this article can be found on bioRxiv. PMID:29375809

  5. Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach.

    PubMed

    Lam, H K

    2012-02-01

    This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, it is more challenging for the controller design and system analysis compared to the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which either share the same number of fuzzy rules or not, are considered. The system stability is investigated based on the Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.

  6. Power flow analysis of two coupled plates with arbitrary characteristics

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1988-01-01

    The limitation of keeping the two plates identical is removed, and the vibrational power input and output are evaluated for different area ratios, plate thickness ratios, and values of the structural damping loss factor for the source plate (the plate with excitation) and the receiver plate. In performing this parametric analysis, the source plate characteristics are kept constant. The purpose of this parametric analysis is to determine the most critical parameters that influence the flow of vibrational power from the source plate to the receiver plate. In the case of the structural damping parametric analysis, the influence of changes in the source plate damping is also investigated. As was done previously, results obtained from the mobility power flow approach are compared to results obtained using a statistical energy analysis (SEA) approach. The significance of the power flow results is discussed, together with a comparison between the SEA results and the mobility power flow results. Furthermore, the benefits that can be derived from using the mobility power flow approach are also examined.

  7. An experimental comparison of several current viscoplastic constitutive models at elevated temperature

    NASA Technical Reports Server (NTRS)

    James, G. H.; Imbrie, P. K.; Hill, P. S.; Allen, D. H.; Haisler, W. E.

    1988-01-01

    Four current viscoplastic models are compared experimentally for Inconel 718 at 593 C. This material system responds with apparent negative strain rate sensitivity, undergoes cyclic work softening, and is susceptible to low cycle fatigue. A series of tests were performed to create a data base from which to evaluate material constants. A method to evaluate the constants is developed which draws on common assumptions for this type of material, recent advances by other researchers, and iterative techniques. A complex history test, not used in calculating the constants, is then used to compare the predictive capabilities of the models. The combination of exponentially based inelastic strain rate equations and dynamic recovery is shown to model this material system with the greatest success. The method of constant calculation developed was successfully applied to the complex material response encountered. Backstress measuring tests were found to be invaluable and to warrant further development.

  8. Generalization of Solovev’s approach to finding equilibrium solutions for axisymmetric plasmas with flow

    NASA Astrophysics Data System (ADS)

    Chu, M. S.; Hu, Yemin; Guo, Wenfeng

    2018-03-01

    Solovev’s approach of finding equilibrium solutions was found to be extremely useful for generating a library of linear-superposable equilibria for the purpose of shaping studies. This set of solutions was subsequently expanded to include the vacuum solutions of Zheng, Wootton and Solano, resulting in a set of functions {SOLOVEV_ZWS} that are usually used for all toroidally symmetric plasmas, commonly recognized as being able to accommodate any desired plasma shapes (complete-shaping capability). The possibility of extending the Solovev approach to toroidal equilibria with a general plasma flow is examined theoretically. We found that the only meaningful extension is to plasmas with a pure toroidal rotation and with a constant Mach number. We also show that the simplification ansatz made to the current profiles, which was the basis of the Solovev approach, should be applied more systematically to include an internal boundary condition at the magnetic axis, resulting in a modified and more useful set {SOLOVEV_ZWSm}. Explicit expressions of functions in this set are given for equilibria with a quasi-constant current density profile, with a toroidal flow at a constant Mach number and with specific heat capacity 1. The properties of {SOLOVEV_ZWSm} are studied analytically. Numerical examples of achievable equilibria are demonstrated. Although the shaping capability of the set {SOLOVEV_ZWSm} is quite extensive, it nevertheless still does not have complete shaping capability, particularly for plasmas with negative curvature points on the plasma boundary such as the doublets or indented bean shaped tokamaks.

  9. The READY program: Building a global potential energy surface and reactive dynamic simulations for the hydrogen combustion.

    PubMed

    Mogo, César; Brandão, João

    2014-06-30

    READY (REActive DYnamics) is a program for studying reactive dynamic systems using a global potential energy surface (PES) built from previously existing PESs corresponding to each of the most important elementary reactions present in the system. We present an application to the combustion dynamics of a mixture of hydrogen and oxygen using accurate PESs for all the systems involving up to four oxygen and hydrogen atoms. Results at a temperature of 4000 K and a pressure of 2 atm are presented and compared with a model based on rate constants. Drawbacks and advantages of this approach are discussed and future directions of research are pointed out. Copyright © 2014 Wiley Periodicals, Inc.

  10. Comparison of diffusion length measurements from the Flying Spot Technique and the photocarrier grating method in amorphous thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vieira, M.; Fantoni, A.; Martins, R.

    1994-12-31

    Using the Flying Spot Technique (FST), the authors studied minority-carrier transport parallel and perpendicular to the surface of amorphous silicon (a-Si:H) films. To reduce slow transients due to charge redistribution in low-resistivity regions during the measurement, they applied a strong, homogeneously absorbed bias light. The defect density was estimated from Constant Photocurrent Method (CPM) measurements. The steady-state photocarrier grating (SSPG) technique is a one-dimensional approach; however, the modulation depth of the carrier profile also depends on film surface properties, such as the surface recombination velocity. Both methods yield comparable diffusion lengths when applied to a-Si:H.

  11. Left behind in the return-to-work journey: consumer insights for policy change and practice strategies.

    PubMed

    Korzycki, Monica; Korzycki, Martha; Shaw, Lynn

    2008-01-01

    This study examined system barriers that precluded injured workers from accessing services and supports in the return-to-work (RTW) process. A grounded theory approach was used to investigate injured worker experiences. Methods included in-depth telephone interviews and the constant comparative method to analyze the data. Findings revealed that consumers experienced tensions or a tug-of-war between the RTW system, the health care system, and in accessing and using knowledge. Over time consumers reflected upon these tensions and initiated strategies to enhance return to function and RTW. Insights from consumer-driven strategies that might inform future policy change and promote positive service delivery for injured workers are examined.

  12. Behavior of very high energy hadronic cross-sections

    NASA Astrophysics Data System (ADS)

    Stodolsky, L.

    2017-10-01

    Analysis of the data for proton and antiproton scattering leads to a simple picture for very high energy hadronic cross-sections. There is, asymptotically, a simple “black disc” with a smooth “edge”. The radius of the “disc” is expanding logarithmically with energy, while the “edge” is constant. These conclusions follow from extensive fits to accelerator and cosmic ray data, combined with the observation that a certain combination of elastic and total cross-sections allows extraction of the “edge”. An interesting feature of the results is that the “edge” is rather large compared to the “disc”. This explains the slow approach to “asymptopia” where the “disc” finally dominates.

  13. Finite element modeling of frictionally restrained composite interfaces

    NASA Technical Reports Server (NTRS)

    Ballarini, Roberto; Ahmed, Shamim

    1989-01-01

    The use of special interface finite elements to model frictional restraint in composite interfaces is described. These elements simulate Coulomb friction at the interface, and are incorporated into a standard finite element analysis of a two-dimensional isolated fiber pullout test. Various interfacial characteristics, such as the distribution of stresses at the interface, the extent of slip and delamination, load diffusion from fiber to matrix, and the amount of fiber extraction or depression are studied for different friction coefficients. The results are compared to those obtained analytically using a singular integral equation approach, and those obtained by assuming a constant interface shear strength. The usefulness of these elements in micromechanical modeling of fiber-reinforced composite materials is highlighted.

  14. Light clusters and pasta phases in warm and dense nuclear matter

    NASA Astrophysics Data System (ADS)

    Avancini, Sidney S.; Ferreira, Márcio; Pais, Helena; Providência, Constança; Röpke, Gerd

    2017-04-01

    The pasta phases are calculated for warm stellar matter in a framework of relativistic mean-field models, including the possibility of light cluster formation. Results from three different semiclassical approaches are compared with a quantum statistical calculation. Light clusters are considered as point-like particles, and their abundances are determined from the minimization of the free energy. The couplings of the light clusters to mesons are determined from experimental chemical equilibrium constants and many-body quantum statistical calculations. The effect of these light clusters on the chemical potentials is also discussed. It is shown that, by including heavy clusters, light clusters are present up to larger nucleonic densities, although with smaller mass fractions.

  15. Microstructure development in Kolmogorov, Johnson-Mehl, and Avrami nucleation and growth kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, Eloi; Crespo, Daniel

    1999-08-01

    A statistical model with the ability to evaluate the microstructure developed in nucleation and growth kinetics is built in the framework of the Kolmogorov, Johnson-Mehl, and Avrami theory. A populational approach is used to compute the observed grain-size distribution. The impingement process which delays grain growth is analyzed, and the effective growth rate of each population is estimated considering the previous grain history. The proposed model is integrated for a wide range of nucleation and growth protocols, including constant nucleation, pre-existing nuclei, and intermittent nucleation with interface or diffusion-controlled grain growth. The results are compared with Monte Carlo simulations, giving quantitative agreement even in cases where previous models fail.
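
    The overall transformation kinetics underlying these microstructure calculations follow the classical KJMA expression X(t) = 1 - exp(-K t^n); a minimal sketch with illustrative parameters (the paper's grain-size-distribution machinery is of course not reproduced here):

```python
import numpy as np

def kjma_fraction(t, K, n):
    """Transformed volume fraction X(t) = 1 - exp(-K * t**n).

    Typical exponents: n = 4 for constant nucleation with 3-D
    interface-controlled growth, n = 3 for pre-existing (site-saturated) nuclei."""
    return 1.0 - np.exp(-K * np.power(t, n))

t = np.linspace(0.0, 10.0, 6)
print(kjma_fraction(t, K=1e-3, n=4))
```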

  16. Short-term effect of Keyes' approach to periodontal therapy compared with modified Widman flap surgery.

    PubMed

    Whitehead, S P; Watts, T L

    1987-11-01

    Keyes' method of non-surgical therapy was compared with modified Widman flap surgery in 9 patients with symmetrical periodontal disease. Following an initial oral hygiene programme, baseline measurements were recorded and paired contralateral areas were subjected randomly to the 2 techniques. 42 teeth receiving surgery were compared with 40 treated by Keyes' method. 6 sites per tooth were scored immediately prior to therapy and 3 months later, using a constant force probe with onlays. Consistent data were recorded for the 6 separate sites, which showed no baseline difference between treatments, slightly greater recession with surgery at 3 months, but no difference between treatments in probing depth and attachment levels. Mean data for individual patients showed similar consistency. Probing depth in deep sites was reduced slightly more with surgery, and there were no differences in bleeding on probing at 3 months. Both techniques gave marked improvements in health. Surprisingly, only 2 subjects preferred Keyes' technique of mechanical therapy, 6 preferred surgery, and 1 had no preference.

  17. Experimental and theoretical NMR and IR studies of the side-chain orientation effects on the backbone conformation of dehydrophenylalanine residue.

    PubMed

    Buczek, Aneta M; Ptak, Tomasz; Kupka, Teobald; Broda, Małgorzata A

    2011-06-01

    Conformation of N-acetyl-(E)-dehydrophenylalanine N', N'-dimethylamide (Ac-(E)-ΔPhe-NMe(2)) in solution, a member of (E)-α, β-dehydroamino acids, was studied by NMR and infrared spectroscopy and the results were compared with those obtained for (Z) isomer. To support the spectroscopic interpretation, the Φ, Ψ potential energy surfaces were calculated at the MP2/6-31 + G(d,p) level of theory in chloroform solution modeled by the self-consistent reaction field-polarizable continuum model method. All minima were fully optimized by the MP2 method and their relative stabilities were analyzed in terms of π-conjugation, internal H-bonds and dipole interactions between carbonyl groups. The obtained NMR spectral features were compared with theoretical nuclear magnetic shieldings, calculated using Gauge Independent Atomic Orbitals (GIAO) approach and rescaled to theoretical chemical shifts using benzene as reference. The calculated indirect nuclear spin-spin coupling constants were compared with available experimental parameters. Copyright © 2011 John Wiley & Sons, Ltd.

  18. Local conformational dynamics in alpha-helices measured by fast triplet transfer.

    PubMed

    Fierz, Beat; Reiner, Andreas; Kiefhaber, Thomas

    2009-01-27

    Coupling fast triplet-triplet energy transfer (TTET) between xanthone and naphthylalanine to the helix-coil equilibrium in alanine-based peptides allowed the observation of local equilibrium fluctuations in alpha-helices on the nanosecond to microsecond time scale. The experiments revealed faster helix unfolding in the terminal regions compared with the central parts of the helix, with time constants varying from 250 ns to 1.4 μs at 5 °C. Local helix formation occurs with a time constant of approximately 400 ns, independent of the position in the helix. Comparing the experimental data with simulations using a kinetic Ising model showed that the experimentally observed dynamics can be explained by a 1-dimensional boundary diffusion with position-independent elementary time constants of approximately 50 ns for the addition and approximately 65 ns for the removal of an alpha-helical segment. The elementary time constant for helix growth agrees well with previously measured time constants for the formation of short loops in unfolded polypeptide chains, suggesting that helix elongation is mainly limited by a conformational search.

  19. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
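
    A minimal sketch of the Arrhenius time-scaling idea (not the authors' implementation): rescale time with the Arrhenius factor so that data from all temperatures collapse onto one degradation curve, fit a polynomial in the scaled time, and treat the activation energy as the single nonlinear parameter. The data below are made up:

```python
import numpy as np
from scipy.optimize import minimize_scalar

R = 8.314  # J mol^-1 K^-1

def ats_fit(time, temp_K, y, t_ref_K=278.15, degree=2):
    """Arrhenius time-scaled least squares: tau = t * exp(-Ea/R * (1/T - 1/Tref))."""
    time, temp_K, y = map(np.asarray, (time, temp_K, y))

    def sse(Ea):
        tau = time * np.exp(-Ea / R * (1.0 / temp_K - 1.0 / t_ref_K))
        coeffs = np.polyfit(tau, y, degree)
        return float(np.sum((np.polyval(coeffs, tau) - y) ** 2))

    Ea = minimize_scalar(sse, bounds=(1e3, 3e5), method="bounded").x
    tau = time * np.exp(-Ea / R * (1.0 / temp_K - 1.0 / t_ref_K))
    return Ea, np.polyfit(tau, y, degree)

# Hypothetical accelerated-stability data (% purity) at 5, 25 and 40 degrees C
months = np.array([0, 1, 3, 6] * 3, dtype=float)
temps = np.array([278.15] * 4 + [298.15] * 4 + [313.15] * 4)
purity = np.array([100, 99.9, 99.8, 99.6, 100, 99.5, 98.6, 97.4, 100, 98.2, 95.0, 90.5])
print(ats_fit(months, temps, purity))
```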

  20. Oligomer formation in the troposphere: from experimental knowledge to 3-D modeling

    NASA Astrophysics Data System (ADS)

    Lemaire, V.; Coll, I.; Couvidat, F.; Mouchel-Vallon, C.; Seigneur, C.; Siour, G.

    2015-10-01

    The organic fraction of atmospheric aerosols has proven to be a critical element of air quality and climate issues. However, its composition and the aging processes it undergoes remain insufficiently understood. This work builds on laboratory knowledge to simulate the formation of oligomers from biogenic secondary organic aerosol (BSOA) in the troposphere at the continental scale. We compare the results of two different modeling approaches, a 1st-order kinetic process and a pH-dependent parameterization, both implemented in the CHIMERE air quality model (AQM), to simulate the spatial and temporal distribution of oligomerized SOA over western Europe. Our results show that there is a strong dependence of the results on the selected modeling approach: while the irreversible kinetic process leads to the oligomerization of about 50 % of the total BSOA mass, the pH-dependent approach shows a broader range of impacts, with a strong dependency on environmental parameters (pH and nature of aerosol) and the possibility for the process to be reversible. In parallel, we investigated the sensitivity of each modeling approach to the representation of SOA precursor solubility (Henry's law constant values). Finally, the pros and cons of each approach for the representation of SOA aging are discussed and recommendations are provided to improve current representations of oligomer formation in AQMs.

  1. A New "Quasi-Dynamic" Method for Determining the Hamaker Constant of Solids Using an Atomic Force Microscope.

    PubMed

    Fronczak, Sean G; Dong, Jiannan; Browne, Christopher A; Krenek, Elizabeth C; Franses, Elias I; Beaudoin, Stephen P; Corti, David S

    2017-01-24

    In order to minimize the effects of surface roughness and deformation, a new method for estimating the Hamaker constant, A, of solids using the approach-to-contact regime of an atomic force microscope (AFM) is presented. First, a previous "jump-into-contact" quasi-static method for determining A from AFM measurements is analyzed and then extended to include various AFM tip-surface force models of interest. Then, to test the efficacy of the "jump-into-contact" method, a dynamic model of the AFM tip motion is developed. For finite AFM cantilever-surface approach speeds, a true "jump" point, or limit of stability, is found not to appear, and the quasi-static model fails to represent the dynamic tip behavior at close tip-surface separations. Hence, a new "quasi-dynamic" method for estimating A is proposed that uses the dynamically well-defined deflection at which the tip and surface first come into contact, d_c, instead of the dynamically ill-defined "jump" point. With the new method, an apparent Hamaker constant, A_app, is calculated from d_c and a corresponding quasi-static-based equation. Since A_app depends on the cantilever's approach speed, v_c, and the AFM's sampling resolution, δ, a double extrapolation procedure is used to determine A_app in the quasi-static (v_c → 0) and continuous sampling (δ → 0) limits, thereby recovering the "true" value of A. The accuracy of the new method is validated using simulated AFM data. To enable the experimental implementation of this method, a new dimensionless parameter τ is introduced to guide cantilever selection and the AFM operating conditions. The value of τ quantifies how close a given cantilever is to its quasi-static limit for a chosen cantilever-surface approach speed. For sufficiently small values of τ (i.e., a cantilever that effectively behaves "quasi-statically"), simulated data indicate that A_app will be within ∼3% or less of the inputted value of the Hamaker constant. This implies that Hamaker constants can be reliably estimated using a single measurement taken with an appropriately chosen cantilever and a slow, yet practical, approach speed (with no extrapolation required). This result is confirmed by the very good agreement found between the experimental AFM results obtained using this new method and previously reported predictions of A for amorphous silica, polystyrene, and α-Al2O3 substrates obtained using the Lifshitz method.
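
    For context, the quasi-static "jump-into-contact" relation that the new method refines can be sketched as follows (this is the classic textbook estimate, not the paper's quasi-dynamic equation): for a nonretarded sphere-plane van der Waals force F = A R / (6 D^2), the instability occurs where dF/dD equals the spring constant, giving A = 3 k D_jump^3 / R.

```python
def hamaker_from_jump(d_jump_m: float, k_N_per_m: float, tip_radius_m: float) -> float:
    """Quasi-static jump-into-contact estimate: A = 3 * k * D_jump^3 / R."""
    return 3.0 * k_N_per_m * d_jump_m**3 / tip_radius_m

# Hypothetical values: D_jump = 1.5 nm, k = 0.2 N/m, R = 20 nm
print(hamaker_from_jump(1.5e-9, 0.2, 20e-9), "J")   # ~1e-19 J, a typical order
```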

  2. Indirect NMR spin-spin coupling constants in diatomic alkali halides

    NASA Astrophysics Data System (ADS)

    Jaszuński, Michał; Antušek, Andrej; Demissie, Taye B.; Komorovsky, Stanislav; Repisky, Michal; Ruud, Kenneth

    2016-12-01

    We report the Nuclear Magnetic Resonance (NMR) spin-spin coupling constants for diatomic alkali halides MX, where M = Li, Na, K, Rb, or Cs and X = F, Cl, Br, or I. The coupling constants are determined by supplementing the non-relativistic coupled-cluster singles-and-doubles (CCSD) values with relativistic corrections evaluated at the four-component density-functional theory (DFT) level. These corrections are calculated as the differences between relativistic and non-relativistic values determined using the PBE0 functional with 50% exact-exchange admixture. The total coupling constants obtained in this approach are in much better agreement with experiment than the standard relativistic DFT values with 25% exact-exchange, and are also noticeably better than the relativistic PBE0 results obtained with 50% exact-exchange. Further improvement is achieved by adding rovibrational corrections, estimated using literature data.

  3. On the anisotropic elastic properties of hydroxyapatite.

    NASA Technical Reports Server (NTRS)

    Katz, J. L.; Ukraincik, K.

    1971-01-01

    Experimental measurements of the isotropic elastic moduli on polycrystalline specimens of hydroxyapatite and fluorapatite are compared with elastic constants measured directly from single crystals of fluorapatite in order to derive a set of pseudo single crystal elastic constants for hydroxyapatite. The stiffness coefficients thus derived are given. The anisotropic and isotropic elastic properties are then computed and compared with similar properties derived from experimental observations of the anisotropic behavior of bone.

  4. The Role of Theory in Practice.

    ERIC Educational Resources Information Center

    Pyfer, Jean L.

    There are at least three ways in which educational theory can be used in practice: (1) to reexamine our traditional approaches, (2) to provide direction in future practice, and (3) to generate research. Reexamination of traditional approaches through analysis and utilization of theoretical methods is one means of promoting constant growth and…

  5. The role of glacier changes and threshold definition in the characterisation of future streamflow droughts in glacierised catchments

    NASA Astrophysics Data System (ADS)

    Van Tiel, Marit; Teuling, Adriaan J.; Wanders, Niko; Vis, Marc J. P.; Stahl, Kerstin; Van Loon, Anne F.

    2018-01-01

    Glaciers are essential hydrological reservoirs, storing and releasing water at various timescales. Short-term variability in glacier melt is one of the causes of streamflow droughts, here defined as deficiencies from the flow regime. Streamflow droughts in glacierised catchments have a wide range of interlinked causal factors related to precipitation and temperature on short and long timescales. Climate change affects glacier storage capacity, with resulting consequences for discharge regimes and streamflow drought. Future projections of streamflow drought in glacierised basins can, however, strongly depend on the modelling strategies and analysis approaches applied. Here, we examine the effect of different approaches, concerning the glacier modelling and the drought threshold, on the characterisation of streamflow droughts in glacierised catchments. Streamflow is simulated with the Hydrologiska Byråns Vattenbalansavdelning (HBV-light) model for two case study catchments, the Nigardsbreen catchment in Norway and the Wolverine catchment in Alaska, and two future climate change scenarios (RCP4.5 and RCP8.5). Two types of glacier modelling are applied, a constant and a dynamic glacier area conceptualisation. Streamflow droughts are identified with the variable threshold level method and their characteristics are compared between two periods, a historical (1975-2004) and a future (2071-2100) period. Two existing threshold approaches to define future droughts are employed: (1) the threshold from the historical period; (2) a transient threshold approach, whereby the threshold adapts every year in the future to the changing regimes. Results show that drought characteristics differ among the combinations of glacier area modelling and thresholds. The historical threshold combined with a dynamic glacier area projects extreme increases in drought severity in the future, caused by the regime shift due to a reduction in glacier area. The historical threshold combined with a constant glacier area results in a drastic decrease in the number of droughts. The drought characteristics between future and historical periods are more similar when the transient threshold is used, for both glacier area conceptualisations. With the transient threshold, factors causing future droughts can be analysed. This study reveals the different effects of methodological choices on future streamflow drought projections and highlights how the options can be used to analyse different aspects of future droughts: the transient threshold for analysing future drought processes, the historical threshold to assess changes between periods, the constant glacier area to analyse the effect of short-term climate variability on droughts and the dynamic glacier area to model more realistic future discharges under climate change.
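
    As a rough sketch of the threshold logic (not the HBV-light workflow itself), the snippet below builds a day-of-year varying threshold from one period and flags days falling below it; a transient threshold would simply recompute the percentiles over a window that moves through the future period. The percentile level, the synthetic flow series, and the omission of smoothing and event pooling are all simplifications of this sketch.

      import numpy as np

      def doy_threshold(flow, doy, pct=20.0):
          """Day-of-year varying threshold: the pct-th percentile of all flows
          observed on each calendar day (smoothing and pooling omitted)."""
          return {d: np.percentile(flow[doy == d], pct) for d in np.unique(doy)}

      def below_threshold(flow, doy, threshold):
          """Boolean mask of days whose flow falls below the threshold."""
          return np.array([q < threshold[d] for q, d in zip(flow, doy)])

      # Synthetic 30-year daily series standing in for simulated streamflow.
      rng = np.random.default_rng(0)
      doy = np.tile(np.arange(1, 366), 30)
      flow = rng.gamma(shape=2.0, scale=5.0, size=doy.size)

      hist_thr = doy_threshold(flow, doy)        # "historical" threshold
      mask = below_threshold(flow, doy, hist_thr)
      print(f"{mask.mean():.1%} of days fall below the historical threshold")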

  6. Connection equation and shaly-sand correction for electrical resistivity

    USGS Publications Warehouse

    Lee, Myung W.

    2011-01-01

    Estimating the amount of conductive and nonconductive constituents in the pore space of sediments by using electrical resistivity logs generally loses accuracy where clays are present in the reservoir. Many different methods and clay models have been proposed to account for the conductivity of clay (termed the shaly-sand correction). In this study, the connectivity equation (CE), which is a new approach to model non-Archie rocks, is used to correct for the clay effect and is compared with results using the Waxman and Smits method. The CE presented here requires no parameters other than an adjustable constant, which can be derived from the resistivity of water-saturated sediments. The new approach was applied to estimate water saturation from laboratory data and to estimate gas hydrate saturations at the Mount Elbert well on the Alaska North Slope. Although the CE is not as accurate as the Waxman and Smits method for estimating water saturations from the laboratory measurements, gas hydrate saturations estimated at the Mount Elbert well using the proposed CE are comparable to estimates from the Waxman and Smits method. Considering its simplicity, the CE has high potential to account for the clay effect on electrical resistivity measurements in other systems.

  7. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    NASA Astrophysics Data System (ADS)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically divide geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach known as sequential Gaussian simulation. The method utilizes the variogram, which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
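
    The sketch below contrasts the two ideas in miniature: instead of drawing one spatially constant value per domain, each Monte Carlo run draws a spatially correlated profile of the geological strength index (GSI) and uniaxial compressive strength (UCS). It uses a Gaussian covariance with a Cholesky factor as a simplified, unconditional stand-in for sequential Gaussian simulation; all parameter values are hypothetical.

      import numpy as np

      def correlated_field(x, mean, std, corr_length, rng):
          """One spatially correlated realisation along a 1-D profile,
          drawn from a Gaussian covariance via its Cholesky factor."""
          d = np.abs(x[:, None] - x[None, :])
          cov = std**2 * np.exp(-(d / corr_length) ** 2)
          cov += 1e-8 * std**2 * np.eye(x.size)        # jitter for numerical stability
          return mean + np.linalg.cholesky(cov) @ rng.standard_normal(x.size)

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 500.0, 101)                 # metres along a pit slope section
      for run in range(3):                             # a few Monte Carlo realisations
          gsi = correlated_field(x, mean=45.0, std=8.0, corr_length=60.0, rng=rng)
          ucs = correlated_field(x, mean=80.0, std=20.0, corr_length=60.0, rng=rng)
          # ...each (gsi, ucs) profile would feed the stability model here...
          print(run, round(gsi.mean(), 1), round(ucs.mean(), 1))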

  8. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first-order, second-moment FPI methods; the second-order, second-moment FPI methods; the simple Monte Carlo method; and an advanced Monte Carlo technique that utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservatism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
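
    The formulation "reliability is the probability that a function of the random variables stays below a constant" maps directly onto a simple Monte Carlo estimator; the sketch below uses a hypothetical stress-versus-strength limit state, since the report's actual failure criteria are not reproduced here.

      import numpy as np

      def mc_reliability(limit_state, sampler, n=100_000, seed=0):
          """Estimate reliability as P[g(X) < 0] by simple Monte Carlo."""
          rng = np.random.default_rng(seed)
          g = limit_state(sampler(rng, n))
          return np.mean(g < 0.0)

      # Hypothetical lamina check: failure when applied stress exceeds strength.
      def sampler(rng, n):
          stress = rng.normal(800.0, 150.0, n)       # MPa
          strength = rng.normal(1200.0, 120.0, n)    # MPa
          return stress, strength

      def limit_state(xs):
          stress, strength = xs
          return stress - strength                   # g >= 0 means failure

      print(f"estimated reliability: {mc_reliability(limit_state, sampler):.4f}")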

  9. Thin SiGe virtual substrates for Ge heterostructures integration on silicon

    NASA Astrophysics Data System (ADS)

    Cecchi, S.; Gatti, E.; Chrastina, D.; Frigerio, J.; Müller Gubler, E.; Paul, D. J.; Guzzi, M.; Isella, G.

    2014-03-01

    The possibility to reduce the thickness of the SiGe virtual substrate, required for the integration of Ge heterostructures on Si, without heavily affecting the crystal quality is becoming fundamental in several applications. In this work, we present 1 μm thick Si1-xGex buffers (with x > 0.7) having different designs which could be suitable for applications requiring a thin virtual substrate. The rationale is to reduce the lattice mismatch at the interface with the Si substrate by introducing composition steps and/or partial grading. The relatively low growth temperature (475 °C) makes this approach appealing for complementary metal-oxide-semiconductor integration. For all the investigated designs, a reduction of the threading dislocation density compared to constant composition Si1-xGex layers was observed. The best buffer in terms of defects reduction was used as a virtual substrate for the deposition of a Ge/SiGe multiple quantum well structure. Room temperature optical absorption and photoluminescence analysis performed on nominally identical quantum wells grown on both a thick graded virtual substrate and the selected thin buffer demonstrates a comparable optical quality, confirming the effectiveness of the proposed approach.

  10. Autonomous Navigation for Autonomous Underwater Vehicles Based on Information Filters and Active Sensing

    PubMed Central

    He, Bo; Zhang, Hongjin; Li, Chao; Zhang, Shujing; Liang, Yan; Yan, Tianhong

    2011-01-01

    This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix in an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. The mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to the vehicle motion. In order to verify the feasibility of the navigation methods proposed for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves the accuracy of the navigation compared with the conventional method; moreover, the algorithm has a low computational cost compared with EKF-SLAM. PMID:22346682
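
    The constant-time claim rests on the additive structure of the information-form measurement update: only the rows and columns of the information matrix touched by the (sparse) measurement Jacobian change. The sketch below shows that update for a linearised measurement model; it is a generic extended-information-filter step, not the C-Ranger implementation, and the toy state and matrices are invented for illustration.

      import numpy as np

      def eif_measurement_update(Omega, xi, H, R, z, z_pred, x_lin):
          """Additive EIF measurement update for z = h(x) linearised as
          z ~ z_pred + H (x - x_lin) with Gaussian noise of covariance R."""
          Rinv = np.linalg.inv(R)
          Omega = Omega + H.T @ Rinv @ H                      # touches only H's columns
          xi = xi + H.T @ Rinv @ (z - z_pred + H @ x_lin)
          return Omega, xi

      # Toy example: 1-D robot position and one landmark, range-like measurement.
      Omega = np.eye(2)
      xi = np.zeros(2)
      x_lin = np.array([0.0, 5.0])                            # linearisation point
      H = np.array([[-1.0, 1.0]])                             # d(range)/d(state)
      R = np.array([[0.1]])
      z = np.array([5.2])
      Omega, xi = eif_measurement_update(Omega, xi, H, R, z, H @ x_lin, x_lin)
      print(np.linalg.solve(Omega, xi))                        # recovered mean estimate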

  11. Autonomous navigation for autonomous underwater vehicles based on information filters and active sensing.

    PubMed

    He, Bo; Zhang, Hongjin; Li, Chao; Zhang, Shujing; Liang, Yan; Yan, Tianhong

    2011-01-01

    This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix in an extended information filter (EIF) can be pruned to achieve an efficient approach, the sparse EIF algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. The mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to the vehicle motion. In order to verify the feasibility of the navigation methods proposed for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves the accuracy of the navigation compared with the conventional method; moreover, the algorithm has a low computational cost compared with EKF-SLAM.

  12. Quantification of gap junction selectivity.

    PubMed

    Ek-Vitorín, Jose F; Burt, Janis M

    2005-12-01

    Gap junctions, which are essential for functional coordination and homeostasis within tissues, permit the direct intercellular exchange of small molecules. The abundance and diversity of this exchange depends on the number and selectivity of the constituent channels and on the transjunctional gradient for and chemical character of the permeant molecules. Limited knowledge of functionally significant permeants and poor detectability of those few that are known have made it difficult to define channel selectivity. Presented herein is a multifaceted approach to the quantification of gap junction selectivity that includes determination of the rate constant for intercellular diffusion of a fluorescent probe (k2-DYE) and junctional conductance (gj) for each junction studied, such that the selective permeability (k2-DYE/gj) for dyes with differing chemical characteristics or junctions with differing connexin (Cx) compositions (or treatment conditions) can be compared. In addition, selective permeability can be correlated with single-channel conductance when this parameter is also measured. Our measurement strategy is capable of detecting 1) rate constants and selective permeabilities that differ across three orders of magnitude and 2) acute changes in that rate constant. Using this strategy, we have shown that 1) the selective permeability of Cx43 junctions to a small cationic dye varied across two orders of magnitude, consistent with the hypothesis that the various channel configurations adopted by Cx43 display different selective permeabilities; and 2) the selective permeability of Cx37 junctions was consistently and significantly lower than that of Cx43 junctions.
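
    Numerically, the selective permeability is the fitted dye-transfer rate constant divided by the junctional conductance measured in the same cell pair. The sketch below fits a mono-exponential rise to a synthetic recipient-cell fluorescence trace to obtain k2 and forms k2/gj; the model form, the noise, and all numbers are illustrative assumptions, not data from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def recipient_fluorescence(t, f_max, k2):
          """Mono-exponential approach of recipient-cell dye fluorescence."""
          return f_max * (1.0 - np.exp(-k2 * t))

      # Synthetic trace (arbitrary units) and a junctional conductance value.
      t = np.linspace(0.0, 300.0, 61)                              # s
      rng = np.random.default_rng(2)
      trace = recipient_fluorescence(t, 100.0, 0.01) + rng.normal(0.0, 2.0, t.size)
      g_j = 12.0e-9                                                # S

      (f_max, k2), _ = curve_fit(recipient_fluorescence, t, trace, p0=(80.0, 0.005))
      print(f"k2 = {k2:.4f} 1/s, selective permeability k2/gj = {k2 / g_j:.3g} 1/(s*S)")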

  13. Evolution of flexural rigidity according to the cross-sectional dimension of a superelastic nickel titanium orthodontic wire.

    PubMed

    Garrec, Pascal; Tavernier, Bruno; Jordan, Laurence

    2005-08-01

    The choice of the most suitable orthodontic wire for each stage of treatment requires estimation of the forces generated. In theory, the selection of wire sequences should initially utilize a lower flexural rigidity; thus clinicians use round wires of smaller cross-sectional dimension to generate lighter forces during the preliminary alignment stage. This assessment is true for conventional alloys, but not necessarily for superelastic nickel titanium (NiTi). In this case, the flexural rigidity dependence on cross-sectional dimension differs from the linear elasticity prediction because of the martensitic transformation process. It decreases with increasing deflection and this phenomenon is accentuated in the unloading process. This behaviour should lead us to reconsider the biomechanical approach to orthodontic treatment. The present study compared bending in 10 archwires made from NiTi orthodontic alloy of two cross-sectional dimensions. The results were based on microstructural and mechanical investigations. With conventional alloys, the flexural rigidity was constant for each wire and increased markedly with the cross-sectional dimension for the same strain. With NiTi alloys, the flexural rigidity was not constant and the influence of size was less pronounced than linear elasticity would predict. This result can be explained by the non-constant elastic modulus during the martensite transformation process. Thus, in some cases, treatment can begin with full-size (rectangular) wires that nearly fill the bracket slot with a force application deemed to be physiologically desirable for tooth movement and compatible with patient comfort.

  14. An equivalent method of mixed dielectric constant in passive microwave/millimeter radiometric measurement

    NASA Astrophysics Data System (ADS)

    Su, Jinlong; Tian, Yan; Hu, Fei; Gui, Liangqi; Cheng, Yayun; Peng, Xiaohui

    2017-10-01

    The dielectric constant plays an important role in describing the properties of matter. This paper proposes the concept of a mixed dielectric constant (MDC) in passive microwave radiometric measurement. In addition, an MDC inversion method based on the ratio of angle-polarization difference (RAPD) is proposed. The MDCs of several materials are investigated using RAPD, and the brightness temperatures (TBs) calculated from the MDC and from the original dielectric constant are compared. Random errors are added to the simulation to test the robustness of the algorithm. Keywords: passive detection, microwave/millimeter, radiometric measurement, ratio of angle-polarization difference (RAPD), mixed dielectric constant (MDC), brightness temperatures, remote sensing, target recognition.

  15. Polar versus Cartesian velocity models for maneuvering target tracking with IMM

    NASA Astrophysics Data System (ADS)

    Laneuville, Dann

    This paper compares various model sets in different IMM filters for the maneuvering target tracking problem. The aim is to see whether we can improve the tracking performance of what is certainly the most widely used model set in the literature for this problem: a Nearly Constant Velocity model and a Nearly Coordinated Turn model. Our new challenger set consists of a mixed Cartesian position and polar velocity state vector to describe the uniform motion segments and is augmented with the turn rate to obtain the second model for the maneuvering segments. This paper also gives a general procedure to discretize, up to second order, any non-linear continuous time model with linear diffusion. Comparative simulations on an air defence scenario with a 2D radar show that this new approach significantly improves the tracking performance in this case.
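
    For orientation, the two models in the conventional Cartesian set are linear and have well-known discrete-time transition matrices, sketched below for a planar state [x, y, vx, vy]; the paper's mixed Cartesian-position/polar-velocity parameterisation is nonlinear and needs the discretisation procedure it describes, which is not reproduced here.

      import numpy as np

      def ncv_transition(dt):
          """Nearly constant velocity model, state [x, y, vx, vy]."""
          return np.array([[1.0, 0.0, dt, 0.0],
                           [0.0, 1.0, 0.0, dt],
                           [0.0, 0.0, 1.0, 0.0],
                           [0.0, 0.0, 0.0, 1.0]])

      def coordinated_turn_transition(dt, omega):
          """Coordinated turn with turn rate omega (rad/s), same state vector."""
          s, c = np.sin(omega * dt), np.cos(omega * dt)
          return np.array([[1.0, 0.0, s / omega, -(1.0 - c) / omega],
                           [0.0, 1.0, (1.0 - c) / omega, s / omega],
                           [0.0, 0.0, c, -s],
                           [0.0, 0.0, s, c]])

      x = np.array([0.0, 0.0, 100.0, 0.0])        # illustrative state
      print(ncv_transition(1.0) @ x)
      print(coordinated_turn_transition(1.0, 0.05) @ x)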

  16. Computational Simulation of the High Strain Rate Tensile Response of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Goldberg, Robert K.

    2002-01-01

    A research program is underway to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. Under these types of loading conditions, the material response can be highly strain rate dependent and nonlinear. State variable constitutive equations based on a viscoplasticity approach have been developed to model the deformation of the polymer matrix. The constitutive equations are then combined with a mechanics of materials based micromechanics model which utilizes fiber substructuring to predict the effective mechanical and thermal response of the composite. To verify the analytical model, tensile stress-strain curves are predicted for a representative composite over strain rates ranging from around 1 × 10^-5/sec to approximately 400/sec. The analytical predictions compare favorably to experimentally obtained values both qualitatively and quantitatively. Effective elastic and thermal constants are predicted for another composite, and compared to finite element results.
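
    As a very reduced illustration of how effective elastic constants follow from constituent properties, the snippet below applies the elementary rule of mixtures; the report's fiber-substructuring micromechanics and rate-dependent viscoplastic matrix model are far more detailed, and all numbers here are hypothetical.

      def rule_of_mixtures(E_f, E_m, V_f):
          """Longitudinal (Voigt) and transverse (Reuss) effective moduli."""
          E11 = V_f * E_f + (1.0 - V_f) * E_m
          E22 = 1.0 / (V_f / E_f + (1.0 - V_f) / E_m)
          return E11, E22

      # Hypothetical fiber/matrix moduli in Pa and a 60% fiber volume fraction.
      print(rule_of_mixtures(E_f=230.0e9, E_m=3.5e9, V_f=0.6))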

  17. A Constant-Factor Approximation Algorithm for the Link Building Problem

    NASA Astrophysics Data System (ADS)

    Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia

    In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case in which the new links must point to the given target node (backlinks). Previous work [7] shows that this problem has no fully polynomial time approximation scheme unless P = NP. We present a polynomial time algorithm yielding a PageRank value within a constant factor of the optimal. We also consider the naive algorithm that chooses backlinks from nodes with high PageRank values relative to their outdegree, and show that it performs much worse on certain graphs than the constant-factor approximation algorithm.
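
    The naive heuristic mentioned above is easy to state in code: rank candidate source nodes by PageRank relative to outdegree and add backlinks from the top k. The sketch below uses networkx on a random graph; the +1 in the denominator (to avoid division by zero) and the graph itself are conveniences of this sketch, not details from the paper.

      import networkx as nx

      def naive_backlinks(G, target, k):
          """Pick k backlink sources with the largest PageRank-to-outdegree ratio."""
          pr = nx.pagerank(G)
          candidates = [v for v in G if v != target and not G.has_edge(v, target)]
          candidates.sort(key=lambda v: pr[v] / (G.out_degree(v) + 1), reverse=True)
          return candidates[:k]

      G = nx.gnp_random_graph(50, 0.08, seed=3, directed=True)
      target = 0
      before = nx.pagerank(G)[target]
      G.add_edges_from((v, target) for v in naive_backlinks(G, target, k=5))
      print(f"PageRank of target: {before:.4f} -> {nx.pagerank(G)[target]:.4f}")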

  18. A hybrid multigroup neutron-pattern model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    In this paper, we use the general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economic and simple iterative method of solving them. The algorithm can be used to calculate the pattern and the functionals as well as to correct the constants from the experimental data and to adapt the support over the constants to the engineering programs by reference to precision ones.

  19. Constant fields and constant gradients in open ionic channels.

    PubMed Central

    Chen, D P; Barcilon, V; Eisenberg, R S

    1992-01-01

    Ions enter cells through pores in proteins that are holes in dielectrics. The energy of interaction between ion and charge induced on the dielectric is many kT, and so the dielectric properties of channel and pore are important. We describe ionic movement by (three-dimensional) Nernst-Planck equations (including flux and net charge). Potential is described by Poisson's equation in the pore and Laplace's equation in the channel wall, allowing induced but not permanent charge. Asymptotic expansions are constructed exploiting the long narrow shape of the pore and the relatively high dielectric constant of the pore's contents. The resulting one-dimensional equations can be integrated numerically; they can be analyzed when channels are short or long (compared with the Debye length). Traditional constant field equations are derived if the induced charge is small, e.g., if the channel is short or if the total concentration gradient is zero. A constant gradient of concentration is derived if the channel is long. Plots directly comparable to experiments are given of current vs. voltage, reversal potential vs. concentration, and slope conductance vs. concentration. This dielectric theory can easily be tested: its parameters can be determined by traditional constant field measurements. The dielectric theory then predicts current-voltage relations quite different from constant field, usually more linear, when gradients of total concentration are imposed. Numerical analysis shows that the interaction of ion and channel can be described by a mean potential if, but only if, the induced charge is negligible, that is to say, the electric field is spatially constant. PMID:1376159
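
    For reference, the "traditional constant field" limit referred to here is usually written in the textbook Goldman-Hodgkin-Katz form (quoted from standard electrophysiology usage, not from the paper itself):

      I_S = P_S\, z_S^2\, \frac{F^2 V_m}{R T}\,
            \frac{[S]_{\mathrm{in}} - [S]_{\mathrm{out}}\, e^{-z_S F V_m/(R T)}}
                 {1 - e^{-z_S F V_m/(R T)}}

    with P_S the permeability, z_S the valence, V_m the membrane potential, and F, R, T the Faraday constant, gas constant, and temperature.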

  20. Reflectance and optical constants for Cer-Vit from 250 to 1050 A

    NASA Technical Reports Server (NTRS)

    Osantowski, J. F.

    1974-01-01

    The reflectance for a bowl-feed polished Cer-Vit sample was measured at nine wavelengths and five angles of incidence from 15 to 85 deg. Optical constants were derived by the reflectance-vs-angle-of-incidence method and compared to previously reported values for ultralow-expansion fused silica and several other glasses. Surface-roughness corrections of the reflectance data and optical constants are discussed.
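
    The reflectance-vs-angle-of-incidence inversion rests on the Fresnel equations for an absorbing surface; the sketch below computes unpolarized reflectance from assumed optical constants (n, k), which a least-squares routine could then adjust to match measured R(θ). The sign convention N = n - ik, the vacuum ambient, and the sample values are assumptions of this sketch.

      import numpy as np

      def unpolarized_reflectance(theta_deg, n, k):
          """Fresnel reflectance of an absorbing surface in vacuum for
          unpolarized light; theta is measured from the surface normal."""
          N = n - 1j * k
          th = np.radians(theta_deg)
          cos_t = np.sqrt(1.0 - (np.sin(th) / N) ** 2 + 0j)     # complex refraction angle
          r_s = (np.cos(th) - N * cos_t) / (np.cos(th) + N * cos_t)
          r_p = (N * np.cos(th) - cos_t) / (N * np.cos(th) + cos_t)
          return 0.5 * (np.abs(r_s) ** 2 + np.abs(r_p) ** 2)

      # Reflectance at five measurement angles for one assumed (n, k) pair; fitting
      # n and k to measured data could use, e.g., scipy.optimize.least_squares.
      angles = np.array([15.0, 35.0, 55.0, 75.0, 85.0])          # degrees
      print(unpolarized_reflectance(angles, n=0.95, k=0.02))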
