Monitoring total mixed rations and feed delivery systems.
Oelberg, Thomas J; Stone, William
2014-11-01
This article is intended to give practitioners a method for evaluating total mixed ration (TMR) consistency, along with practical solutions for improving TMR consistency that will in turn improve cattle performance and health. Practitioners will learn how to manage the variation in moisture and nutrients that exists in haylage and corn silage piles and in bales of hay, and methods to reduce variation in the TMR mixing and delivery process. Copyright © 2014 Elsevier Inc. All rights reserved.
A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts
2015-04-30
fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for...considered in both analyses. For a 3% WACC, as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average...Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values
ERIC Educational Resources Information Center
Molenaar, Peter C. M.; Nesselroade, John R.
1998-01-01
Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…
Staining Methods for Normal and Regenerative Myelin in the Nervous System.
Carriel, Víctor; Campos, Antonio; Alaminos, Miguel; Raimondo, Stefania; Geuna, Stefano
2017-01-01
Histochemical techniques enable the specific identification of myelin by light microscopy. Here we describe three histochemical methods for the staining of myelin suitable for formalin-fixed and paraffin-embedded materials. The first method is the conventional luxol fast blue (LFB) method, which stains myelin in blue and Nissl bodies and mast cells in purple. The second method is an LFB-based method called MCOLL, which specifically stains myelin as well as collagen fibers and cells, giving an integrated overview of the histology and myelin content of the tissue. Finally, we describe the osmium tetroxide method, which consists of the osmication of previously fixed tissues. Osmication is performed prior to the embedding of tissues in paraffin, giving a permanent positive reaction for myelin as well as for other lipids present in the tissue.
Design and application of a small size SAFT imaging system for concrete structure
NASA Astrophysics Data System (ADS)
Shao, Zhixue; Shi, Lihua; Shao, Zhe; Cai, Jian
2011-07-01
A method of ultrasonic imaging detection is presented for quick non-destructive testing (NDT) of concrete structures using the synthetic aperture focusing technique (SAFT). A low-cost ultrasonic sensor array consisting of 12 commercially available low-frequency ultrasonic transducers is designed and manufactured. A channel compensation method is proposed to improve the consistency of the different transducers. The controlling devices for the array scan as well as the virtual instrument for SAFT imaging are designed. In the coarse scan mode, with a scan step of 50 mm, the system can quickly give an image display of a cross section of 600 mm (L) × 300 mm (D) in one measurement. In the refined scan mode, the system can reduce the scan step and give an image display of the same cross section by moving the sensor array several times. Experiments on a staircase specimen, a concrete slab with an embedded target, and a building floor with an underground pipeline all verify the efficiency of the proposed method.
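The synthetic aperture focusing described above is, at its core, a delay-and-sum reconstruction: each image pixel accumulates every A-scan at the sample index matching the round-trip travel time from that transducer to the pixel. A minimal Python sketch of that idea (not the authors' implementation; the sound speed `c`, sampling rate `fs`, and geometry are illustrative assumptions):

```python
import numpy as np

def saft_image(traces, sensor_x, grid_x, grid_z, c, fs):
    """Delay-and-sum SAFT: for every image pixel, sum each A-scan at
    the sample index corresponding to the round-trip travel time."""
    n_samples = traces.shape[1]
    image = np.zeros((len(grid_z), len(grid_x)))
    for i, z in enumerate(grid_z):
        for j, x in enumerate(grid_x):
            for k, xs in enumerate(sensor_x):
                r = np.hypot(x - xs, z)             # one-way path length
                idx = int(round(2.0 * r / c * fs))  # round-trip sample index
                if idx < n_samples:
                    image[i, j] += traces[k, idx]
    return image
```

Scatterers add coherently at their true location and incoherently elsewhere, which is what lets even a coarse 50 mm scan step produce a usable cross-section image.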
Consistent forcing scheme in the cascaded lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Fei, Linlin; Luo, Kai Hong
2017-11-01
In this paper, we give an alternative derivation for the cascaded lattice Boltzmann method (CLBM) within a general multiple-relaxation-time (MRT) framework by introducing a shift matrix. When the shift matrix is a unit matrix, the CLBM degrades into an MRT LBM. Based on this, a consistent forcing scheme is developed for the CLBM. The consistency of the nonslip rule, the second-order convergence rate in space, and the property of isotropy for the consistent forcing scheme is demonstrated through numerical simulations of several canonical problems. Several existing forcing schemes previously used in the CLBM are also examined. The study clarifies the relation between MRT LBM and CLBM under a general framework.
Measurement of acoustic attenuation in South Pole ice
NASA Astrophysics Data System (ADS)
IceCube Collaboration; Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Gustafsson, L.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Knops, S.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. 
J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Lehmann, R.; Lennarz, D.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; Matusik, M.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Paul, L.; Pérez de Los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; IceCube Collaboration
2011-01-01
Using the South Pole Acoustic Test Setup (SPATS) and a retrievable transmitter deployed in holes drilled for the IceCube experiment, we have measured the attenuation of acoustic signals by South Pole ice at depths between 190 m and 500 m. Three data sets, using different acoustic sources, have been analyzed and give consistent results. The method with the smallest systematic uncertainties yields an amplitude attenuation coefficient α = 3.20 ± 0.57 km⁻¹ between 10 and 30 kHz, considerably larger than previous theoretical estimates. Expressed as an attenuation length, the analyses give a consistent result for λ ≡ 1/α of ~300 m with 20% uncertainty. No significant depth or frequency dependence has been found.
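An amplitude attenuation coefficient like the α quoted above can be extracted from amplitude-versus-distance data by assuming a spreading law, e.g. spherical spreading A(d) = A0/d · exp(−αd), and fitting the linearized form ln(A·d) = ln(A0) − αd. A Python sketch of that regression (the spreading model and the synthetic numbers are assumptions, not the collaboration's actual analysis):

```python
import numpy as np

def fit_attenuation(d, amp):
    """Fit A(d) = A0/d * exp(-alpha*d) via linear regression on
    ln(A*d) = ln(A0) - alpha*d; returns (alpha, A0)."""
    slope, intercept = np.polyfit(d, np.log(np.asarray(amp) * np.asarray(d)), 1)
    return -slope, np.exp(intercept)
```

With α = 3.2e-3 m⁻¹ the attenuation length comes out as 1/α ≈ 312 m, matching the ~300 m figure in the abstract.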
Measuring, modeling, and minimizing capacitances in heterojunction bipolar transistors
NASA Astrophysics Data System (ADS)
Anholt, R.; Bozada, C.; Dettmer, R.; Via, D.; Jenkins, T.; Barrette, J.; Ebel, J.; Havasy, C.; Sewell, J.; Quach, T.
1996-07-01
We demonstrate methods to separate junction and pad capacitances from on-wafer S-parameter measurements of HBTs with different areas and layouts. The measured junction capacitances are in good agreement with models, indicating that large-area devices are suitable for monitoring vendor epi-wafer doping. Measuring open HBTs does not give the correct pad capacitances. Finally, a capacitance comparison for a variety of layouts shows that bar-devices consistently give smaller base-collector values than multiple dot HBTs.
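Separating junction from pad capacitance across devices of different areas reduces to a linear fit C_total(A) = C_pad + c_j·A: the slope is the area-scaled junction capacitance, the intercept the area-independent pad contribution. A minimal Python sketch of that regression (the units and sample values are illustrative, not measured HBT data):

```python
import numpy as np

def separate_capacitances(areas, c_total):
    """Fit C_total = C_pad + c_j * A over devices of different area:
    slope = junction capacitance per unit area, intercept = pad capacitance."""
    c_j, c_pad = np.polyfit(areas, c_total, 1)
    return c_j, c_pad
```

This is also why measuring an "open" (no-device) structure can mislead: the intercept of the area sweep, not the open measurement, carries the pad capacitance seen by a real device.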
NASA Technical Reports Server (NTRS)
Bennett, Floyd V.; Yntema, Robert T.
1959-01-01
Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.
1985-05-30
consisting of quarter-wave layers by detecting the extrema of transmission or reflectance at a particular wavelength. This method is extremely stable for the...technique, which is based on an envelope method, and gives some experimental results... The refractive index and the...constants determination technique by computer simulation, we have applied the method to various layers of titanium dioxide. This technique can then
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yost, Shane R.; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2016-08-07
In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.
NASA Astrophysics Data System (ADS)
Hinckley, Sarah; Parada, Carolina; Horne, John K.; Mazur, Michael; Woillez, Mathieu
2016-10-01
Biophysical individual-based models (IBMs) have been used to study aspects of the early life history of marine fishes such as recruitment, connectivity of spawning and nursery areas, and marine reserve design. However, there is no consistent approach to validating the spatial outputs of these models. In this study, we aim to rectify this gap. We document additions to an existing individual-based biophysical model for Alaska walleye pollock (Gadus chalcogrammus), some simulations made with this model, and methods that were used to describe and compare spatial output of the model versus field data derived from ichthyoplankton surveys in the Gulf of Alaska. We used visual methods (e.g. distributional centroids with directional ellipses), several indices (such as a Normalized Difference Index (NDI) and an Overlap Coefficient (OC)), and several statistical methods: the Syrjala method, the Getis-Ord Gi* statistic, and a geostatistical method for comparing spatial indices. We assess the utility of these different methods in analyzing spatial output and comparing model output to data, and give recommendations for their appropriate use. Visual methods are useful for initial comparisons of model and data distributions. Metrics such as the NDI and OC give useful measures of co-location and overlap, but care must be taken in discretizing the fields into bins. The Getis-Ord Gi* statistic is useful for determining the patchiness of the fields. The Syrjala method is an easily implemented statistical measure of the difference between the fields, but does not give information on the details of the distributions. Finally, the geostatistical comparison of spatial indices gives good information on the details of the distributions and whether they differ significantly between the model and the data. We conclude that each technique gives quite different information about the model-data distribution comparison, and that some are easy to apply and some more complex.
We also give recommendations for a multistep process to validate spatial output from IBMs.
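Bin-based metrics like the NDI and OC mentioned above can be computed directly from two gridded fields once they are discretized. The Python sketch below uses one plausible textbook formulation (sum of bin-wise minima of the normalized fields for overlap; normalized absolute difference for the NDI); the exact definitions used in the study may differ:

```python
import numpy as np

def overlap_and_ndi(model_field, data_field):
    """Compare two non-negative gridded fields after normalizing each
    to unit sum. OC = sum of bin-wise minima (1 = identical);
    NDI = sum|p - q| / sum(p + q) (0 = identical, 1 = disjoint)."""
    p = model_field / model_field.sum()
    q = data_field / data_field.sum()
    oc = np.minimum(p, q).sum()
    ndi = np.abs(p - q).sum() / (p + q).sum()
    return oc, ndi
```

As the abstract warns, both values depend on the bin size chosen when discretizing the fields, so the binning should be reported alongside the metric.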
Metal–organic complexation in the marine environment
Luther, George W; Rozan, Timothy F; Witter, Amy; Lewis, Brent
2001-01-01
We discuss the voltammetric methods that are used to assess metal–organic complexation in seawater. These consist of titration methods using anodic stripping voltammetry (ASV) and cathodic stripping voltammetry competitive ligand experiments (CSV-CLE). These approaches and a kinetic approach using CSV-CLE give similar information on the amount of excess ligand to metal in a sample and the conditional metal ligand stability constant for the excess ligand bound to the metal. CSV-CLE data using different ligands to measure Fe(III) organic complexes are similar. All these methods give conditional stability constants for which the side reaction coefficient for the metal can be corrected but not that for the ligand. Another approach, pseudovoltammetry, provides information on the actual metal–ligand complex(es) in a sample by doing ASV experiments where the deposition potential is varied more negatively in order to destroy the metal–ligand complex. This latter approach gives concentration information on each actual ligand bound to the metal as well as the thermodynamic stability constant of each complex in solution when compared to known metal–ligand complexes. In this case the side reaction coefficients for the metal and ligand are corrected. Thus, this method may not give identical information to the titration methods because the excess ligand in the sample may not be identical to some of the actual ligands binding the metal in the sample. PMID:16759421
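Titration data of the kind described above are commonly evaluated with a van den Berg/Ružić linearization, in which [M]/[ML] plotted against [M] yields the excess ligand concentration and the conditional stability constant from the slope and intercept. A Python sketch of that fit (a standard textbook treatment, not necessarily the authors' exact procedure):

```python
import numpy as np

def ruzic_fit(free_M, complexed_ML):
    """van den Berg / Ruzic linearization of a metal titration:
    [M]/[ML] = [M]/L_T + 1/(K' * L_T), so a linear fit of [M]/[ML]
    against [M] gives L_T = 1/slope and K' = slope/intercept."""
    ratio = np.asarray(free_M) / np.asarray(complexed_ML)
    slope, intercept = np.polyfit(free_M, ratio, 1)
    return 1.0 / slope, slope / intercept
```

As the abstract notes, a constant fitted this way is conditional: the metal's side reactions can be corrected for, but the ligand's cannot.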
On physical optics for calculating scattering from coated bodies
NASA Technical Reports Server (NTRS)
Baldauf, J.; Lee, S. W.; Ling, H.; Chou, R.
1989-01-01
The familiar physical optics (PO) approximation is no longer valid when the perfectly conducting scatterer is coated with dielectric material. This paper reviews several possible PO formulations. By comparing the PO formulation with the moment method solution based on the impedance boundary condition for the case of the coated cone-sphere, a PO formulation using both electric and magnetic currents consistently gives the best numerical results. Comparisons of the exact moment method with the PO formulations using the impedance boundary condition and the PO formulation using the Fresnel reflection coefficient for the case of scattering from the cone-ellipsoid demonstrate that the Fresnel reflection coefficient gives the best numerical results in general.
Fleischhaker, R; Krauss, N; Schättiger, F; Dekorsy, T
2013-03-25
We study the comparability of the two most important measurement methods used for the characterization of semiconductor saturable absorber mirrors (SESAMs). For both methods, single-pulse spectroscopy (SPS) and pump-probe spectroscopy (PPS), we analyze in detail the time-dependent saturation dynamics inside a SESAM. Based on this analysis, we find that fluence-dependent PPS at complete spatial overlap and zero time delay is equivalent to SPS. We confirm our findings experimentally by comparing data from SPS and PPS of two samples. We show how to interpret this data consistently and we give explanations for possible deviations.
Theoretical studies of electronically excited states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besley, Nicholas A.
2014-10-06
Time-dependent density functional theory is the most widely used quantum chemical method for studying molecules in electronically excited states. However, excited states can also be computed within Kohn-Sham density functional theory by exploiting methods that converge the self-consistent field equations to give excited state solutions. The usefulness of single reference self-consistent field based approaches for studying excited states is demonstrated by considering the calculation of several types of spectroscopy including the infrared spectroscopy of molecules in an electronically excited state, the rovibrational spectrum of the NO-Ar complex, core electron binding energies and the emission spectroscopy of BODIPY in water.
Bax, A C; Shawler, P M; Blackmon, D L; DeGrace, E W; Wolraich, M L
2016-09-01
Factors surrounding pediatricians' parenting advice and training on parenting during residency have not been well studied. The primary purpose of this study was to examine pediatric residents' self-reported experiences giving parenting advice and explore the relationship between parenting advice given and types of parenting residents received as children. Thirteen OUHSC pediatric residents were individually interviewed to examine experiences being parented and giving parenting advice. Phenomenological methods were used to explicate themes and secondary analyses explored relationships of findings based upon Baumrind's parenting styles (authoritative, authoritarian, permissive). While childhood experiences were not specifically correlated to the parenting advice style of pediatric residents interviewed, virtually all reported relying upon childhood experiences to generate their advice. Those describing authoritative parents reported giving more authoritative advice while others reported more variable advice. Core interview themes related to residents' parenting advice included anxiety about not being a parent, varying advice based on families' needs, and emphasis of positive interactions and consistency. Themes related to how residents were parented included discipline being a learning process for their parents and recalling that their parents always had expectations, yet always loved them. Pediatric residents interviewed reported giving family centered parenting advice with elements of positive interactions and consistency, but interviews highlighted many areas of apprehension residents have around giving parenting advice. Our study suggests that pediatric residents may benefit from more general educational opportunities to develop the content of their parenting advice, including reflecting on any impact from their own upbringing.
Potential application of the consistency approach for vaccine potency testing.
Arciniega, J; Sirota, L A
2012-01-01
The Consistency Approach offers the possibility of reducing the number of animals used for a potency test. However, it is critical to assess the effect that such reduction may have on assay performance. Consistency of production, sometimes referred to as consistency of manufacture or manufacturing, is an old concept implicit in regulation, which aims to ensure the uninterrupted release of safe and effective products. Consistency of manufacture can be described in terms of process capability, or the ability of a process to produce output within specification limits. For example, the standard method for potency testing of inactivated rabies vaccines is a multiple-dilution vaccination challenge test in mice that gives a quantitative, although highly variable estimate. On the other hand, a single-dilution test that does not give a quantitative estimate, but rather shows if the vaccine meets the specification has been proposed. This simplified test can lead to a considerable reduction in the number of animals used. However, traditional indices of process capability assume that the output population (potency values) is normally distributed, which clearly is not the case for the simplified approach. Appropriate computation of capability indices for the latter case will require special statistical considerations.
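The "traditional indices of process capability" mentioned above include Cpk, which scales the distance from the process mean to the nearest specification limit by three standard deviations. A minimal Python sketch (illustrative values; note that it bakes in exactly the normality assumption the abstract says breaks down for a pass/fail single-dilution test):

```python
import numpy as np

def cpk(values, lsl, usl):
    """Classical capability index: distance from the mean to the
    nearest specification limit in units of 3 sigma. Assumes the
    output (here, potency values) is normally distributed."""
    mu = np.mean(values)
    sigma = np.std(values, ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)
```

A Cpk of 1.33 or more is a common rule of thumb for a capable process; for non-normal, pass/fail output the index must be replaced by the special statistical treatment the abstract calls for.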
NASA Astrophysics Data System (ADS)
Lahay, R. R.; Misrun, S.; Sipayung, R.
2018-02-01
Cocoa is a plant whose seeds are recalcitrant. Applying polyethylene glycol (PEG) and using various storage containers was expected to extend the storability of cocoa seeds over the storage period. The research aimed to determine the storage capacity of cocoa seeds treated with PEG in various storage containers. The research took place in Hataram Jawa II, Kabupaten Simalungun, Propinsi Sumatera Utara, Indonesia. The method of this research is a split-split plot design with 3 replications. Storage period was put on the main plot, which consisted of 4 levels; PEG concentration was put on the sub plot, consisting of 4 levels; and storage container was put on the sub-sub plot, consisting of 3 types. The results showed that after 4 days of storage with a 45% PEG concentration in all storage containers, the percentage of seed germination decreased to 2.90%, and germination could be maintained until 16 days with a 45% PEG concentration in a perforated plastic storage container. The percentage of molded seeds and the seed moisture content increased with longer storage, although seed moisture content increased only until day 12 of storage and decreased at day 16.
Coty, Jean-Baptiste; Noiray, Magali; Vauthier, Christine
2018-04-26
A surface plasmon resonance (SPR) chip was developed to study the activation of the complement system triggered by nanomaterials in contact with human serum, an important concern today for warranting the safety of nanomedicines. The developed chip was tested for its specificity in complex medium and its longevity of use. It was then employed to assess the release of complement fragments upon incubation of nanoparticles in serum. A comparison was made with other current methods for assessing complement activation (μC-IE, ELISA). The SPR chip was found to give a consistent response for C3a release upon activation by nanoparticles. Results were similar to those obtained by μC-IE. However, ELISA detection of iC3b fragments showed an unexplained high non-specific background. The impact of sample preparation preceding the analysis was assessed with the newly developed SPR method. Removing the nanoparticles before analysis markedly modified the obtained response, possibly leading to false negative results. The SPR chip developed in this work allows for an automated assessment of complement activation triggered by nanoparticles, with the possibility of multiplexed analysis. The design of the chip proved to give consistent results of complement activation by nanoparticles.
Sonntag, Axel; Zizzo, Daniel John
2015-01-01
We present the results of an experiment that (a) shows the usefulness of screening out drop-outs and (b) tests whether different methods of payment and reminder intervals affect charitable giving. Following a lab session, participants could make online donations to charity for a total duration of three months. Our procedure justifying the exclusion of drop-outs consists of requiring participants to collect payments in person, flexibly, as known to them in advance and as highlighted to them later. Our interpretation is that participants who failed to collect their positive payments under these circumstances are likely not to satisfy dominance. If we restrict the sample to subjects who did not drop out, but not otherwise, reminders significantly increase the overall amount of charitable giving. We also find that weekly reminders are no more effective than monthly reminders in increasing charitable giving, and that, in our three-month experiment, standing orders do not increase giving relative to one-off donations. PMID:26252524
Do Students in Secondary Education Manifest Sexist Attitudes?
ERIC Educational Resources Information Center
Pozo, Carmen; Martos, Maria J.; Morillejo, Enrique Alonso
2010-01-01
Introduction: Sexism and sexist attitudes can give rise to gender violence. It is therefore important to analyze these variables at an early age (in secondary school classrooms); from this analysis we will have a basis for intervention. Method: The study sample consists of 962 secondary school students. Measuring instruments were used to assess…
Mixed-Methods Study of Online and Written Organic Chemistry Homework
ERIC Educational Resources Information Center
Malik, Kinza; Martinez, Nylvia; Romero, Juan; Schubel, Skyler; Janowicz, Philip A.
2014-01-01
Connect for organic chemistry is an online learning tool that gives students the opportunity to learn about all aspects of organic chemistry through the ease of the digital world. This research project consisted of two fundamental questions. The first was to discover whether there was a difference in undergraduate organic chemistry content…
Structural and functional aspects of C1-inhibitor.
Bos, Ineke G A; Hack, C Erik; Abrahams, Jan Pieter
2002-09-01
C1-Inh is a serpin that inhibits serine proteases from the complement and the coagulation pathway. C1-Inh consists of a serpin domain and a unique N-terminal domain and is heavily glycosylated. Non-functional mutants of C1-Inh can give insight into the inhibitory mechanism of C1-Inh. This review describes a novel 3D model of C1-Inh, based on a newly developed homology modelling method. This model gives insight into a possible potentiation mechanism of C1-Inh and based on this model the essential residues for efficient inhibition by C1-Inh are discussed.
NASA Technical Reports Server (NTRS)
Saltsman, J. F.; Halford, G. R.
1979-01-01
The method of strainrange partitioning is used to predict the cyclic lives of the Metal Properties Council's long time creep-fatigue interspersion tests of several steel alloys. Comparisons are made with predictions based upon the time- and cycle-fraction approach. The method of strainrange partitioning is shown to give consistently more accurate predictions of cyclic life than is given by the time- and cycle-fraction approach.
An eigenvalue approach to quantum plasmonics based on a self-consistent hydrodynamics method
NASA Astrophysics Data System (ADS)
Ding, Kun; Chan, C. T.
2018-02-01
Plasmonics has attracted much attention not only because it has useful properties such as strong field enhancement, but also because it reveals the quantum nature of matter. To handle quantum plasmonics effects, ab initio packages or empirical Feibelman d-parameters have been used to explore the quantum correction of plasmonic resonances. However, most of these methods are formulated within the quasi-static framework. The self-consistent hydrodynamics model offers a reliable approach to study quantum plasmonics because it can incorporate the quantum effect of the electron gas into classical electrodynamics in a consistent manner. Instead of the standard scattering method, we formulate the self-consistent hydrodynamics method as an eigenvalue problem to study quantum plasmonics with electrons and photons treated on the same footing. We find that the eigenvalue approach must involve a global operator, which originates from the energy functional of the electron gas. This manifests the intrinsic nonlocality of the response of quantum plasmonic resonances. Our model gives the analytical forms of quantum corrections to plasmonic modes, incorporating quantum electron spill-out effects and electrodynamical retardation. We apply our method to study the quantum surface plasmon polariton for a single flat interface.
Veltri, Lucia; Giofrè, Salvatore V; Devo, Perry; Romeo, Roberto; Dobbs, Adrian P; Gabriele, Bartolo
2018-02-02
A novel carbonylative approach to the synthesis of functionalized 1H-benzo[d]imidazo[1,2-a]imidazoles is presented. The method consists of the oxidative aminocarbonylation of N-substituted-1-(prop-2-yn-1-yl)-1H-benzo[d]imidazol-2-amines, carried out in the presence of secondary nucleophilic amines, to give the corresponding alkynylamide intermediates, followed by in situ conjugate addition and double-bond isomerization, to give 2-(1-alkyl-1H-benzo[d]imidazo[1,2-a]imidazol-2-yl)acetamides. Products were obtained in good to excellent yields (64-96%) and high turnover numbers (192-288 mol of product per mol of catalyst) under relatively mild conditions (100 °C under 20 atm of a 4:1 mixture of CO-air), using a simple catalytic system consisting of PdI2 (0.33 mol %) in conjunction with KI (0.33 equiv).
NASA Astrophysics Data System (ADS)
Liu, Zhaosen; Ian, Hou
2016-01-01
We give a theoretical study on the magnetic properties of monolayer nanodisks with both Heisenberg exchange and Dzyaloshinsky-Moriya (DM) interactions. In particular, we survey the magnetic effects caused by anisotropy, external magnetic field, and disk size when DM interaction is present by means of a new quantum simulation method facilitated by a self-consistent algorithm based on mean field theory. This computational approach finds that uniaxial anisotropy and transversal magnetic field enhance the net magnetization as well as increase the transition temperature of the vortical phase while preserving the chiralities of the swirly magnetic structures, whereas when the strength of DM interaction is sufficiently strong for a given disk size, magnetic domains appear within the circularly bounded region, which vanish and give in to a single vortex when a transversal magnetic field is applied. The latter confirms the magnetic skyrmions induced by the magnetic field as observed in the experiments.
How Do Detergents Work? A Qualitative Assay to Measure Amylase Activity
ERIC Educational Resources Information Center
Novo, M. Teresa; Casanoves, Marina; Garcia-Vallvé, Santi; Pujadas, Gerard; Mulero, Miquel; Valls, Cristina
2016-01-01
We present a practical activity focusing on two main goals: to give learners the opportunity to experience how the scientific method works and to increase their knowledge about enzymes in everyday situations. The exercise consists of determining the amylase activity of commercial detergents. The methodology is based on a qualitative assay using a…
Simplified DFT methods for consistent structures and energies of large systems
NASA Astrophysics Data System (ADS)
Caldeweyher, Eike; Gerit Brandenburg, Jan
2018-05-01
Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems, with particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock method (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview of the design of these methods and a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and we highlight some realistic applications to large organic crystals with several hundred atoms in the primitive unit cell.
Size of Self-Gravity Wakes from Cassini UVIS Tracking Occultations and Ring Transparency Statistics
NASA Astrophysics Data System (ADS)
Esposito, Larry W.; Rehnberg, Morgan; Colwell, Joshua E.; Sremcevic, Miodrag
2017-10-01
We compare two methods for determining the size of self-gravity wakes in Saturn's rings. Analysis of gaps seen in UVIS occultations gives a power law distribution from 10–100 m (Rehnberg et al. 2017). Excess variance from UVIS occultations can be related to characteristic clump widths, a method which extends the work of Showalter and Nicholson (1990) to more arbitrary shadow distributions. In the middle A ring, we use results from Colwell et al. (2017) for the variance and results from Jerousek et al. (2016) for the relative size of gaps and wakes to estimate the wake width consistent with the excess variance observed there. Our method gives W = sqrt(A) × E/T² × (1 + S/W), where A is the area observed by UVIS in an integration period, E is the measured excess variance above Poisson statistics, T is the mean transparency, and S and W are the separation and width of self-gravity wakes in the granola bar model of Colwell et al. (2006). We find W ~ 10 m and infer the wavelength of the fastest-growing instability λ_Toomre = S + W ~ 30 m. This is consistent with the calculation of the Toomre wavelength from the surface mass density of the A ring, and with the highest resolution UVIS star occultations.
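The excess-variance relation reduces to a single algebraic expression once the S/W ratio is taken from independent occultation fits, so the wake width can be evaluated directly. The sketch below assumes illustrative input values, not the paper's data.

```python
import math

def wake_width(area_m2, excess_variance, mean_transparency, s_over_w):
    """Wake width W from the excess-variance relation
        W = sqrt(A) * E / T**2 * (1 + S/W),
    with A the area observed per integration period, E the excess
    variance above Poisson statistics, T the mean transparency, and
    s_over_w the S/W ratio of the 'granola bar' model taken from
    independent occultation fits. Inputs here are illustrative."""
    return (math.sqrt(area_m2) * excess_variance / mean_transparency**2
            * (1.0 + s_over_w))

# Hypothetical numbers purely to exercise the formula:
w_est = wake_width(1.0e4, 0.02, 0.5, 2.0)
```

With these made-up inputs the formula gives sqrt(1e4) × 0.02 / 0.25 × 3 = 24 m, of the same order as the ~10 m result quoted above.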
Size of Self-Gravity Wakes from Cassini UVIS Tracking Occultations and Ring Transparency Statistics
NASA Astrophysics Data System (ADS)
Esposito, L. W.; Rehnberg, M.; Colwell, J. E.; Sremcevic, M.
2017-12-01
We compare two methods for determining the size of self-gravity wakes in Saturn's rings. Analysis of gaps seen in UVIS occultations gives a power law distribution from 10–100 m (Rehnberg et al. 2017). Excess variance from UVIS occultations can be related to characteristic clump widths, a method which extends the work of Showalter and Nicholson (1990) to more arbitrary shadow distributions. In the middle A ring, we use results from Colwell et al. (2017) for the variance and results from Jerousek et al. (2016) for the relative size of gaps and wakes to estimate the wake width consistent with the excess variance observed there. Our method gives W = sqrt(A) × E/T² × (1 + S/W), where A is the area observed by UVIS in an integration period, E is the measured excess variance above Poisson statistics, T is the mean transparency, and S and W are the separation and width of self-gravity wakes in the granola bar model of Colwell et al. (2006). We find W ~ 10 m and infer the wavelength of the fastest-growing instability λ_Toomre = S + W ~ 30 m. This is consistent with the calculation of the Toomre wavelength from the surface mass density of the A ring, and with the highest resolution UVIS star occultations.
Conversion of radius of curvature to power (and vice versa)
NASA Astrophysics Data System (ADS)
Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.
2015-09-01
Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is the form accuracy. In practice, deviation from the ideal surface consists of low-frequency errors; the form error most often amounts to no more than a few undulations across a surface. These types of errors are measured in a variety of ways, including interferometry and tactile methods like profilometry, with the latter often employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius of curvature tolerances, including best practices and calculation of the power value with respect to the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different cases in manufacturing that are of interest to fabricators and designers. The results make clear how the definitions and results should be documented for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on final accuracy and measurement method. Results include all necessary equations for conversion, giving optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios of surface type, required accuracy, and metrology method employed.
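The sagitta-based link between a radius error and power can be sketched numerically. The helper below uses the standard spherical sag formula and one common interferometric convention (one fringe = λ/2 of sag change in double pass); that convention and the default HeNe wavelength are assumptions for illustration, not taken from the paper.

```python
import math

def sagitta(radius, diameter):
    """Sag of a spherical cap of the given radius over the full
    aperture: sag = R - sqrt(R^2 - (D/2)^2). Units: mm."""
    h = diameter / 2.0
    return radius - math.sqrt(radius**2 - h**2)

def power_fringes_from_radius_error(radius, d_radius, diameter,
                                    wavelength=632.8e-6):
    """Convert a radius error d_radius (mm) into interferometric
    power (fringes), using the common double-pass convention that
    one fringe corresponds to lambda/2 of sag change. The HeNe
    default wavelength (in mm) is an assumption."""
    d_sag = sagitta(radius + d_radius, diameter) - sagitta(radius, diameter)
    return d_sag / (wavelength / 2.0)
```

A longer radius flattens the surface (smaller sag), so a positive radius error maps to negative power in this sign convention.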
Nuclear constraints on the age of the universe
NASA Technical Reports Server (NTRS)
Schramm, D. N.
1982-01-01
A review is made of how one can use nuclear physics to put rather stringent limits on the age of the universe and thus the cosmic distance scale. The age can be estimated to a fair degree of accuracy. No single measurement of the time since the Big Bang gives a specific, unambiguous age. There are several methods that together fix the age with surprising precision. In particular, there are three totally independent techniques for estimating an age and a fourth technique which involves finding consistency of the other three in the framework of the standard Big Bang cosmological model. The three independent methods are: cosmological dynamics, the age of the oldest stars, and radioactive dating. This paper concentrates on the third of the three methods, and the consistency technique.
Charmonium-nucleon interactions from the time-dependent HAL QCD method
NASA Astrophysics Data System (ADS)
Sugiura, Takuya; Ikeda, Yoichi; Ishii, Noriyoshi
2018-03-01
The charmonium-nucleon effective central interactions have been computed by the time-dependent HAL QCD method. This gives an updated result of a previous study based on the time-independent method, which is now known to be problematic because of the difficulty in achieving ground-state saturation. We show that the result is consistent with heavy-quark symmetry. No bound state is observed in the analysis of the scattering phase shift; however, this result will inform future searches for hidden-charm pentaquarks that take channel-coupling effects into account.
Multicritical points for spin-glass models on hierarchical lattices.
Ohzeki, Masayuki; Nishimori, Hidetoshi; Berker, A Nihat
2008-06-01
The locations of multicritical points on many hierarchical lattices are numerically investigated by renormalization-group analysis. The results are compared with an analytical conjecture derived by using duality, gauge symmetry, and the replica method. We find that the conjecture does not give the exact answer but leads to locations slightly away from the numerically reliable data. We propose an improved conjecture that gives more precise predictions of the multicritical points than the conventional one. This improvement is inspired by a different point of view coming from the renormalization group and yields answers consistent with a large body of numerical data.
Search automation of the generalized method of device operational characteristics improvement
NASA Astrophysics Data System (ADS)
Petrova, I. Yu; Puchkova, A. A.; Zaripova, V. M.
2017-01-01
The article presents brief results of an analysis of existing methods for finding the closest patents, which can be applied to determine generalized methods of improving device operational characteristics. We survey the most widespread clustering algorithms and the metrics for determining the degree of proximity between two documents. The article proposes a technique for determining generalized methods; it has two implementation variants and consists of seven steps. This technique has been implemented in the "Patents search" subsystem of the "Intellect" system. The article also gives an example of the use of the proposed technique.
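As a concrete instance of the kind of document-proximity metric surveyed for patent clustering, the sketch below computes cosine similarity on raw term counts; it is a minimal illustration, not the code of the "Patents search" subsystem.

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two documents represented as raw
    term-count vectors, one of the standard proximity metrics used
    when clustering text (a minimal sketch)."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

Identical documents score 1.0 and documents with no shared terms score 0.0; real systems would typically weight terms (e.g. TF-IDF) before taking the cosine.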
Chabirand, Aude; Loiseau, Marianne; Renaudin, Isabelle; Poliakoff, Françoise
2017-01-01
A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study where each laboratory had to analyze the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. Firstly, there was the standard statistical approach, consisting of analyzing samples known to be positive and samples known to be negative and reporting the proportion of false-positive and false-negative results to calculate diagnostic specificity and sensitivity, respectively. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based, on the one hand, on the probability of detection model, and, on the other hand, on Bayes' theorem. These various statistical approaches are complementary and give consistent results. Their combination, and in particular the introduction of the new statistical approaches, gives overall information on the performance and limitations of the different methods, and is particularly useful for selecting the most appropriate detection scheme with regard to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6, developed respectively by Hren (2007), Pelletier (2009), and with patented oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results.
The statistical tools presented in this paper and their combination can be applied to many other studies concerning plant pathogens and other disciplines that use qualitative detection methods. PMID:28384335
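The standard sensitivity/specificity computation described above is easy to make concrete. The sketch below tallies true/false positives and negatives from qualitative test outcomes; the data shape is a hypothetical simplification, not the study's sample panel.

```python
def diagnostic_performance(results):
    """Diagnostic sensitivity and specificity from qualitative test
    outcomes. `results` is a list of (true_status, test_positive)
    boolean pairs; sensitivity = TP/(TP+FN), specificity = TN/(TN+FP).
    A minimal sketch of the standard approach, not the study's code."""
    tp = sum(1 for truth, test in results if truth and test)
    fn = sum(1 for truth, test in results if truth and not test)
    tn = sum(1 for truth, test in results if not truth and not test)
    fp = sum(1 for truth, test in results if not truth and test)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```

For example, 9 of 10 infected samples detected and 8 of 10 healthy samples correctly negative gives sensitivity 0.9 and specificity 0.8.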
Electronic structure and magnetic ordering in manganese hydride
NASA Astrophysics Data System (ADS)
Magnitskaya, M. V.; Kulikov, N. I.
1991-03-01
The self-consistent electron energy bands of antiferromagnetic (AFM) and non-magnetic manganese hydride are calculated using the linear muffin-tin orbital (LMTO) method. The calculated values of the equilibrium volume and of the magnetic moment on the manganese site are in good agreement with experiment. The Fermi surface of paramagnetic MnH contains two nesting parts, and their superposition gives rise to the AFM gap.
Methods for consistent forewarning of critical events across multiple data channels
Hively, Lee M.
2006-11-21
This invention teaches further method improvements to forewarn of critical events via phase-space dissimilarity analysis of data from biomedical equipment, mechanical devices, and other physical processes. One improvement involves conversion of time-serial data into equiprobable symbols. A second improvement is a method to maximize the channel-consistent total-true rate of forewarning from a plurality of data channels over multiple data sets from the same patient or process. This total-true rate requires resolution of the forewarning indications into true positives, true negatives, false positives and false negatives. A third improvement is the use of various objective functions, as derived from the phase-space dissimilarity measures, to give the best forewarning indication. A fourth improvement uses various search strategies over the phase-space analysis parameters to maximize said objective functions. A fifth improvement shows the usefulness of the method for various biomedical and machine applications.
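The first improvement, conversion of time-serial data into equiprobable symbols, amounts to rank-based binning: cut points are chosen so that each symbol occurs (nearly) equally often. A minimal sketch, not the patented implementation:

```python
def equiprobable_symbols(series, n_symbols=4):
    """Convert time-serial data into equiprobable symbols: each
    sample is replaced by a symbol index determined by its rank,
    so every symbol occurs (nearly) equally often regardless of
    the amplitude distribution. Illustrative sketch only."""
    # indices of the samples sorted by value
    order = sorted(range(len(series)), key=lambda i: series[i])
    symbols = [0] * len(series)
    for rank, idx in enumerate(order):
        # evenly split the ranks into n_symbols bins
        symbols[idx] = rank * n_symbols // len(series)
    return symbols
```

Because binning is by rank rather than by fixed amplitude thresholds, a heavy-tailed signal still yields a balanced symbol histogram, which is the point of the equiprobable transformation.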
Nuclear constraints on the age of the universe
NASA Technical Reports Server (NTRS)
Schramm, D. N.
1983-01-01
A review is made of how one can use nuclear physics to put rather stringent limits on the age of the universe and thus the cosmic distance scale. The age can be estimated to a fair degree of accuracy. No single measurement of the time since the Big Bang gives a specific, unambiguous age. There are several methods that together fix the age with surprising precision. In particular, there are three totally independent techniques for estimating an age and a fourth technique which involves finding consistency of the other three in the framework of the standard Big Bang cosmological model. The three independent methods are: cosmological dynamics, the age of the oldest stars, and radioactive dating. This paper concentrates on the third of the three methods, and the consistency technique. Previously announced in STAR as N83-34868
Wentzel-Kramers-Brillouin method in the Bargmann representation. [of quantum mechanics
NASA Technical Reports Server (NTRS)
Voros, A.
1989-01-01
It is demonstrated that the Bargmann representation of quantum mechanics is ideally suited for semiclassical analysis, using as an example the WKB method applied to the bound-state problem in a single well of one degree of freedom. For the harmonic oscillator, this WKB method trivially gives the exact eigenfunctions in addition to the exact eigenvalues. For an anharmonic well, a self-consistent variational choice of the representation greatly improves the accuracy of the semiclassical ground state. Also, a simple change of scale illuminates the relationship of semiclassical versus linear perturbative expansions, allowing a variety of multidimensional extensions.
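The harmonic-oscillator claim can be seen from Bohr–Sommerfeld quantization, which the WKB method reproduces; a short derivation:

```latex
% Bohr--Sommerfeld (WKB) quantization for V(x) = \tfrac{1}{2}m\omega^2 x^2:
\oint p\,dx = 2\pi\hbar\left(n + \tfrac{1}{2}\right), \qquad
p(x) = \sqrt{2m\left(E - \tfrac{1}{2}m\omega^2 x^2\right)}.
% The closed orbit in phase space is an ellipse of area 2\pi E/\omega, so
\frac{2\pi E}{\omega} = 2\pi\hbar\left(n + \tfrac{1}{2}\right)
\quad\Longrightarrow\quad
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right),
% which already coincides with the exact spectrum.
```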
NΩ interaction from two approaches in lattice QCD
NASA Astrophysics Data System (ADS)
Etminan, Faisal; Firoozabadi, Mohammad Mehdi
2014-10-01
We compare the standard finite volume method by Lüscher with the potential method by the HAL QCD collaboration by calculating the ground state energy of the N(nucleon)-Ω(Omega) system in the ⁵S₂ channel. We employ 2+1 flavor full QCD configurations on a (1.9 fm)³ × 3.8 fm lattice at a lattice spacing a ≃ 0.12 fm, whose ud (s) quark mass corresponds to mπ = 875(1) MeV (mK = 916(1) MeV). We have found that both methods give reasonably consistent results, indicating one NΩ bound state at this parameter set.
Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X
2010-05-01
Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
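The RFE criterion that the abstract critiques, repeatedly dropping the feature with the smallest |w_j| and retraining, can be sketched with a toy linear SVM. Everything below (the subgradient trainer, the data in the example) is an illustrative stand-in, not the authors' MFE implementation.

```python
def train_linear_svm(X, y, epochs=200, lr=0.1, lam=0.01):
    """Tiny linear SVM trained by subgradient descent on the
    regularized hinge loss. A toy stand-in for a real SVM solver."""
    n_feat = len(X[0])
    w, b = [0.0] * n_feat, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            for j in range(n_feat):
                # hinge subgradient plus L2 shrinkage
                w[j] -= lr * (lam * w[j] - (yi * xi[j] if margin < 1 else 0.0))
            if margin < 1:
                b += lr * yi
    return w, b

def rfe_rank(X, y):
    """RFE-style ranking: retrain, then drop the feature with the
    smallest |w_j| (the weight-based criterion discussed above).
    Returns feature indices in elimination order."""
    active = list(range(len(X[0])))
    dropped = []
    while len(active) > 1:
        w, _ = train_linear_svm([[row[j] for j in active] for row in X], y)
        worst = min(range(len(active)), key=lambda k: abs(w[k]))
        dropped.append(active.pop(worst))
    dropped.extend(active)
    return dropped
```

On a toy set where feature 0 carries the label and feature 1 is a constant, RFE eliminates the uninformative feature first; the abstract's point is that on harder (especially kernelized) problems this weight-based criterion need not track the margin.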
A PFI mill can be used to predict biomechanical pulp strength properties
Gary F. Leatham; Gary C. Myers
1990-01-01
Recently, we showed that a biomechanical pulping process in which aspen chips are pretreated with a white-rot fungus can give energy savings and can increase paper sheet strength. To optimize this process, we need more efficient ways to evaluate the fungal treatments. Here, we examine a method that consists of treating coarse refiner mechanical pulp, refining in a PFI...
Exploiting Non-sequence Data in Dynamic Model Learning
2013-10-01
For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM’s maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB ...some estimation problems, this approach is able to give unique and consistent estimates while the maximum- likelihood method gets entangled in
The acoustical design of vehicles-a challenge for qualitative evaluation
NASA Astrophysics Data System (ADS)
Schulte-Fortkamp, Brigitte; Genuit, Klaus; Fiebig, Andre
2005-09-01
Whenever the acoustical design of vehicles is explored, the crucial question of the appropriate evaluation method arises. Research shows that not only acoustic but also non-acoustic parameters have a major influence on how sounds are evaluated. Therefore, new evaluation methods have to be implemented: methods that make it possible to test the quality of a given ambience and to register effects and evaluations in their functional interdependence, along with the influence of personal and contextual factors. Moreover, new methods have to give insight into the processes of evaluation and their contextual parameters. In other words, the task of evaluating acoustical ambiences consists of designating a set of social, psychological, and cultural conditions that determine particular individual and collective behavior, attitudes, and emotions relative to the given ambience. However, no specific recommendations yet exist that describe how to assess those specific sound effects. There is therefore a need to develop alternative evaluation methods that can better predict the effects of acoustical ambiences. A method of evaluation is presented which incorporates a new, sensitive approach to the evaluation of vehicle sounds.
Westwood, A; Bullock, D G; Whitehead, T P
1986-01-01
Hexokinase methods for serum glucose assay appeared to give slightly but consistently higher inter-laboratory coefficients of variation than all methods combined in the UK External Quality Assessment Scheme; their performance over a two-year period was therefore compared with that for three groups of glucose oxidase methods. This assessment showed no intrinsic inferiority in the hexokinase method. The greater variation may be due to the more heterogeneous group of instruments, particularly discrete analysers, on which the method is used. The Beckman Glucose Analyzer and Astra group (using a glucose oxidase method) showed the least inter-laboratory variability but also the lowest mean value. No comment is offered on the absolute accuracy of any of the methods.
Exploring the dynamics of balance data — movement variability in terms of drift and diffusion
NASA Astrophysics Data System (ADS)
Gottschall, Julia; Peinke, Joachim; Lippens, Volker; Nagel, Volker
2009-02-01
We introduce a method to analyze postural control on a balance board by reconstructing the underlying dynamics in terms of a Langevin model. Drift and diffusion coefficients are directly estimated from the data and fitted by a suitable parametrization. The governing parameters are utilized to evaluate balance performance and the impact of supra-postural tasks on it. We show that the proposed method of analysis not only gives self-consistent results but also provides a plausible model for the reconstruction of balance dynamics.
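Direct estimation of the drift coefficient from data is conceptually simple: condition the per-step increment on the current state and average. The sketch below applies this to a synthetic Ornstein-Uhlenbeck series standing in for sway data; all parameters are illustrative, not fitted to any experiment.

```python
import random

def simulate_ou(n=50000, theta=1.0, sigma=0.5, dt=0.01, seed=1):
    """Synthetic Ornstein-Uhlenbeck series, a stand-in for
    balance-board sway data (dx = -theta*x*dt + sigma*dW)."""
    random.seed(seed)
    x, xs = 0.0, []
    for _ in range(n):
        xs.append(x)
        x += -theta * x * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
    return xs

def drift_at(xs, x0, width=0.1, dt=0.01):
    """First Kramers-Moyal coefficient D1(x0): the conditional mean
    increment per unit time, estimated directly from the data, in the
    spirit of the drift estimation described above (minimal sketch)."""
    incs = [xs[i + 1] - xs[i] for i in range(len(xs) - 1)
            if abs(xs[i] - x0) < width]
    return sum(incs) / (len(incs) * dt)
```

For an OU process the true drift is -theta*x, so the estimate at a positive state value should come out negative, reflecting the restoring force toward upright balance.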
Computed Potential Energy Surfaces and Minimum Energy Pathway for Chemical Reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)
1994-01-01
Computed potential energy surfaces are often required for computation of such observables as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method with the Dunning correlation consistent basis sets to obtain accurate energetics, gives useful results for a number of chemically important systems. Applications to complex reactions leading to NO and soot formation in hydrocarbon combustion are discussed.
Decontamination and Disposal Methods for Chemical Agents - A Literature Survey
1982-11-01
aqueous copper(I) ammonia complex to give a red copper(I) acetylide precipitate. The precipitate was determined either iodometrically (sensitivity of 1...ppm in decontamination solution) or colorimetrically by a copper(II) ammonia complex (12 ppm). Lewisite was also assayed by gas liquid chromatography...to ammonia (then degraded to nitrogen) and carbonate ion. The latter reaction is relatively slow. The reaction may thus be considered to consist of
ERIC Educational Resources Information Center
Hingsburger, Dave
1986-01-01
Ten guidelines for effective use of positive reinforcement as a parenting technique are described. Practical examples are used to illustrate such principles as consistency, immediacy, and specificity in giving praise. A distinction is made between giving reinforcement and giving love. (JW)
Townsend, Leigh; Williams, Richard L.; Anuforom, Olachi; Berwick, Matthew R.; Halstead, Fenella; Hughes, Erik; Stamboulis, Artemis; Oppenheim, Beryl; Gough, Julie; Grover, Liam; Scott, Robert A. H.; Webber, Mark; Peacock, Anna F. A.; Belli, Antonio; Logan, Ann
2017-01-01
The interface between implanted devices and their host tissue is complex and is often optimized for maximal integration and cell adhesion. However, this also gives a surface suitable for bacterial colonization. We have developed a novel method of modifying the surface at the material–tissue interface with an antimicrobial peptide (AMP) coating to allow cell attachment while inhibiting bacterial colonization. The technology reported here is a dual AMP coating. The dual coating consists of AMPs covalently bonded to the hydroxyapatite surface, followed by deposition of electrostatically bound AMPs. The dual approach gives an efficacious coating which is stable for over 12 months and can prevent colonization of the surface by both Gram-positive and Gram-negative bacteria. PMID:28077764
Method for measuring and controlling beam current in ion beam processing
Kearney, Patrick A.; Burkhart, Scott C.
2003-04-29
A method for producing film thickness control of ion beam sputter deposition films. Great improvement in film thickness control is accomplished by keeping the total current supplied to both the beam and suppressor grids of a radio frequency (RF) ion beam source constant, rather than just the current supplied to the beam grid. By controlling both currents with this method, deposition rates are more stable, which allows the deposition of layers with extremely well controlled thicknesses, to about 0.1%. The method is carried out by calculating deposition rates based on the total of the suppressor and beam currents and maintaining that total constant by adjusting the RF power, which gives more consistent values.
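The control idea, adjusting RF power until the beam-plus-suppressor current matches a setpoint, can be sketched as a simple proportional loop. The current-versus-power characteristics below are hypothetical linear stand-ins, not measured source behavior.

```python
def regulate_total_current(beam_of_power, suppressor_of_power,
                           setpoint, power=100.0, gain=0.5, steps=200):
    """Hold the *total* (beam + suppressor) current at a setpoint by
    proportionally adjusting RF power, the control idea described in
    the patent. The callables model the source's current response to
    power and are illustrative only."""
    for _ in range(steps):
        total = beam_of_power(power) + suppressor_of_power(power)
        power += gain * (setpoint - total)  # proportional correction
    return power

# Hypothetical, monotone current/power characteristics (arbitrary units):
beam = lambda p: 0.8 * p
supp = lambda p: 0.1 * p
p = regulate_total_current(beam, supp, setpoint=45.0)
```

With these linear stand-ins the loop contracts geometrically toward the power at which the total current equals the setpoint (here, p = 50 gives 0.9 × 50 = 45).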
Prospecting in Ultracool Dwarfs: Measuring the Metallicities of Mid- and Late-M Dwarfs
NASA Astrophysics Data System (ADS)
Mann, Andrew W.; Deacon, Niall R.; Gaidos, Eric; Ansdell, Megan; Brewer, John M.; Liu, Michael C.; Magnier, Eugene A.; Aller, Kimberly M.
2014-06-01
Metallicity is a fundamental parameter that contributes to the physical characteristics of a star. The low temperatures and complex molecules present in M dwarf atmospheres make it difficult to measure their metallicities using techniques that have been commonly used for Sun-like stars. Although there has been significant progress in developing empirical methods to measure M dwarf metallicities over the last few years, these techniques have been developed primarily for early- to mid-M dwarfs. We present a method to measure the metallicity of mid- to late-M dwarfs from moderate resolution (R ~ 2000) K-band (≃2.2 μm) spectra. We calibrate our formula using 44 wide binaries containing an F, G, K, or early-M primary of known metallicity and a mid- to late-M dwarf companion. We show that similar features and techniques used for early-M dwarfs are still effective for late-M dwarfs. Our revised calibration is accurate to ~0.07 dex for M4.5-M9.5 dwarfs with -0.58 < [Fe/H] < +0.56 and shows no systematic trends with spectral type, metallicity, or the method used to determine the primary star metallicity. We show that our method gives consistent metallicities for the components of M+M wide binaries. We verify that our new formula works for unresolved binaries by combining spectra of single stars. Lastly, we show that our calibration gives consistent metallicities with the Mann et al. study for overlapping (M4-M5) stars, establishing that the two calibrations can be used in combination to determine metallicities across the entire M dwarf sequence.
Recent Development on O(+) - O Collision Frequency and Ionosphere-Thermosphere Coupling
NASA Technical Reports Server (NTRS)
Omidvar, K.; Menard, R.
1999-01-01
The collision frequency between an oxygen atom and its singly charged ion controls the momentum transfer between the ionosphere and the thermosphere. There has been a long-standing discrepancy, extending over a decade, between the theoretical and empirical determinations of this frequency: the empirical value exceeded the theoretical value by a factor of 1.7. Recent improvements in theory were obtained by using accurate oxygen ion-oxygen atom potential energy curves and partial-wave quantum mechanical calculations. We have now applied three independent statistical methods to the observational data, obtained at the MIT/Millstone Hill Observatory and consisting of two sets, A and B. These methods give results consistent with each other and, together with the recent theoretical improvements, bring the ratio close to unity, as it should be. The three statistical methods lead to an average ratio of the empirical to the theoretical values of 0.98, with an uncertainty of +/-8%, resolving the old discrepancy between theory and observation. The Hines statistics and the lognormal distribution statistics give lower and upper bounds for Set A of 0.89 and 1.02, respectively. The corresponding bounds for Set B are 1.06 and 1.17. The average values of these bounds thus bracket the ideal value of the ratio, which should be equal to unity. The main source of uncertainty is error in the profile of the oxygen atom density, which is of the order of 11%. An alternative method to find the oxygen atom density is suggested.
NASA Astrophysics Data System (ADS)
Cheng, Wen-Guang; Qiu, De-Qin; Yu, Bo
2017-06-01
This paper is concerned with the fifth-order modified Korteweg-de Vries (fmKdV) equation. It is proved that the fmKdV equation is consistent Riccati expansion (CRE) solvable. Three special forms of soliton-cnoidal wave interaction solutions are discussed analytically and shown graphically. Furthermore, based on the consistent tanh expansion (CTE) method, the related nonlocal symmetry is investigated, and we give the relationship between this kind of nonlocal symmetry and the residual symmetry, which can be obtained with the truncated Painlevé method. We further study the spectral function symmetry and derive the Lax pair of the fmKdV equation. The residual symmetry can be localized to the Lie point symmetry of an enlarged system, and the corresponding finite transformation group is computed. Supported by National Natural Science Foundation of China under Grant No. 11505090, and Research Award Foundation for Outstanding Young Scientists of Shandong Province under Grant No. BS2015SF009
NASA Technical Reports Server (NTRS)
Jaeckel, Louis A.
1989-01-01
To study the problems of encoding visual images for use with a Sparse Distributed Memory (SDM), I consider a specific class of images: those that consist of several pieces, each of which is a line segment or an arc of a circle. This class includes line drawings of characters such as letters of the alphabet. I give a method of representing a segment or an arc by five numbers in a continuous way; that is, similar arcs have similar representations. I also give methods for encoding these numbers as bit strings in an approximately continuous way. The set of possible segments and arcs may be viewed as a five-dimensional manifold M, whose structure is like a Möbius strip. An image, considered to be an unordered set of segments and arcs, is therefore represented by a set of points in M, one for each piece. I then discuss the problem of constructing a preprocessor to find the segments and arcs in these images, although a preprocessor has not been developed. I also describe a possible extension of the representation.
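One natural five-number parametrization with the continuity property described (a line segment is simply the zero-curvature limit of an arc) uses the midpoint, the tangent direction there, the arc length, and the signed curvature. This is an illustrative choice, not necessarily the paper's exact encoding.

```python
import math

def arc_endpoints(x, y, theta, length, curvature):
    """Recover the endpoints of an arc from a five-number
    representation: midpoint (x, y), tangent direction theta at the
    midpoint, arc length, and signed curvature. curvature == 0 gives
    a straight segment, so segments and arcs vary continuously into
    one another. An illustrative parametrization, not necessarily
    the paper's exact encoding."""
    def walk(s):
        # position after moving arc length s from the midpoint
        if abs(curvature) < 1e-12:
            return (x + s * math.cos(theta), y + s * math.sin(theta))
        ang = theta + curvature * s  # heading after turning along the circle
        return (x + (math.sin(ang) - math.sin(theta)) / curvature,
                y - (math.cos(ang) - math.cos(theta)) / curvature)
    return walk(-length / 2.0), walk(length / 2.0)
```

Because small curvatures reproduce the straight-line endpoints to first order, nearby five-tuples describe nearby shapes, which is the continuity property the representation needs.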
In vivo chemistry of iofetamine HCl iodine-123 (IMP)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, R.M.; Wu, J.L.
1988-01-01
Application of chemical methods for characterizing the in vivo behavior of iofetamine HCl ¹²³I (IMP) has shed light on the metabolism of iofetamine in animals and humans. A successful technique consists of ethyl acetate extraction of the metabolites from tissue samples acidified with perchloric acid, separation of the mixture by high-performance liquid chromatography, and quantitation of the radioactive components with a sensitive scintillation detector. Metabolism of iofetamine HCl ¹²³I proceeds sequentially from the N-isopropyl group on the amphetamine side chain. The first step, dealkylation to the primary amine p-iodoamphetamine (PIA), occurs readily in the brain, lungs, and liver; activity in the brain and lungs consists of only IMP and PIA even 24 hr after administration. The rate-limiting step appears to be deamination to give the transitory intermediate p-iodophenylacetone, which is rapidly degraded to p-iodobenzoic acid and conjugated with glycine in the liver to give the end product of metabolism, p-iodohippuric acid, which is excreted through the kidneys in the urine.
Automatic Authorship Detection Using Textual Patterns Extracted from Integrated Syntactic Graphs
Gómez-Adorno, Helena; Sidorov, Grigori; Pinto, David; Vilariño, Darnes; Gelbukh, Alexander
2016-01-01
We apply the integrated syntactic graph feature extraction methodology to the task of automatic authorship detection. This graph-based representation allows integrating different levels of language description into a single structure. We extract textual patterns based on features obtained from shortest path walks over integrated syntactic graphs and apply them to determine the authors of documents. On average, our method outperforms state-of-the-art approaches and gives consistently high results across different corpora, unlike existing methods. Our results show that our textual patterns are useful for the task of authorship attribution. PMID:27589740
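The core idea of path-based pattern extraction can be sketched without the authors' full pipeline. The toy graph, labels, and pattern format below are assumptions for illustration: BFS shortest paths from the graph root to each node are turned into label-sequence strings, which could then serve as features:

```python
from collections import deque

def shortest_path_patterns(graph, labels, root):
    """BFS from `root`; for each reachable node return the sequence of
    node labels along one shortest path, joined into a pattern string."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in parent:
                parent[v] = u
                q.append(v)
    patterns = []
    for node in parent:
        path, n = [], node
        while n is not None:
            path.append(labels[n])
            n = parent[n]
        patterns.append("->".join(reversed(path)))
    return sorted(patterns)

# toy dependency-style graph: ROOT -> verb -> (subject, object)
graph = {0: [1], 1: [2, 3]}
labels = {0: "ROOT", 1: "VB", 2: "NN-subj", 3: "NN-obj"}
print(shortest_path_patterns(graph, labels, 0))
```

Comparing the multisets of such patterns across documents is one simple way to turn the graph representation into an authorship feature vector.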
Townsend, Leigh; Williams, Richard L; Anuforom, Olachi; Berwick, Matthew R; Halstead, Fenella; Hughes, Erik; Stamboulis, Artemis; Oppenheim, Beryl; Gough, Julie; Grover, Liam; Scott, Robert A H; Webber, Mark; Peacock, Anna F A; Belli, Antonio; Logan, Ann; de Cogan, Felicity
2017-01-01
The interface between implanted devices and their host tissue is complex and is often optimized for maximal integration and cell adhesion. However, this also gives a surface suitable for bacterial colonization. We have developed a novel method of modifying the surface at the material-tissue interface with an antimicrobial peptide (AMP) coating to allow cell attachment while inhibiting bacterial colonization. The technology reported here is a dual AMP coating. The dual coating consists of AMPs covalently bonded to the hydroxyapatite surface, followed by deposition of electrostatically bound AMPs. The dual approach gives an efficacious coating which is stable for over 12 months and can prevent colonization of the surface by both Gram-positive and Gram-negative bacteria. © 2017 The Author(s).
Adaptive identification of vessel's added moments of inertia with program motion
NASA Astrophysics Data System (ADS)
Alyshev, A. S.; Melnikov, V. G.
2018-05-01
In this paper, we propose a new experimental method for determining the moments of inertia of a ship model. The paper gives a brief review of existing methods, a description of the proposed method and the experimental stand, the test procedures and calculation formulas, and the experimental results. The proposed method is based on the energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reaction wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.
The separate universe approach to soft limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kenton, Zachary; Mulryne, David J., E-mail: z.a.kenton@qmul.ac.uk, E-mail: d.mulryne@qmul.ac.uk
We develop a formalism for calculating soft limits of n-point inflationary correlation functions using separate universe techniques. Our method naturally allows for multiple fields and leads to an elegant diagrammatic approach. As an application we focus on the trispectrum produced by inflation with multiple light fields, giving explicit formulae for all possible single- and double-soft limits. We also investigate consistency relations and present an infinite tower of inequalities between soft correlation functions which generalise the Suyama-Yamaguchi inequality.
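For context, the Suyama-Yamaguchi inequality that the paper's tower of inequalities generalises is, in its standard well-known form,

$$\tau_{NL} \;\ge\; \left(\frac{6}{5}\, f_{NL}\right)^{2},$$

relating the amplitude $\tau_{NL}$ of the squeezed-limit trispectrum to the amplitude $f_{NL}$ of the squeezed-limit bispectrum.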
A rapid method combining Golgi and Nissl staining to study neuronal morphology and cytoarchitecture.
Pilati, Nadia; Barker, Matthew; Panteleimonitis, Sofoklis; Donga, Revers; Hamann, Martine
2008-06-01
The Golgi silver impregnation technique gives detailed information on neuronal morphology of the few neurons it labels, whereas the majority remain unstained. In contrast, the Nissl staining technique allows for consistent labeling of the whole neuronal population but gives very limited information on neuronal morphology. Most studies characterizing neuronal cell types in the context of their distribution within the tissue slice tend to use the Golgi silver impregnation technique for neuronal morphology followed by deimpregnation as a prerequisite for showing that neuron's histological location by subsequent Nissl staining. Here, we describe a rapid method combining Golgi silver impregnation with cresyl violet staining that provides a useful and simple approach to combining cellular morphology with cytoarchitecture without the need for deimpregnating the tissue. Our method allowed us to identify neurons of the facial nucleus and the supratrigeminal nucleus, as well as assessing cellular distribution within layers of the dorsal cochlear nucleus. With this method, we also have been able to directly compare morphological characteristics of neuronal somata at the dorsal cochlear nucleus when labeled with cresyl violet with those obtained with the Golgi method, and we found that cresyl violet-labeled cell bodies appear smaller at high cellular densities. Our observation suggests that cresyl violet staining is inadequate to quantify differences in soma sizes.
Noise Source Identification in a Reverberant Field Using Spherical Beamforming
NASA Astrophysics Data System (ADS)
Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang
Identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of sound coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, reflections are added to the sound field. Therefore, the source location estimated by conventional methods may give unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.
Measurement of the WW + WZ production cross section using the lepton + jets final state at CDF II.
Aaltonen, T; Adelman, J; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Asaadi, J; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Barria, P; Bartos, P; Bauer, G; Beauchemin, P-H; Bedeschi, F; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Camarda, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carls, B; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Chung, K; Chung, W H; Chung, Y S; Chwalek, T; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Corbo, M; Cordelli, M; Cox, C A; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'Orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; d'Errico, M; Di Canto, A; di Giovanni, G P; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Dorigo, T; Dube, S; Ebina, K; Elagin, A; Erbacher, R; Errede, D; Errede, S; Ershaidat, N; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Frank, M J; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Garosi, P; Gerberich, H; Gerdes, D; Gessler, A; Giagu, 
S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Happacher, F; Hara, K; Hare, D; Hare, M; Harr, R F; Hartz, M; Hatakeyama, K; Hays, C; Heck, M; Heinrich, J; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Hughes, R E; Hurwitz, M; Husemann, U; Hussein, M; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jang, D; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Ketchum, W; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, H W; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kuhr, T; Kulkarni, N P; Kurata, M; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, E; Lee, H S; Lee, J S; Lee, S W; Leone, S; Lewis, J D; Lin, C-J; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Lovas, L; Lucchesi, D; Lueck, J; Lujan, P; Lukens, P; Lungu, G; Lys, J; Lysak, R; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Mastrandrea, P; Mathis, M; Mattson, M E; Mazzanti, P; McFarland, K S; 
McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Mesropian, C; Miao, T; Mietlicki, D; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Mondragon, M N; Moon, C S; Moore, R; Morello, M J; Morlock, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Nett, J; Neu, C; Neubauer, M S; Neubauer, S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramanov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Peiffer, T; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Potamianos, K; Poukhov, O; Prokoshin, F; Pronko, A; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Renton, P; Renz, M; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Rutherford, B; Saarikko, H; Safonov, A; Sakumoto, W K; Santi, L; Sartori, L; Sato, K; Savoy-Navarro, A; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sforza, F; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Simonenko, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Strycker, G L; Suh, J S; Sukhanov, A; Suslov, I; Taffard, A; Takashima, R; 
Takeuchi, Y; Tanaka, R; Tang, J; Tecchio, M; Teng, P K; Thom, J; Thome, J; Thompson, G A; Thomson, E; Tipton, P; Ttito-Guzmán, P; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Trovato, M; Tsai, S-Y; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Vidal, M; Vila, I; Vilar, R; Vogel, M; Volobouev, I; Volpi, G; Wagner, P; Wagner, R G; Wagner, R L; Wagner, W; Wagner-Kuhr, J; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Weinelt, J; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Wilbur, S; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wolfe, H; Wright, T; Wu, X; Würthwein, F; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yi, K; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanetti, A; Zeng, Y; Zhang, X; Zheng, Y; Zucchelli, S
2010-03-12
We report two complementary measurements of the WW + WZ cross section in the final state consisting of an electron or muon, missing transverse energy, and jets, performed using pp̄ collision data at √s = 1.96 TeV collected by the CDF II detector. The first method uses the dijet invariant mass distribution, while the second, more sensitive method uses matrix-element calculations. The result from the second method has a signal significance of 5.4σ and is the first observation of WW + WZ production using this signature. Combining the results gives σ(WW + WZ) = 16.0 ± 3.3 pb, in agreement with the standard model prediction.
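The dijet invariant mass used by the first method is a standard kinematic quantity; the sketch below (with made-up jet four-vectors, not the paper's data) shows the computation m² = (E₁+E₂)² − |p₁+p₂|² in natural units:

```python
import math

def dijet_mass(jet1, jet2):
    """Invariant mass of a two-jet system from (E, px, py, pz)
    four-vectors, m^2 = (E1+E2)^2 - |p1+p2|^2 (natural units, GeV)."""
    E = jet1[0] + jet2[0]
    px = jet1[1] + jet2[1]
    py = jet1[2] + jet2[2]
    pz = jet1[3] + jet2[3]
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))  # clamp tiny negative round-off

# back-to-back massless jets of 40 GeV each -> m = 80 GeV,
# i.e. in the W/Z resonance region probed by the analysis
print(dijet_mass((40.0, 40.0, 0.0, 0.0), (40.0, -40.0, 0.0, 0.0)))  # 80.0
```

A W→qq̄ or Z→qq̄ decay shows up as a peak in this distribution over the smooth W+jets background.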
Landsat Image Map Production Methods at the U. S. Geological Survey
Kidwell, R.D.; Binnie, D.R.; Martin, S.
1987-01-01
To maintain consistently high quality in satellite image map production, the U. S. Geological Survey (USGS) has developed standard procedures for the photographic and digital production of Landsat image mosaics, and for lithographic printing of multispectral imagery. This paper gives a brief review of the photographic, digital, and lithographic procedures currently in use for producing image maps from Landsat data. It is shown that consistency in the printing of image maps is achieved by standardizing the materials and procedures that affect the image detail and color balance of the final product. Densitometric standards are established by printing control targets using the pressplates, inks, pre-press proofs, and paper to be used for printing.
Saito, Moemi; Nodate, Yoshitada; Maruyama, Keiji; Tsuchiya, Masao; Watanabe, Machiko; Niwa, Sin-ichi
2012-01-01
We established a practical training program to nurture pharmacists who can give smoking cessation instructions. The program was provided to 85 interns (45 males and 40 females) in Teikyo University Hospital. The one-day practical training was provided to groups comprised of five members each. The training consisted of studies on the adverse effects of smoking, general outlines of the outpatient smoking cessation service, hands-on experience with a Smokerlyzer, studies about smoking-cessation drugs, studies about a smoking cessation therapy using cognitive-behavioral therapy and motivational interviewing, and case studies applying role-playing. Before and after the practical training, we conducted a questionnaire survey consisting of The Kano Test for Social Nicotine Dependence (KTSND) and the assessment of the smoking status, changes in attitudes to smoking, and willingness and confidence to give smoking cessation instructions. The overall KTSND score significantly dropped from 14.1±4.8 before the training to 8.9±4.8 after the training. The confidence to give smoking cessation instructions significantly increased from 3.4±1.9 to 6.2±1.3. Regarding the correlation between the smoking status and willingness and confidence to give smoking cessation instructions, the willingness and confidence were lower among the group of interns who either smoked or had smoked previously, suggesting that smoking had an adverse effect. A total of 88.2% of the interns answered that their attitudes to smoking had "changed slightly" or "changed" as a result of the training, indicating changes in their attitudes to smoking. Given the above, we believe that our newly established smoking cessation instruction training is a useful educational tool.
Beam profile measurements for target designators
NASA Astrophysics Data System (ADS)
Frank, J. D.
1985-02-01
An American aerospace company has conducted a number of investigations aimed at improving on the tedious, slow manual methods of measuring pulsed lasers for rangefinders, giving particular attention to beam divergence, which is studied by varying aperture sizes and positions in the laser beam path. Three instruments have been developed to make the work easier to perform. One of these, the Automatic Laser Instrumentation and Measurement System (ALIMS), consists of an optical bench, a digital computer, and three bays of associated electronic instruments. ALIMS uses the aperture method to measure laser beam alignment and divergence. The Laser Intensity Profile System (LIPS) consists of a covered optical bench and a two-bay electronic equipment and control console. The Automatic Laser Test Set (ALTS) uses a 50 x 50 silicon photodiode array to characterize military laser systems automatically. Details of the measurements are discussed.
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
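The paper's core assumption, that the quiet-Sun centre-to-limb variation (CLV) is time-invariant, suggests a simple calibration recipe. The sketch below is a heavily simplified 1D illustration (synthetic data and a per-bin median profile are assumptions, not the authors' pipeline): estimate the CLV as the median intensity per radial bin, then divide it out:

```python
def clv_normalize(samples, nbins=10):
    """samples: list of (r, intensity) with r in [0, 1) from disc centre
    to limb.  Estimate the centre-to-limb profile as the per-bin median
    (quiet-Sun dominated), then return intensities divided by it, so
    quiet regions sit near 1 and plage/network stand out as contrast."""
    bins = [[] for _ in range(nbins)]
    for r, intensity in samples:
        bins[min(int(r * nbins), nbins - 1)].append(intensity)
    profile = [sorted(b)[len(b) // 2] if b else 1.0 for b in bins]
    return [(r, intensity / profile[min(int(r * nbins), nbins - 1)])
            for r, intensity in samples]

# synthetic quiet Sun with linear limb darkening I(r) = 1 - 0.6 r
samples = [(r / 200.0, 1.0 - 0.6 * r / 200.0) for r in range(200)]
flat = clv_normalize(samples)
print(min(i for _, i in flat), max(i for _, i in flat))  # both near 1
```

A real implementation would work on 2D images, fit a smooth profile rather than binned medians, and handle the photographic calibration curve; this only illustrates the CLV-removal step.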
Ashcraft, Adam; Fernández-Val, Iván; Lang, Kevin
2012-01-01
Miscarriage, even if biologically random, is not socially random. Willingness to abort reduces miscarriage risk. Because abortions are favorably selected among pregnant teens, those miscarrying are less favorably selected than those giving birth or aborting but more favorably selected than those giving birth. Therefore, using miscarriage as an instrument is biased towards a benign view of teen motherhood while OLS on just those giving birth or miscarrying has the opposite bias. We derive a consistent estimator that reduces to a weighted average of OLS and IV when outcomes are independent of abortion timing. Estimated effects are generally adverse but modest. PMID:24443589
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-01-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies. PMID:26705505
Towards Interactive Construction of Topical Hierarchy: A Recursive Tensor Decomposition Approach.
Wang, Chi; Liu, Xueqing; Song, Yanglei; Han, Jiawei
2015-08-01
Automatic construction of user-desired topical hierarchies over large volumes of text data is a highly desirable but challenging task. This study proposes to give users freedom to construct topical hierarchies via interactive operations such as expanding a branch and merging several branches. Existing hierarchical topic modeling techniques are inadequate for this purpose because (1) they cannot consistently preserve the topics when the hierarchy structure is modified; and (2) the slow inference prevents swift response to user requests. In this study, we propose a novel method, called STROD, that allows efficient and consistent modification of topic hierarchies, based on a recursive generative model and a scalable tensor decomposition inference algorithm with theoretical performance guarantee. Empirical evaluation shows that STROD reduces the runtime of construction by several orders of magnitude, while generating consistent and quality hierarchies.
Automated 3D Ultrasound Image Segmentation to Aid Breast Cancer Image Interpretation
Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Yuan, Jie; Wang, Xueding; Carson, Paul L.
2015-01-01
Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer. PMID:26547117
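The "overlap ratio" used for the tissue-volume comparison is not defined in the abstract; a common choice, assumed here for illustration, is intersection over union between the two segmentations:

```python
def overlap_ratio(mask_a, mask_b):
    """Volume overlap between two segmentations given as sets of voxel
    coordinates: |A ∩ B| / |A ∪ B| (a Jaccard-style index; the paper
    does not spell out its exact definition, so this is one common
    choice)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return len(a & b) / len(a | b)

# two 3x3 single-slice blobs shifted by one voxel overlap in 6 of 12 voxels
a = {(x, y, 0) for x in range(3) for y in range(3)}
b = {(x, y, 0) for x in range(1, 4) for y in range(3)}
print(overlap_ratio(a, b))  # 0.5
```

The Dice coefficient 2|A∩B|/(|A|+|B|) is the other common variant; either monotonically ranks the same pairs of segmentations.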
Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer
NASA Astrophysics Data System (ADS)
Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.
2016-04-01
Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
Wellskins and slug tests: where's the bias?
NASA Astrophysics Data System (ADS)
Rovey, C. W.; Niemann, W. L.
2001-03-01
Pumping tests in an outwash sand at the Camp Dodge Site give hydraulic conductivities (K) approximately seven times greater than conventional slug tests in the same wells. To determine if this difference is caused by skin bias, we slug tested three sets of wells, each in a progressively greater stage of development. Results were analyzed with both the conventional Bouwer-Rice method and the deconvolution method, which quantifies the skin and eliminates its effects. In 12 undeveloped wells the average skin is +4.0, causing underestimation of conventional slug-test K (Bouwer-Rice method) by approximately a factor of 2 relative to the deconvolution method. In seven nominally developed wells the skin averages just +0.34, and the Bouwer-Rice method gives K within 10% of that calculated with the deconvolution method. The Bouwer-Rice K in this group is also within 5% of that measured by natural-gradient tracer tests at the same site. In 12 intensely developed wells the average skin is <-0.82, consistent with an average skin of -1.7 measured during single-well pumping tests. At this site the maximum possible skin bias is much smaller than the difference between slug and pumping-test Ks. Moreover, the difference in K persists even in intensely developed wells with negative skins. Therefore, positive wellskins do not cause the difference in K between pumping and slug tests at this site.
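The conventional Bouwer-Rice estimate referred to above can be sketched from its standard formula (quoted from the general slug-test literature, not from this paper; the effective radius Re is normally read from the Bouwer-Rice empirical curves and is taken as given here, and the numbers below are purely illustrative):

```python
import math

def bouwer_rice_K(rc, Re, rw, Le, t, y0, yt):
    """Bouwer-Rice slug-test hydraulic conductivity,
        K = rc^2 ln(Re/rw) / (2 Le t) * ln(y0/yt),
    with rc casing radius, rw well radius, Le screen length, Re the
    effective radius of influence (from the Bouwer-Rice curves), and
    y0, yt the head displacements at times 0 and t."""
    return rc ** 2 * math.log(Re / rw) / (2.0 * Le * t) * math.log(y0 / yt)

# illustrative values in metres and seconds: a 10x slower head recovery
# (same displacement ratio over 10x the time) implies 10x smaller K
k_fast = bouwer_rice_K(rc=0.025, Re=3.0, rw=0.05, Le=1.5, t=60.0, y0=1.0, yt=0.5)
k_slow = bouwer_rice_K(rc=0.025, Re=3.0, rw=0.05, Le=1.5, t=600.0, y0=1.0, yt=0.5)
print(k_fast / k_slow)  # 10.0
```

A positive skin slows the apparent recovery, which is why it biases this estimate low relative to the deconvolution method discussed in the abstract.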
Parallel consistent labeling algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, A.; Henderson, T.
Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, the authors give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency and show that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
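For readers unfamiliar with the sequential baseline these parallel algorithms improve on, here is a minimal sketch of arc consistency in the style of Mackworth's classic AC-3 (a standard textbook formulation, not the paper's parallel algorithms): repeatedly prune values with no support and re-examine affected arcs.

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency, AC-3 style.
    domains: {var: set(values)} (mutated in place);
    constraints: {(x, y): predicate(vx, vy)} for each directed arc."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        revised = False
        for vx in set(domains[x]):
            # remove vx if no value of y supports it
            if not any(pred(vx, vy) for vy in domains[y]):
                domains[x].discard(vx)
                revised = True
        if revised:
            # re-examine every arc pointing into x
            for (a, b) in constraints:
                if b == x:
                    queue.append((a, b))
    return domains

# toy labeling problem: X < Y with both domains {1, 2, 3}
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
ac3(doms, {("X", "Y"): lambda a, b: a < b,
           ("Y", "X"): lambda a, b: a > b})
print(doms)  # X loses 3, Y loses 1
```

The parallel versions studied in the paper distribute the per-arc revision step, but the O(na) bound quoted above limits how much the sequential re-examination loop can be compressed.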
Observational determination of the greenhouse effect
NASA Technical Reports Server (NTRS)
Raval, A.; Ramanathan, V.
1989-01-01
Satellite measurements are used to quantify the atmospheric greenhouse effect, defined here as the infrared radiation energy trapped by atmospheric gases and clouds. The greenhouse effect is found to increase significantly with sea surface temperature. The rate of increase gives compelling evidence for the positive feedback between surface temperature, water vapor and the greenhouse effect; the magnitude of the feedback is consistent with that predicted by climate models. This study demonstrates an effective method for directly monitoring, from space, future changes in the greenhouse effect.
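The greenhouse effect as defined in this abstract, infrared energy trapped by the atmosphere, is the difference between what the surface emits and what escapes to space. A minimal sketch (the round input numbers are illustrative, not the paper's data):

```python
# G = sigma * Ts^4 - OLR: surface blackbody emission minus the outgoing
# longwave flux measured by satellite at the top of the atmosphere.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def greenhouse_effect(sst_kelvin, olr):
    """Trapped infrared flux in W m^-2 for a given sea surface
    temperature (K) and outgoing longwave radiation (W m^-2)."""
    return SIGMA * sst_kelvin ** 4 - olr

# a warmer sea surface with only modestly higher OLR traps more flux,
# the behaviour whose rate of increase the paper uses as evidence of
# positive water-vapour feedback
print(greenhouse_effect(288.0, 240.0))  # ~150 W m^-2
print(greenhouse_effect(300.0, 250.0))  # ~209 W m^-2
```

The paper's key quantity is how fast G rises with sea surface temperature, compared against the rise expected from climate-model water-vapour feedback.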
NASA Astrophysics Data System (ADS)
Chen, De-You; Jiang, Qing-Quan; Yang, Shu-Zheng
2007-12-01
Applying Parikh's semi-classical quantum tunneling method, the tunneling radiation characteristic of charged particles from the event horizon of the Reissner-Nordström anti-de Sitter black hole is investigated. The result shows that the derived spectrum is not a purely thermal one but is consistent with the underlying unitary theory, which offers a possible explanation of the information loss paradox and is a correct amendment to the Hawking radiation.
A set of devices for Mechanics Laboratory assisted by a Computer
NASA Astrophysics Data System (ADS)
Rusu, Alexandru; Pirtac, Constantin
2015-12-01
The booklet gives a description of a set of devices designed for unified work on a number of laboratory exercises in mechanics for students at technical universities. It consists of a clock, interfaced to a computer, which allows times to be measured with an error no greater than 0.0001 s. It also supports calculation of the physical quantities measured in the experiment and compilation of the final report. The least-squares method is used throughout the workshop.
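The least-squares reduction mentioned above can be sketched in a few lines. The cart-on-a-track scenario and the timer readings below are illustrative assumptions, not taken from the booklet:

```python
def least_squares(xs, ys):
    """Ordinary least-squares fit y = a*x + b via the closed-form
    normal equations, the standard reduction for lab timing data."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# hypothetical timer readings (s) vs. marker positions (m) for a cart
# moving at roughly 0.5 m/s
times = [0.0, 1.0003, 2.0001, 2.9998, 4.0002]
dists = [0.00, 0.50, 1.00, 1.50, 2.00]
slope, intercept = least_squares(times, dists)
print(round(slope, 3), round(intercept, 3))  # slope ~0.5 m/s, intercept ~0
```

With the clock's 0.0001 s resolution, the residual scatter about the fitted line is dominated by the mechanical setup rather than the timing.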
Reconstruction of Building Outlines in Dense Urban Areas Based on LIDAR Data and Address Points
NASA Astrophysics Data System (ADS)
Jarzabek-Rychard, M.
2012-07-01
The paper presents a comprehensive method for automated extraction and delineation of building outlines in densely built-up areas. A novel aspect of the outline reconstruction is the use of geocoded building address points: they give information about building location and thus greatly reduce the task complexity. The reconstruction process is executed on 3D point clouds acquired by an airborne laser scanner. The method consists of three steps: building detection, delineation, and contour refinement. The algorithm is tested against a data set covering an old market town and its surroundings. The results are discussed and evaluated by comparison with reference cadastral data.
Study of Y and Lu iron garnets using Bethe-Peierls-Weiss method
NASA Astrophysics Data System (ADS)
Goveas, Neena; Mukhopadhyay, G.; Mukhopadhyay, P.
1994-11-01
We study here the magnetic properties of Y- and Lu-iron garnets using the Bethe-Peierls-Weiss method, modified to suit complex systems like these garnets. We consider these garnets as described by a Heisenberg Hamiltonian with two sublattices (a, d) and determine the exchange interaction parameters Jad, Jaa and Jdd by matching the experimental susceptibility curves. We find Jaa and Jdd to be much smaller than those determined by Néel theory, and consistent with those obtained from the study of spin-wave spectra; the spin-wave dispersion relation constant obtained using these parameters gives good agreement with the experimental values.
A unified Fourier theory for time-of-flight PET data
Li, Yusheng; Matej, Samuel; Metzler, Scott D
2016-01-01
Fully 3D time-of-flight (TOF) PET scanners offer the potential of previously unachievable image quality in clinical PET imaging. TOF measurements add another degree of redundancy for cylindrical PET scanners and make photon-limited TOF-PET imaging more robust than non-TOF PET imaging. The data space for 3D TOF-PET data is five-dimensional with two degrees of redundancy. Previously, consistency equations were used to characterize the redundancy of TOF-PET data. In this paper, we first derive two Fourier consistency equations and Fourier-John equation for 3D TOF PET based on the generalized projection-slice theorem; the three partial differential equations (PDEs) are the dual of the sinogram consistency equations and John's equation. We then solve the three PDEs using the method of characteristics. The two degrees of entangled redundancy of the TOF-PET data can be explicitly elicited and exploited by the solutions of the PDEs along the characteristic curves, which gives a complete understanding of the rich structure of the 3D X-ray transform with TOF measurement. Fourier rebinning equations and other mapping equations among different types of PET data are special cases of the general solutions. We also obtain new Fourier rebinning and consistency equations (FORCEs) from other special cases of the general solutions, and thus we obtain a complete scheme to convert among different types of PET data: 3D TOF, 3D non-TOF, 2D TOF and 2D non-TOF data. The new FORCEs can be used as new Fourier-based rebinning algorithms for TOF-PET data reduction, inverse rebinnings for designing fast projectors, or consistency conditions for estimating missing data. Further, we give a geometric interpretation of the general solutions—the two families of characteristic curves can be obtained by respectively changing the azimuthal and co-polar angles of the biorthogonal coordinates in Fourier space. 
We conclude the unified Fourier theory by showing that the Fourier consistency equations are necessary and sufficient for 3D X-ray transform with TOF measurement. Finally, we give numerical examples of inverse rebinning for a 3D TOF PET and Fourier-based rebinning for a 2D TOF PET using the FORCEs to show the efficacy of the unified Fourier solutions. PMID:26689836
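For readers unfamiliar with the projection-slice machinery the derivation builds on, the classical non-TOF 2D special case can be written as follows; the paper's generalized theorem extends this relation to the five-dimensional TOF data space:

```latex
% X-ray transform of f along direction \theta = (\cos\phi, \sin\phi):
p(s,\phi) = \int_{\mathbb{R}} f\!\left(s\,\theta^{\perp} + t\,\theta\right)\mathrm{d}t
% Projection-slice theorem: the 1D Fourier transform of p with respect
% to s equals a central slice of the 2D Fourier transform of f:
\hat{p}(\omega,\phi) = \hat{f}\!\left(\omega\,\theta^{\perp}\right)
```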
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeRosier, R.; Waterland, L.R.
1987-03-01
The report gives emission results from field tests of a wood-waste-fired industrial firetube boiler. Emission measurements included: continuous monitoring of flue gas emissions; source assessment sampling system (SASS) sampling of the flue gas, with subsequent laboratory analysis of samples to give total flue gas organics in two boiling-point ranges, compound category information within these ranges, specific quantitation of the semivolatile organic priority pollutants, and flue gas concentrations of 65 trace elements; Method 5 sampling for particulates; controlled condensation system (CCS) sampling for SO2 and SO3; and grab sampling of boiler bottom ash for trace element content determinations. Total organic emissions from the boiler were 5.7 mg/dscm, about 90% of which consisted of volatile compounds.
Efficient generation of 3D hologram for American Sign Language using look-up table
NASA Astrophysics Data System (ADS)
Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo
2010-02-01
American Sign Language (ASL) is one of the languages giving the greatest help for communication of hearing-impaired persons. Current 2-D broadcasting and 2-D movies use ASL to give information, help viewers understand the situation of a scene, and translate foreign languages. ASL will not disappear in future three-dimensional (3-D) broadcasting or 3-D movies because of its usefulness. Meanwhile, several approaches for the generation of CGH patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method. However, these methods have drawbacks: they require much computation time or a huge memory size for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and without loss of computational speed. We therefore propose a method to efficiently generate holographic ASL in holographic 3DTV or 3-D movies using the look-up table method. The proposed method largely consists of five steps: construction of the LUT for each ASL image, extraction of characters from scripts or the situation, retrieval of the fringe patterns for those characters from the ASL LUT, composition of the hologram pattern for the 3-D video with the hologram pattern for the ASL, and reconstruction of the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for the efficient generation of CGH patterns for ASL.
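The core of any LUT-based CGH scheme is that a fringe pattern is precomputed once per depth plane and then shifted and summed for each object point, instead of recomputing the diffraction integral per point. The following is a minimal sketch of that idea, assuming a simple cosine Fresnel-zone fringe and circular shifting; the grid size, wavelength and pixel pitch are illustrative, not the paper's parameters:

```python
import math

N = 32            # hologram resolution (illustrative)
WAVELEN = 0.5e-6  # assumed wavelength [m]
PIX = 10e-6       # assumed pixel pitch [m]

def unit_fringe(depth):
    """Cosine Fresnel-zone fringe of a unit point at a given depth,
    centered on the grid; computed once and stored in the LUT."""
    k = 2 * math.pi / WAVELEN
    return [[math.cos(k * ((ix - N // 2) ** 2 + (iy - N // 2) ** 2)
                      * PIX ** 2 / (2 * depth))
             for ix in range(N)] for iy in range(N)]

def compose(points, lut):
    """Build a hologram by shifting the stored fringe to each object
    point (px, py in pixels, depth as LUT key, amp) and accumulating."""
    holo = [[0.0] * N for _ in range(N)]
    for px, py, depth, amp in points:
        fringe = lut[depth]
        for iy in range(N):
            for ix in range(N):
                holo[iy][ix] += amp * fringe[(iy - py) % N][(ix - px) % N]
    return holo
```

A table of such fringes, one per ASL character image, is what makes the retrieval-and-composition step in the abstract fast.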
Rodriguez-Falces, Javier
2013-12-01
In electrophysiology studies, it is becoming increasingly common to explain experimental observations using both descriptive methods and quantitative approaches. However, some electrophysiological phenomena, such as the generation of extracellular potentials that results from the propagation of the excitation source along the muscle fiber, are difficult to describe and conceptualize. In addition, most traditional approaches aimed at describing extracellular potentials consist of complex mathematical machinery that gives no chance for physical interpretation. The aim of the present study is to present a new method to teach the formation of extracellular potentials around a muscle fiber from both a descriptive and quantitative perspective. The implementation of this method was tested through a written exam and a satisfaction survey. The new method enhanced the ability of students to visualize the generation of bioelectrical potentials. In addition, the new approach improved students' understanding of how changes in the fiber-to-electrode distance and in the shape of the excitation source are translated into changes in the extracellular potential. The survey results show that combining general principles of electrical fields with accurate graphic imagery gives students an intuitive, yet quantitative, feel for electrophysiological signals and enhances their motivation to continue their studies in the biomedical engineering field.
Combining volumetric edge display and multiview display for expression of natural 3D images
NASA Astrophysics Data System (ADS)
Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki
2006-02-01
In the present paper the authors present a novel stereoscopic display method combining volumetric edge display technology and multiview display technology to realize the presentation of natural 3D images, where viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, while we adopt a stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge part of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give us robust depth values for the pixels which constitute noticeable edges. Also, occlusion and gloss of the objects can be roughly expressed with the proposed method, since we use the stereoscopic approach for the flat areas. With this system, many users can view natural 3D objects at a consistent position and posture at the same time. A simple optometric experiment using a refractometer suggests that the proposed method can give us 3-D images without contradiction between binocular convergence and focal accommodation.
Methods of identification employing antibody profiles
Francoeur, Ann-Michele
1993-12-14
An identification method, applicable to the identification of animals or inanimate objects, is described. The method takes advantage of the set of individual-specific antibodies that are part of the unique antibody repertoire present in animals, by reacting an effective amount of such antibodies with a particular panel, or n-dimensional array (where n is typically one or two), consisting of an effective amount of many different antigens (typically greater than one thousand), to give antibody-antigen complexes. The profile or pattern formed by the antigen-antibody complexes, termed an antibody fingerprint, when revealed by an effective amount of an appropriate detector molecule, is uniquely representative of a particular individual. The method can similarly be used to distinguish genetically or otherwise similar individuals, or their body parts containing individual-specific antibodies.
Zhang, Guozhu; Xie, Changsheng; Zhang, Shunping; Zhao, Jianwei; Lei, Tao; Zeng, Dawen
2014-09-08
A combinatorial high-throughput temperature-programmed method to obtain the optimal operating temperature (OOT) of gas sensor materials is demonstrated here for the first time. A material library consisting of SnO2, ZnO, WO3, and In2O3 sensor films was fabricated by screen printing. Temperature-dependent conductivity curves were obtained by scanning this gas sensor library from 300 to 700 K in different atmospheres (dry air, formaldehyde, carbon monoxide, nitrogen dioxide, toluene and ammonia), giving the OOT of each sensor formulation as a function of the carrier and analyte gases. A comparative study of the temperature-programmed method and a conventional method showed good agreement in measured OOT.
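The screening step described above reduces, for each sensor formulation and gas, to locating the temperature at which the measured response peaks. A minimal sketch of that reduction (synthetic data, illustrative names; the actual OOT extraction from conductivity curves involves baseline comparison against the carrier gas):

```python
def optimal_operating_temperature(temps, responses):
    """Return the temperature at which the sensor response is maximal.

    temps: temperatures scanned (e.g. 300-700 K), responses: the
    corresponding sensor response at each temperature.
    """
    best = max(range(len(temps)), key=lambda i: responses[i])
    return temps[best]
```

Applied per film and per analyte gas, this yields the OOT map of the whole material library in a single temperature scan.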
Evaluation of aircraft crash hazard at Los Alamos National Laboratory facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selvage, R.D.
This report selects a method for use in calculating the frequency of an aircraft crash occurring at selected facilities at the Los Alamos National Laboratory (the Laboratory). The Solomon method was chosen to determine these probabilities. Each variable in the Solomon method is defined and a value for each variable is selected for fourteen facilities at the Laboratory. These values and calculated probabilities are to be used in all safety analysis reports and hazards analyses for the facilities addressed in this report. This report also gives detailed directions to perform aircraft-crash frequency calculations for other facilities. This will ensure that future aircraft-crash frequency calculations are consistent with calculations in this report.
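Point-target crash-frequency models of this family generally multiply an operations count, a per-operation crash probability, and a geometric exposure fraction. The sketch below shows only that generic structure; the Solomon method defines its own specific variables and values in the report, and the parameter names here are purely illustrative:

```python
def crash_frequency(n_operations, crash_rate_per_op,
                    effective_area, crash_zone_area):
    """Generic point-target model: expected crashes per year equal
    operations/year times crash probability per operation times the
    fraction of the crash-impact zone occupied by the facility.
    Names are illustrative, not the Solomon method's variables."""
    return n_operations * crash_rate_per_op * (effective_area / crash_zone_area)
```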
A time accurate finite volume high resolution scheme for three dimensional Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Hsu, Andrew T.
1989-01-01
A time-accurate, three-dimensional, finite volume, high resolution scheme for solving the compressible full Navier-Stokes equations is presented. The present derivation is based on the upwind split formulas, specifically with the application of Roe's (1981) flux difference splitting. A high-order accurate (up to third order) upwind interpolation formula for the inviscid terms is derived to account for nonuniform meshes. For the viscous terms, discretizations consistent with the finite volume concept are described. A variant of a second-order time-accurate method is proposed that utilizes identical procedures in both the predictor and corrector steps. Avoiding the definition of a midpoint gives a consistent and easy procedure, in the framework of finite volume discretization, for treating viscous transport terms in curvilinear coordinates. For the boundary cells, a new treatment is introduced that not only avoids the use of 'ghost cells' and the associated problems, but also satisfies the tangency conditions exactly and allows easy definition of viscous transport terms at the first interface next to the boundary cells. Numerical tests of steady and unsteady high speed flows show that the present scheme gives accurate solutions.
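As a point of reference for the high-order upwind interpolation mentioned above, the standard textbook MUSCL-type formula for the left state at interface i+1/2 on a uniform mesh reads as below; with kappa = 1/3 it is third-order accurate. This is the generic uniform-mesh formula, not the paper's nonuniform-mesh derivation:

```python
def upwind_interface(u_m1, u_0, u_p1, kappa=1/3):
    """MUSCL-type upwind interpolation of the left state at i+1/2
    from cell averages u_{i-1}, u_i, u_{i+1}; kappa = 1/3 gives
    third-order accuracy on uniform meshes (generic formula)."""
    return u_0 + 0.25 * ((1 - kappa) * (u_0 - u_m1)
                         + (1 + kappa) * (u_p1 - u_0))
```

For linearly varying data the formula recovers the exact midpoint value, which is a quick sanity check on any implementation.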
Lattice operators for scattering of particles with spin
Prelovsek, S.; Skerbis, U.; Lang, C. B.
2017-01-30
We construct operators for simulating the scattering of two hadrons with spin on the lattice. Three methods are shown to give consistent operators for P N, P V, V N and N N scattering, where P, V and N denote pseudoscalar, vector and nucleon, respectively. Explicit expressions for the operators are given for all irreducible representations at the lowest two relative momenta. In the first method, each hadron has a good helicity. In the second method, the hadrons are in a certain partial wave L with total spin S. These enable physics interpretations of the operators obtained from the general projection method. The correct transformation properties of the operators in all three methods are proven. Lastly, the total momentum of the two hadrons is restricted to zero, since parity is a good quantum number in this case.
NASA Technical Reports Server (NTRS)
Waller, Jess M.; Saulsberry, Regor L.; Lucero, Ralph; Nichols, Charles T.; Wentzel, Daniel J.
2010-01-01
ASTM-based ILH methods were found to give a reproducible, quantitative estimate of the stress threshold at which significant accumulated damage begins to occur: (a) FR events are low energy (<2 V(exp 20 microsec)); (b) FR events occur close to the observed failure locus; (c) FR events consist of more than 30% fiber breakage (>300 kHz); (d) FR events show a consistent hierarchy of cooperative damage for composite tow, and for the COPV tested, regardless of applied load. Application of ILH or related stress profiles could lead to robust pass/fail acceptance criteria based on the FR. Initial application of FR and FFT analysis of AE data acquired on COPVs is promising.
Asymptotic analysis of stability for prismatic solids under axial loads
NASA Astrophysics Data System (ADS)
Scherzinger, W.; Triantafyllidis, N.
1998-06-01
This work addresses the stability of axially loaded prismatic beams with any simply connected cross-section. The solids obey a general class of rate-independent constitutive laws and can sustain finite strains in either compression or tension. The proposed method is based on multiple-scale asymptotic analysis, and starts with the full Lagrangian formulation of the three-dimensional stability problem, where the boundary conditions are chosen to avoid the formation of boundary layers. The calculations proceed by taking the limit of the beam's slenderness parameter ε (ε² ≡ area/length²) going to zero, thus resulting in asymptotic expressions for the critical loads and modes. The analysis presents a consistent and unified treatment of both compressive (buckling) and tensile (necking) instabilities, and is carried out explicitly up to o(ε⁴) in each case. The present method circumvents the standard structural mechanics approach to the stability problem of beams, which requires the choice of displacement and stress field approximations in order to construct a nonlinear beam theory. Moreover, this work provides a consistent way to calculate the effect of the beam's slenderness on the critical load and mode to any order of accuracy required. In contrast, engineering theories give the lowest-order terms accurately (O(ε²), the Euler load, in compression, or O(1), the maximum load, in tension) but give only approximate next-higher-order terms, with the exception of simple section geometries for which exact stability results are available. The proposed method is used to calculate the critical loads and eigenmodes for bars of several different cross-sections (circular, square, cruciform and L-shaped). Elastic beams are considered in compression and elastoplastic beams in tension. The O(ε²) and O(ε⁴) asymptotic results are compared to exact finite element calculations for the corresponding three-dimensional prismatic solids.
The O(ε⁴) results give a significant improvement over the O(ε²) results, even for extremely stubby beams, and in particular for cross-sections with commensurate dimensions.
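The structure of the expansion can be made concrete by recalling the classical lowest-order compressive result it recovers. The Euler load below is standard; the series form of the critical load is schematic, with coefficients determined by the asymptotic analysis:

```latex
% Classical Euler buckling load for a column of length L, Young's
% modulus E and minimum second moment of area I (the O(\varepsilon^2)
% compressive result recovered by the expansion):
P_E = \frac{\pi^2 E I}{L^2}
% Schematic form of the asymptotic critical load in the slenderness
% parameter \varepsilon, \varepsilon^2 \equiv \text{area}/\text{length}^2:
\lambda_{cr} = \lambda_2\,\varepsilon^2 + \lambda_4\,\varepsilon^4 + o(\varepsilon^4)
```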
A Rapid Method Combining Golgi and Nissl Staining to Study Neuronal Morphology and Cytoarchitecture
Pilati, Nadia; Barker, Matthew; Panteleimonitis, Sofoklis; Donga, Revers; Hamann, Martine
2008-01-01
The Golgi silver impregnation technique gives detailed information on neuronal morphology of the few neurons it labels, whereas the majority remain unstained. In contrast, the Nissl staining technique allows for consistent labeling of the whole neuronal population but gives very limited information on neuronal morphology. Most studies characterizing neuronal cell types in the context of their distribution within the tissue slice tend to use the Golgi silver impregnation technique for neuronal morphology followed by deimpregnation as a prerequisite for showing that neuron's histological location by subsequent Nissl staining. Here, we describe a rapid method combining Golgi silver impregnation with cresyl violet staining that provides a useful and simple approach to combining cellular morphology with cytoarchitecture without the need for deimpregnating the tissue. Our method allowed us to identify neurons of the facial nucleus and the supratrigeminal nucleus, as well as assessing cellular distribution within layers of the dorsal cochlear nucleus. With this method, we also have been able to directly compare morphological characteristics of neuronal somata at the dorsal cochlear nucleus when labeled with cresyl violet with those obtained with the Golgi method, and we found that cresyl violet–labeled cell bodies appear smaller at high cellular densities. Our observation suggests that cresyl violet staining is inadequate to quantify differences in soma sizes. (J Histochem Cytochem 56:539–550, 2008) PMID:18285350
Automating annotation of information-giving for analysis of clinical conversation.
Mayfield, Elijah; Laws, M Barton; Wilson, Ira B; Penstein Rosé, Carolyn
2014-02-01
Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters. The data were transcripts of 415 routine outpatient visits of HIV patients which had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and requesting, then trained the machine to automatically annotate using logistic regression classification. We evaluated reliability by per-speech act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys using the patient and provider information-giving to information-requesting ratio (briefly, information-giving ratio) and patient gender. Automated coding produces moderate reliability with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human prediction of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344). The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
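The agreement statistics reported above (accuracy and Cohen's kappa between human and machine coding) are straightforward to compute from paired label sequences. A minimal sketch, assuming two equally long label lists and at least some disagreement so that the kappa denominator is nonzero:

```python
def accuracy_and_kappa(human, machine):
    """Raw agreement and Cohen's kappa between two coders' labels."""
    n = len(human)
    po = sum(h == m for h, m in zip(human, machine)) / n  # observed agreement
    labels = set(human) | set(machine)
    # chance agreement from each coder's marginal label frequencies
    pe = sum((human.count(c) / n) * (machine.count(c) / n) for c in labels)
    return po, (po - pe) / (1 - pe)
```

Kappa corrects the raw accuracy for agreement expected by chance, which is why the paper reports both (71.2% accuracy, kappa = 0.57).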
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background Nowadays, combining the different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: firstly, the right kernel is chosen for each data set; secondly, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
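The two basic steps named in the abstract can be illustrated on Gram matrices directly: a weighted sum of per-source kernels is itself a valid kernel, and double-centering the combined matrix is the standard preprocessing before extracting principal components in feature space. A minimal sketch with illustrative names (real pipelines would follow with an eigendecomposition of the centered matrix):

```python
def combine_kernels(kernels, weights):
    """Weighted sum of Gram matrices, one per data source (step two
    of kernel-based integration); the result is again a valid kernel."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

def center_kernel(K):
    """Double-center a symmetric Gram matrix, as required before
    kernel PCA: subtract row and column means, add the grand mean."""
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]
```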
Hedlund, Ann; Ateg, Mattias; Andersson, Ing-Marie; Rosén, Gunnar
2010-04-01
Workers' motivation to actively take part in improvements to the work environment is assumed to be important for the efficiency of investments for that purpose. That gives rise to the need for a tool to measure this motivation. A questionnaire to measure motivation for improvements to the work environment has been designed. Internal consistency and test-retest reliability of the domains of the questionnaire have been measured, and the factorial structure has been explored, from the answers of 113 employees. The internal consistency is high (0.94), as well as the correlation for the total score (0.84). Three factors are identified accounting for 61.6% of the total variance. The questionnaire can be a useful tool in improving intervention methods. The expectation is that the tool can be useful, particularly with the aim of improving efficiency of companies' investments for work environment improvements. Copyright 2010 Elsevier Ltd. All rights reserved.
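The internal-consistency figure of 0.94 quoted above is the kind of statistic Cronbach's alpha delivers; a minimal sketch of its computation from per-item respondent scores (generic formula, not the paper's analysis code):

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Cronbach's alpha: item_scores holds one list of respondent
    scores per questionnaire item (all lists equally long)."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum(variance(it) for it in item_scores)
                            / variance(totals))
```

Perfectly parallel items yield alpha = 1; values near 0.9 and above, as reported here, indicate high internal consistency.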
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Real RNA-Seq data set analyses, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are common. This result highlights the influence of the normalization step on the differential expression analysis. Real and simulated data set analyses give similar results, showing 3 different groups of procedures having the same behavior. The group including the novel method, named “Median Ratio Normalization” (MRN), gives the lowest number of false discoveries. Within this group the MRN method is less sensitive to the modification of parameters related to the relative size of transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
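The family of procedures MRN belongs to is built around median-of-ratios size factors. The sketch below shows that baseline computation (the DESeq-style scheme MRN refines; it is not the MRN method itself), assuming strictly positive counts so the geometric means are defined:

```python
import math

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def size_factors(counts):
    """counts: one list of per-gene read counts per sample.
    Each sample's size factor is the median, over genes, of its counts
    divided by the per-gene geometric mean across samples."""
    n_genes = len(counts[0])
    geo = [math.exp(sum(math.log(s[g]) for s in counts) / len(counts))
           for g in range(n_genes)]
    return [median([s[g] / geo[g] for g in range(n_genes)]) for s in counts]
```

Dividing each sample's counts by its size factor puts the samples on a common scale; a library sequenced twice as deep gets a size factor twice as large.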
The Three-Dimensional Expansion of the Ejecta from Tycho's Supernova Remnant
NASA Technical Reports Server (NTRS)
Williams, Brian J.; Coyle, Nina M.; Yamaguchi, Hiroya; Depasquale, Joseph; Seitenzahl, Ivo R.; Hewitt, John W.; Blondin, John M.; Borkowski, Kazimierz J.; Ghavamian, Parviz; Petre, Robert;
2017-01-01
We present the first 3D measurements of the velocity of various ejecta knots in Tycho's supernova remnant, known to result from a Type Ia explosion. Chandra X-ray observations over a 12 yr baseline from 2003 to 2015 allow us to measure the proper motion of nearly 60 tufts of Si-rich ejecta, giving us the velocity in the plane of the sky. For the line-of-sight velocity, we use two different methods: a nonequilibrium ionization model fit to the strong Si and S lines in the 1.2-2.8 keV regime, and a fit consisting of a series of Gaussian lines. These methods give consistent results, allowing us to determine the redshift or blueshift of each of the knots. Assuming a distance of 3.5 kpc, we find total velocities that range from 2400 to 6600 km/s, with a mean of 4430 km/s. We find several regions where the ejecta knots have overtaken the forward shock. These regions have proper motions in excess of 6000 km/s. Some SN Ia explosion models predict a velocity asymmetry in the ejecta. We find no such velocity asymmetries in Tycho, and we discuss our findings in light of various explosion models, favoring those delayed-detonation models with relatively vigorous and symmetrical deflagrations. Finally, we compare measurements with models of the remnant's evolution that include both smooth and clumpy ejecta profiles, finding that both ejecta profiles can be accommodated by the observations.
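Combining the two measured components into a total space velocity is a simple quadrature sum, with the proper motion converted to a transverse velocity at the assumed distance via the standard factor 4.74 km/s per (arcsec/yr x pc). A minimal sketch (this is the generic conversion, not the paper's fitting pipeline):

```python
import math

KMS_PER_ARCSEC_PC = 4.74  # km/s for 1 arcsec/yr at 1 pc (1 AU/yr)

def total_velocity(mu_arcsec_yr, distance_pc, v_los_kms):
    """Total space velocity from a proper motion mu (arcsec/yr),
    an assumed distance (pc), and a line-of-sight velocity (km/s)."""
    v_sky = KMS_PER_ARCSEC_PC * mu_arcsec_yr * distance_pc
    return math.hypot(v_sky, v_los_kms)
```

The assumed 3.5 kpc distance enters only through the plane-of-sky term, so the quoted 2400-6600 km/s range scales with the adopted distance for knots dominated by proper motion.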
Newton-Krylov-Schwarz: An implicit solver for CFD
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.
1995-01-01
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
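The "Jacobian-free manner, through directional differencing" mentioned above rests on one identity: the Krylov solver only ever needs Jacobian-vector products, and these can be approximated by a finite difference of the residual. A minimal sketch with illustrative names:

```python
def jacobian_free_matvec(F, u, v, eps=1e-7):
    """Approximate J(u)*v by directional differencing of the residual F,
    the core trick of Jacobian-free Newton-Krylov methods: the Krylov
    accelerator needs only J*v, never the Jacobian matrix itself."""
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - fu) / eps for fp, fu in zip(Fp, Fu)]
```

In the full NKS composition, each such matvec is preconditioned by an overlapping Schwarz solve built from local subdomain information.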
Vibro-acoustic performance of newly designed tram track structures
NASA Astrophysics Data System (ADS)
Haladin, Ivo; Lakušić, Stjepan; Ahac, Maja
2017-09-01
Rail vehicles, in interaction with the railway structure, induce vibrations that propagate to surrounding structures and cause noise disturbance in the surrounding areas. Since tram tracks in urban areas often share the running surface with road vehicles, one of the top priorities is to achieve a low-maintenance and long-lasting structure. The research conducted in the scope of this paper gives an overview of newly designed tram track structures designated for use on the Zagreb tram network, and of their performance in terms of noise and vibration mitigation. The research was conducted on a 150 m long test section consisting of three tram track types: the standard tram track structure commonly used on tram lines in Zagreb, a tram structure optimized for better noise and vibration mitigation, and a slab track with double sleepers embedded in a concrete slab, which represents an entirely new approach to tram track construction in Zagreb. The track has been instrumented with acceleration sensors, strain gauges and revision shafts for inspection. Relative deformations give an insight into the dynamic load distribution of the track structure through the exploitation period. The paper further describes the vibro-acoustic measurements conducted at the test site. To evaluate the track performance from the vibro-acoustical standpoint, the track decay rate has been analysed in detail. As opposed to the measurement technique using an impact hammer for track decay rate measurements, a newly developed measuring technique using vehicle pass-by vibrations as the source of excitation is proposed and analysed. The paper gives an overview of the method, its benefits compared to the standard method of track decay rate measurements, and an evaluation of the method based on noise measurements of vehicle pass-bys.
Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies
NASA Astrophysics Data System (ADS)
Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira
2015-12-01
The development of the internet and related technology has had a major impact, giving rise to a new kind of business called e-commerce. Many e-commerce sites facilitate transactions and also let consumers post reviews or opinions on the products they have purchased. These opinions are useful to both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyse the strengths and weaknesses of their own products as well as those of their competitors. With so many opinions available, a method is needed that lets a reader grasp the point of the opinions as a whole. This idea gave rise to review summarization, which summarizes overall opinion based on the sentiments and features the reviews contain. In this study, the domain of main focus is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion, 2) identifying the features of the product, 3) identifying whether an opinion is positive or negative, and 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification; a feature extraction algorithm based on dependency analysis, one of the tools of Natural Language Processing (NLP); and a knowledge-based dictionary, which is useful for handling implicit features. The end result is a summary of the consumer reviews organized by feature and sentiment. With the proposed method, sentiment classification accuracy reaches 81.2% on positive test data and 80.2% on negative test data, and feature extraction accuracy reaches 90.3%.
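The sentiment-classification step above can be illustrated with a minimal multinomial Naive Bayes sketch. This is not the authors' implementation: the toy camera reviews, the Laplace smoothing over a shared vocabulary, and all function names are assumptions for illustration only.

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (token_list, label) pairs.
    Returns class priors, per-class token counts, and the shared vocabulary."""
    priors = Counter(label for _, label in docs)
    counts = {label: Counter() for label in priors}
    for tokens, label in docs:
        counts[label].update(tokens)
    vocab = set(t for tokens, _ in docs for t in tokens)
    return priors, counts, vocab

def classify_nb(tokens, priors, counts, vocab):
    """Return the label maximizing log P(label) + sum log P(token|label)."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label, prior in priors.items():
        n = sum(counts[label].values())
        lp = math.log(prior / total)
        for t in tokens:
            # Laplace (add-one) smoothing over the shared vocabulary
            lp += math.log((counts[label][t] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Trained on a few labelled token lists, such a classifier assigns a review like "great lens" to the positive class.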
3D Visualization of Urban Area Using Lidar Technology and CityGML
NASA Astrophysics Data System (ADS)
Popovic, Dragana; Govedarica, Miro; Jovanovic, Dusan; Radulovic, Aleksandra; Simeunovic, Vlado
2017-12-01
3D models of urban areas have found uses in the modern world such as navigation, cartography, urban planning visualization, construction, tourism and even new mobile navigation applications. With the advancement of technology there are much better solutions for mapping the earth's surface and spatial objects. A 3D city model enables exploration, analysis, management tasks and presentation of a city. Urban areas consist of terrain surfaces, buildings, vegetation and other parts of the city infrastructure such as city furniture. Nowadays there are many different methods for collecting, processing and publishing 3D models of an area of interest. LIDAR technology is one of the most effective methods for collecting data, due to the large amount of data that can be obtained with high density and geometrical accuracy. CityGML is an open standard data model for storing the alphanumeric and geometric attributes of a city; it defines five levels of detail (LoD0, LoD1, LoD2, LoD3, LoD4). In this study, the main aim is to represent part of the urban area of Novi Sad using LIDAR technology for data collection and different methods for information extraction, with CityGML as the standard for 3D representation. Using a series of programs, it is possible to process the collected data, transform it to CityGML and store it in a spatial database. The final product is a CityGML 3D model which can display textures and colours in order to give a better impression of the city. This paper shows results for the first three levels of detail, consisting of a digital terrain model and buildings with differentiated rooftops and differentiated boundary surfaces. The complete model gives us a realistic view of the 3D objects.
A Consistent Definition of Phase Resetting Using Hilbert Transform.
Oprisan, Sorinel A
2017-01-01
A phase resetting curve (PRC) measures the transient change in the phase of a neural oscillator subject to an external perturbation. The PRC encapsulates the dynamical response of a neural oscillator and, as a result, it is often used for predicting phase-locked modes in neural networks. While phase is a fundamental concept, it has multiple definitions that may lead to contradictory results. We used the Hilbert Transform (HT) to define the phase of the membrane potential oscillations and HT amplitude to estimate the PRC of a single neural oscillator. We found that HT's amplitude and its corresponding instantaneous frequency are very sensitive to membrane potential perturbations. We also found that the phase shift of HT amplitude between the pre- and poststimulus cycles gives an accurate estimate of the PRC. Moreover, HT phase does not suffer from the shortcomings of voltage threshold or isochrone methods and, as a result, gives accurate and reliable estimations of phase resetting.
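The Hilbert-transform phase used in this abstract can be sketched generically: the analytic signal is built in the frequency domain by zeroing negative frequencies and doubling positive ones, and its angle gives the instantaneous phase. A minimal NumPy version (not the author's code; the FFT construction mirrors the textbook definition, and the test signal is an assumption):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: keep DC, double positive frequencies,
    zero negative frequencies (textbook Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_phase(x):
    """Unwrapped instantaneous phase of the membrane-potential trace."""
    return np.unwrap(np.angle(analytic_signal(x)))
```

The PRC entry at a given stimulus phase is then estimated from the shift of this phase (or of the HT amplitude) between the pre- and poststimulus cycles.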
Mechanical properties of hydrogenated bilayer graphene
NASA Astrophysics Data System (ADS)
Andrew, R. C.; Mapasha, R. E.; Chetty, N.
2013-06-01
Using first-principles methods, we study the mechanical properties of monolayer and bilayer graphene with 50% and 100% coverage of hydrogen. We employ the vdW-DF, vdW-DF-C09x, and vdW-DF2-C09x van der Waals functionals for the exchange correlation interactions, which give significantly improved interlayer spacings and energies. We also use the PBE form of the generalized gradient corrected exchange correlation functional for comparison. We present a consistent theoretical framework for the in-plane layer modulus and the out-of-plane interlayer modulus, and we calculate these properties for these systems for the first time. This gives a measure of the change in strength properties when monolayer and bilayer graphene are hydrogenated. Moreover, by comparing the relative performance of these functionals in describing hydrogenated bilayer graphene, we also benchmark how they calculate the properties of graphite.
NASA Astrophysics Data System (ADS)
Rüdiger, Julian; Bobrowski, Nicole; Liotta, Marcello; Hoffmann, Thorsten
2017-04-01
Volcanoes are a potentially large source of several reactive atmospheric trace gases, including sulfur- and halogen-containing species. Besides its importance for atmospheric chemistry, detailed knowledge of halogen chemistry in volcanic plumes can give insights into subsurface processes. In this study a gas diffusion denuder sampling method, using a 1,3,5-trimethoxybenzene (1,3,5-TMB) coating for the derivatization of reactive halogen species (RHS), was characterized by dilution chamber experiments. The coating proved suitable for selectively collecting gaseous bromine species with oxidation states (OS) of +1 or 0 (such as Br2, BrCl, BrO(H) and BrONO2), while being insensitive to HBr (OS -1). The reaction of 1,3,5-TMB with reactive bromine species gives 1-bromo-2,4,6-trimethoxybenzene (1-bromo-2,4,6-TMB); other halogens give corresponding products. Solvent elution of the derivatized analytes and subsequent analysis with gas chromatography mass spectrometry gives detection limits of 10 ng or less for Br2, Cl2, and I2. In 2015 the method was applied to volcanic gas plumes at Mt. Etna (Italy), giving reactive bromine mixing ratios from 0.8 ppbv to 7.0 ppbv. Total bromine mixing ratios of 4.7 ppbv to 27.5 ppbv were obtained by simultaneous alkaline trap sampling (by a Raschig tube) followed by analysis with ion chromatography and inductively coupled plasma mass spectrometry. This yields the first in-situ measured ratios of reactive bromine to total bromine, spanning a range between 12±1% and 36±2%. Our finding is in agreement with previous model studies, which imply values < 44% for plume ages < 1 minute, consistent with the assumed plume age at the sampling sites.
Remote sensing of ocean currents
NASA Technical Reports Server (NTRS)
Goldstein, R. M.; Zebker, H. A.; Barnett, T. P.
1989-01-01
A method of remotely measuring near-surface ocean currents with a synthetic aperture radar (SAR) is described. The apparatus consists of a single SAR transmitter and two receiving antennas. The phase difference between SAR image scenes obtained from the antennas forms an interferogram that is directly proportional to the surface current. The first field test of this technique against conventional measurements gives estimates of mean currents accurate to order 20 percent, that is, root-mean-square errors of 5 to 10 centimeters per second in mean flows of 27 to 56 centimeters per second. If the full potential of the method could be realized with spacecraft, then it might be possible to routinely monitor the surface currents of the world's oceans.
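The proportionality between interferometric phase and surface current can be made concrete with the standard along-track interferometry relation φ = 4πvτ/λ, where τ = B/V is the effective time lag between the two antenna positions (baseline B, platform speed V). A hedged sketch, not the paper's processing chain; the constants and the exact geometric convention (e.g. whether τ carries a factor of two for phase-center separation) are assumptions:

```python
import math

def ati_surface_velocity(phase_rad, wavelength_m, baseline_m, platform_speed_ms):
    """Invert the along-track-interferometry relation phi = 4*pi*v*tau/lambda,
    with effective antenna time lag tau = B/V (geometric convention assumed)."""
    tau = baseline_m / platform_speed_ms
    return phase_rad * wavelength_m / (4.0 * math.pi * tau)
```

For illustration, with λ ≈ 0.24 m, a 20 m baseline and a 200 m/s platform, a 0.5 m/s radial surface current corresponds to a phase of roughly 2.6 rad.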
The nature of air pollution and the methods available for measuring it
Ellison, J. McK.
1965-01-01
At present the principal sources of energy in Europe are coal and oil and fuels derived from them, and in European towns air pollution consists mainly of their combustion products. These combustion products naturally divide into two categories, gaseous and particulate, which are very different chemically and which behave very differently when they are near collecting surfaces; they therefore require very different techniques both for collecting and for estimating samples. Some methods of measurement, suitable for everyday routine use in Europe, are described; these offer a compromise between completeness and economy, and can help to give a general outline of the air pollution situation without undue complexity or prohibitive cost. PMID:14315712
Galilean invariant resummation schemes of cosmological perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peloso, Marco; Pietroni, Massimo, E-mail: peloso@physics.umn.edu, E-mail: massimo.pietroni@unipr.it
2017-01-01
Many of the methods proposed so far to go beyond Standard Perturbation Theory break invariance under time-dependent boosts (denoted here as extended Galilean Invariance, or GI). This gives rise to spurious large scale effects which spoil the small scale predictions of these approximation schemes. By using consistency relations we derive fully non-perturbative constraints that GI imposes on correlation functions. We then introduce a method to quantify the amount of GI breaking of a given scheme, and to correct it by properly tailored counterterms. Finally, we formulate resummation schemes which are manifestly GI, discuss their general features, and implement them in the so-called Time-Flow, or TRG, equations.
Functional Wigner representation of quantum dynamics of Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Opanchuk, B.; Drummond, P. D.
2013-04-01
We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
Ranking of Prokaryotic Genomes Based on Maximization of Sortedness of Gene Lengths
Bolshoy, A; Salih, B; Cohen, I; Tatarinova, T
2014-01-01
How variations of gene lengths (some genes become longer than their predecessors, while other genes become shorter, and the sizes of these fractions differ randomly from organism to organism) depend on organismal evolution and adaptation is still an open question. We propose to rank genomes according to the lengths of their genes, and then find associations between genome rank and various properties, such as growth temperature, nucleotide composition, and pathogenicity. This approach reveals evolutionary driving factors. The main purpose of this study is to test the effectiveness and robustness of several ranking methods. The selected method of evaluation is measuring the overall sortedness of the data. We have demonstrated that all considered methods give consistent results and that Bubble Sort and Simulated Annealing achieve the highest sortedness. Also, Bubble Sort is considerably faster than the Simulated Annealing method. PMID:26146586
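The sortedness criterion can be sketched directly: the number of adjacent swaps Bubble Sort performs equals the Kendall-tau distance to perfectly sorted order, so normalizing by the maximum of n(n-1)/2 gives a sortedness score in [0, 1]. A small illustrative implementation (function names and the normalization are ours, not necessarily the paper's exact measure):

```python
def bubble_sort_swaps(seq):
    """Return a sorted copy of seq and the number of adjacent swaps performed
    (equal to the Kendall-tau distance from seq to sorted order)."""
    a = list(seq)
    swaps = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, swaps

def sortedness(seq):
    """1.0 for a perfectly sorted sequence, 0.0 for a reverse-sorted one."""
    n = len(seq)
    max_swaps = n * (n - 1) // 2
    _, swaps = bubble_sort_swaps(seq)
    return 1.0 - swaps / max_swaps if max_swaps else 1.0
```

Ranking genomes then amounts to choosing the gene ordering (or genome permutation) that maximizes this score.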
On the Reliability of Individual Brain Activity Networks.
Cassidy, Ben; Bowman, F DuBois; Rae, Caroline; Solo, Victor
2018-02-01
There is intense interest in fMRI research on whole-brain functional connectivity; however, two fundamental issues are still unresolved: the impact of spatiotemporal data resolution (spatial parcellation and temporal sampling) and the impact of the network construction method on the reliability of functional brain networks. In particular, the impact of spatiotemporal data resolution on the resulting connectivity findings has not been sufficiently investigated. In fact, a number of studies have already observed that functional networks often lead to different conclusions across different parcellation scales. If the interpretations from functional networks are inconsistent across spatiotemporal scales, then the validity of the whole functional network paradigm is called into question. This paper investigates the consistency of resting-state network structure when using different temporal sampling, different spatial parcellation, or different methods for constructing the networks. To pursue this, we develop a novel network comparison framework based on persistent homology from topological data analysis. We use the new network comparison tools to characterize the spatial and temporal scales under which consistent functional networks can be constructed. The methods are illustrated on Human Connectome Project data, showing that the DISCOH2 network construction method outperforms other approaches at most spatiotemporal data resolutions.
Multiple rotation assessment through isothetic fringes in speckle photography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angel, Luciano; Tebaldi, Myrian; Bolognini, Nestor
2007-05-10
The use of different pupils for storing each speckled image in speckle photography is employed to determine multiple in-plane rotations. The method consists of recording a four-exposure specklegram where the rotations are done between exposures. This specklegram is then optically processed in a whole field approach rendering isothetic fringes, which give detailed information about the multiple rotations. It is experimentally demonstrated that the proposed arrangement permits the depiction of six isothetics in order to measure either six different angles or three nonparallel components for two local general in-plane displacements.
Chirality sensing with stereodynamic biphenolate zinc complexes.
Bentley, Keith W; de Los Santos, Zeus A; Weiss, Mary J; Wolf, Christian
2015-10-01
Two bidentate ligands consisting of a fluxional polyarylacetylene framework with terminal phenol groups were synthesized. Reaction with diethylzinc gives stereodynamic complexes that undergo distinct asymmetric transformation of the first kind upon binding of chiral amines and amino alcohols. The substrate-to-ligand chirality imprinting at the zinc coordination sphere results in characteristic circular dichroism signals that can be used for direct enantiomeric excess (ee) analysis. This chemosensing approach bears potential for high-throughput ee screening with small sample amounts and reduced solvent waste compared to traditional high-performance liquid chromatography methods. © 2015 Wiley Periodicals, Inc.
Darboux coordinates and instanton corrections in projective superspace
NASA Astrophysics Data System (ADS)
Crichigno, P. Marcos; Jain, Dharmesh
2012-10-01
By demanding consistency of the Legendre transform construction of hyperkähler metrics in projective superspace, we derive the expression for the Darboux coordinates on the hyperkähler manifold. We apply these results to study the Coulomb branch moduli space of 4D, N = 2 super-Yang-Mills theory (SYM) on R^3 × S^1, recovering the results by GMN. We also apply this method to study the electric corrections to the moduli space of 5D, N = 1 SYM on R^3 × T^2 and give the Darboux coordinates explicitly.
Statistical analysis of loopy belief propagation in random fields
NASA Astrophysics Data System (ADS)
Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki
2015-10-01
Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
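For context, the LBP whose quenched average is analyzed in this work is plain sum-product message passing on a pairwise MRF. A minimal sketch follows; this is generic LBP, not the replica cluster variation analysis itself, and the potentials, names, and synchronous update schedule are illustrative. On tree-structured models the resulting beliefs are exact marginals.

```python
import numpy as np

def lbp_marginals(n_states, unary, pairwise, edges, iters=50):
    """Synchronous sum-product loopy belief propagation on a pairwise MRF.
    unary[i]: (n_states,) node potentials; pairwise[(i, j)]: (n_states, n_states)
    edge potentials indexed [x_i, x_j]; edges: list of undirected (i, j) pairs."""
    msgs = {}
    for i, j in edges:
        msgs[(i, j)] = np.ones(n_states) / n_states
        msgs[(j, i)] = np.ones(n_states) / n_states
    neighbors = {v: [] for v in range(len(unary))}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # product of the unary potential and all incoming messages except j's
            belief = unary[i].copy()
            for k in neighbors[i]:
                if k != j:
                    belief *= msgs[(k, i)]
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            m = psi.T @ belief   # marginalize over x_i
            new[(i, j)] = m / m.sum()
        msgs = new
    out = []
    for i in range(len(unary)):
        b = unary[i].copy()
        for k in neighbors[i]:
            b *= msgs[(k, i)]
        out.append(b / b.sum())
    return out
```

The Bethe free energy evaluated at the fixed point of these messages is the quantity whose average over random fields the paper computes analytically.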
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-05-01
The Transient Reactor Analysis Code (TRAC) is being developed at the Los Alamos Scientific Laboratory (LASL) to provide an advanced ''best estimate'' predictive capability for the analysis of postulated accidents in light water reactors (LWRs). TRAC-P1 provides this analysis capability for pressurized water reactors (PWRs) and for a wide variety of thermal-hydraulic experimental facilities. It features a three-dimensional treatment of the pressure vessel and associated internals; two-phase nonequilibrium hydrodynamics models; flow-regime-dependent constitutive equation treatment; reflood tracking capability for both bottom flood and falling film quench fronts; and consistent treatment of entire accident sequences including the generation of consistent initial conditions. The TRAC-P1 User's Manual is composed of two separate volumes. Volume I gives a description of the thermal-hydraulic models and numerical solution methods used in the code. Detailed programming and user information is also provided. Volume II presents the results of the developmental verification calculations.
Matsunaga, Yasuhiro; Sugita, Yuji
2018-01-01
Single-molecule experiments and molecular dynamics (MD) simulations are indispensable tools for investigating protein conformational dynamics. The former provide time-series data, such as donor-acceptor distances, whereas the latter give atomistic information, although this information is often biased by model parameters. Here, we devise a machine-learning method to combine the complementary information from the two approaches and construct a consistent model of conformational dynamics. It is applied to the folding dynamics of the formin-binding protein WW domain. MD simulations over 400 μs led to an initial Markov state model (MSM), which was then "refined" using single-molecule Förster resonance energy transfer (FRET) data through hidden Markov modeling. The refined or data-assimilated MSM reproduces the FRET data and features hairpin one in the transition-state ensemble, consistent with mutation experiments. The folding pathway in the data-assimilated MSM suggests interplay between hydrophobic contacts and turn formation. Our method provides a general framework for investigating conformational transitions in other proteins. PMID:29723137
Interpreting findings from Mendelian randomization using the MR-Egger method.
Burgess, Stephen; Thompson, Simon G
2017-05-01
Mendelian randomization-Egger (MR-Egger) is an analysis method for Mendelian randomization using summarized genetic data. MR-Egger consists of three parts: (1) a test for directional pleiotropy, (2) a test for a causal effect, and (3) an estimate of the causal effect. While conventional analysis methods for Mendelian randomization assume that all genetic variants satisfy the instrumental variable assumptions, the MR-Egger method is able to assess whether genetic variants have pleiotropic effects on the outcome that differ on average from zero (directional pleiotropy), as well as to provide a consistent estimate of the causal effect, under a weaker assumption-the InSIDE (INstrument Strength Independent of Direct Effect) assumption. In this paper, we provide a critical assessment of the MR-Egger method with regard to its implementation and interpretation. While the MR-Egger method is a worthwhile sensitivity analysis for detecting violations of the instrumental variable assumptions, there are several reasons why causal estimates from the MR-Egger method may be biased and have inflated Type 1 error rates in practice, including violations of the InSIDE assumption and the influence of outlying variants. The issues raised in this paper have potentially serious consequences for causal inferences from the MR-Egger approach. We give examples of scenarios in which the estimates from conventional Mendelian randomization methods and MR-Egger differ, and discuss how to interpret findings in such cases.
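The three parts of MR-Egger reduce to a weighted linear regression of variant-outcome associations on variant-exposure associations with an unconstrained intercept: the intercept estimates average directional pleiotropy and the slope estimates the causal effect. A bare-bones sketch with inverse-variance weights; the standard-error and significance-test machinery of a full implementation is omitted, and function names are ours:

```python
import numpy as np

def mr_egger(beta_exposure, beta_outcome, se_outcome):
    """Weighted MR-Egger regression of variant-outcome on variant-exposure
    associations. Returns (intercept, slope): intercept = average directional
    pleiotropy, slope = causal-effect estimate (illustrative sketch)."""
    bx = np.asarray(beta_exposure, float)
    by = np.asarray(beta_outcome, float)
    w = 1.0 / np.asarray(se_outcome, float) ** 2
    X = np.column_stack([np.ones_like(bx), bx])
    # weighted least squares via square-root-weight rescaling
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], by * sw, rcond=None)
    intercept, slope = coef
    return intercept, slope
```

A nonzero intercept flags directional pleiotropy; under the InSIDE assumption the slope remains a consistent causal estimate even then.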
NASA Technical Reports Server (NTRS)
Wallis, Graham B.
1989-01-01
Some features of two recent approaches of two-phase potential flow are presented. The first approach is based on a set of progressive examples that can be analyzed using common techniques, such as conservation laws, and taken together appear to lead in the direction of a general theory. The second approach is based on variational methods, a classical approach to conservative mechanical systems that has a respectable history of application to single phase flows. This latter approach, exemplified by several recent papers by Geurst, appears generally to be consistent with the former approach, at least in those cases for which it is possible to obtain comparable results. Each approach has a justifiable theoretical base and is self-consistent. Moreover, both approaches appear to give the right prediction for several well-defined situations.
Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf
2016-06-30
In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
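The ssEA estimator is the Zwanzig exponential-averaging formula applied in a single MM-to-QM step, ΔF = -kT ln⟨exp(-ΔE/kT)⟩_MM, with the second-order cumulant expansion as a lower-variance alternative. A generic numerical sketch, not the authors' code; the default kT (≈ 2.494 kJ/mol at 300 K) and function names are illustrative:

```python
import numpy as np

def ssea_free_energy(dE, kT=2.494):
    """Single-step exponential averaging (Zwanzig): dF = -kT ln<exp(-dE/kT)>,
    evaluated with a numerically stable log-sum-exp. dE in the same units as kT."""
    dE = np.asarray(dE, float)
    b = -dE / kT
    m = b.max()
    return -kT * (m + np.log(np.mean(np.exp(b - m))))

def ssea_cumulant(dE, kT=2.494):
    """Second-order cumulant expansion of the same estimator:
    dF ~ <dE> - Var(dE) / (2 kT)."""
    dE = np.asarray(dE, float)
    return dE.mean() - dE.var() / (2.0 * kT)
```

When the two estimators disagree substantially, that is a standard warning sign of poor overlap between the MM and QM energy surfaces.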
Enhancing the control of force in putting by video game training.
Fery, Y A; Ponserre, S
2001-10-10
Even if golf video games provide no proprioceptive afference from the actual putting movement, they may give sufficient substitutive visual cues to enhance force control in this skill. It was hypothesized that this usefulness requires, however, two conditions: the video game must provide reliable demonstrations of actual putts, and the user must want to use the game to make progress in actual putting. Accordingly, a video game was selected on the basis of its fidelity to the real-world game. It allowed two different methods of adjusting the virtual player's putting force in order to hole a putt: an analogue method, which consisted of focusing on the virtual player's movement, and a symbolic method, which consisted of focusing on the movement of a gauge on a scale representing the virtual player's putting force. The participants had to use one of these methods either with the intention of making progress in actual putting or, in a second condition, simply to enjoy the game. Results showed a positive transfer of video playing to actual putting skill for the learning group and also, to a lesser degree, for the enjoyment group, but only when they used the symbolic method. Results are discussed in the context of how vision may convey force cues in sports video games.
An Evaluation of Empathic Listening in Telephone Counseling
ERIC Educational Resources Information Center
Libow, Judith A.; Doty, David W.
1976-01-01
Two counseling-analogue studies compared empathic-listening and active advice-giving styles of telephone counseling with college undergraduate participants. Results consistently indicated significant participant preference for active advice giving on overall call evaluation and on the two major factors (Helpfulness of Call and Helper Likability)…
Prosocial behavior leads to happiness in a small-scale rural society.
Aknin, Lara B; Broesch, Tanya; Hamlin, J Kiley; Van de Vondervoort, Julia W
2015-08-01
Humans are extraordinarily prosocial, and research conducted primarily in North America indicates that giving to others is emotionally rewarding. To examine whether the hedonic benefits of giving represent a universal feature of human behavior, we extended upon previous cross-cultural examinations by investigating whether inhabitants of a small-scale, rural, and isolated village in Vanuatu, where villagers have little influence from urban, Western culture, survive on subsistence farming without electricity, and have minimal formal education, report or display emotional rewards from engaging in prosocial (vs. personally beneficial) behavior. In Study 1, adults were randomly assigned to purchase candy for either themselves or others and then reported their positive affect. Consistent with previous research, adults purchasing goods for others reported greater positive emotion than adults receiving resources for themselves. In Study 2, 2- to 5-year-old children received candy and were subsequently asked to engage in costly giving (sharing their own candy with a puppet) and non-costly giving (sharing the experimenter's candy with a puppet). Emotional expressions were video-recorded during the experiment and later coded for happiness. Consistent with previous research conducted in Canada, children displayed more happiness when giving treats away than when receiving treats themselves. Moreover, the emotional rewards of giving were largest when children engaged in costly (vs. non-costly) giving. Taken together, these findings indicate that the emotional rewards of giving are detectable in people living in diverse societies and support the possibility that the hedonic benefits of generosity are universal. (c) 2015 APA, all rights reserved.
NASA Technical Reports Server (NTRS)
Duncan, Comer; Jones, Jim
1993-01-01
A key ingredient in the simulation of self-gravitating astrophysical fluid dynamical systems is the gravitational potential and its gradient. This paper focuses on the development of a mixed method multigrid solver of the Poisson equation formulated so that both the potential and the Cartesian components of its gradient are self-consistently and accurately generated. The method achieves this goal by formulating the problem as a system of four equations for the gravitational potential and the three Cartesian components of the gradient and solves them using a distributed relaxation technique combined with conventional full multigrid V-cycles. The method is described, some tests are presented, and the accuracy of the method is assessed. We also describe how the method has been incorporated into our three-dimensional hydrodynamics code and give an example of an application to the collision of two stars. We end with some remarks about the future developments of the method and some of the applications in which it will be used in astrophysics.
NASA Astrophysics Data System (ADS)
Isobe, Tomoharu; Kuwahara, Riichi; Ohno, Kaoru
2018-06-01
The one-shot GW method, beginning with the local density approximation (LDA), enables one to calculate photoemission and inverse photoemission spectra. In order to calculate photoabsorption spectra, one had to additionally solve the Bethe-Salpeter equation (BSE) for the two-particle (electron-hole) Green's function, which doubly induces evaluation errors. It has been recently reported that the GW+BSE method significantly underestimates the experimental photoabsorption energies (PAEs) of small molecules. In order to avoid these problems, we propose to apply the GW(Γ) method not to the neutral ground state but to the cationic state to calculate PAEs without solving the BSE, which allows a rigorous one-to-one correspondence between the photoabsorption peak and the "extended" quasiparticle level. We applied the self-consistent linearized GWΓ method including the vertex correction Γ to our method, and found that this method gives the PAEs of B, Na3, and Li3 to within 0.1 eV accuracy.
NASA Astrophysics Data System (ADS)
da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham
2017-06-01
The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.
Consistency and Enhancement Processes in Understanding Emotions
ERIC Educational Resources Information Center
Stets, Jan E.; Asencio, Emily K.
2008-01-01
Many theories in the sociology of emotions assume that emotions emerge from the cognitive consistency principle. Congruence among cognitions produces good feelings whereas incongruence produces bad feelings. A work situation is simulated in which managers give feedback to workers that is consistent or inconsistent with what the workers expect to…
Functional Wigner representation of quantum dynamics of Bose-Einstein condensate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Opanchuk, B.; Drummond, P. D.
2013-04-15
We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
The response analysis of fractional-order stochastic system via generalized cell mapping method.
Wang, Liang; Xue, Lili; Sun, Chunyan; Yue, Xiaole; Xu, Wei
2018-01-01
This paper is concerned with the response of a fractional-order stochastic system. The short memory principle is introduced to ensure that the response of the system is a Markov process. The generalized cell mapping method is applied to display the global dynamics of the noise-free system, such as attractors, basins of attraction, basin boundary, saddle, and invariant manifolds. The stochastic generalized cell mapping method is employed to obtain the evolutionary process of probability density functions of the response. The fractional-order ϕ6 oscillator and the fractional-order smooth and discontinuous oscillator are taken as examples to give the implementations of our strategies. Studies have shown that the evolutionary direction of the probability density function of the fractional-order stochastic system is consistent with the unstable manifold. The effectiveness of the method is confirmed using Monte Carlo results.
Space-time interface-tracking with topology change (ST-TC)
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Buscher, Austin; Asada, Shohei
2014-10-01
To address the computational challenges associated with contact between moving interfaces, such as those in cardiovascular fluid-structure interaction (FSI), parachute FSI, and flapping-wing aerodynamics, we introduce a space-time (ST) interface-tracking method that can deal with topology change (TC). In cardiovascular FSI, our primary target is heart valves. The method is a new version of the deforming-spatial-domain/stabilized space-time (DSD/SST) method, and we call it ST-TC. It includes a master-slave system that maintains the connectivity of the "parent" mesh when there is contact between the moving interfaces. It is an efficient, practical alternative to using unstructured ST meshes, but without giving up on the accurate representation of the interface or consistent representation of the interface motion. We explain the method with conceptual examples and present 2D test computations with models representative of the classes of problems we are targeting.
SPRAI: coupling of radiative feedback and primordial chemistry in moving mesh hydrodynamics
NASA Astrophysics Data System (ADS)
Jaura, O.; Glover, S. C. O.; Klessen, R. S.; Paardekooper, J.-P.
2018-04-01
In this paper, we introduce a new radiative transfer code SPRAI (Simplex Photon Radiation in the Arepo Implementation) based on the SIMPLEX radiation transfer method. This method, originally used only for post-processing, is now directly integrated into the AREPO code and takes advantage of its adaptive unstructured mesh. Radiated photons are transferred from the sources through the series of Voronoi gas cells within a specific solid angle. From the photon attenuation, we derive corresponding photon fluxes and ionization rates and feed them to a primordial chemistry module. This gives us a self-consistent method for studying dynamical and chemical processes caused by ionizing sources in primordial gas. Since the computational cost of the SIMPLEX method does not scale directly with the number of sources, it is convenient for studying systems such as primordial star-forming haloes that may form multiple ionizing sources.
NASA Astrophysics Data System (ADS)
Ginting, Nurlisa; Febriandy
2018-03-01
The Toba Caldera is considered a unique tourist destination, as it was formed by the volcanic eruption of the Toba volcano, with Parbaba Village as its attraction. Geotourism, which consists of the administrator, education, uniqueness, accessibility, supporting facilities, and the increase of the local people's economy as its elements, is one of the concepts which can be implemented in this case. The objective of this research is to find a solution to increase the natural tourist attraction in Parbaba Village by making a tourist area development recommendation based on the geotourism elements above. This research uses a mixed method: the qualitative data were collected by observation and interviews with stakeholders, and the quantitative data were collected by giving out 100 questionnaires to tourists and local people. The data were then analyzed using the triangulation method. The result of this research is a concept of tourist attraction development recommendation. This research is expected to give benefits in the form of education and travel experience for tourists and also to increase the economy of the local people as a developer. The uniqueness element of this village is quite strong, whereas the supporting facilities are still lacking.
Multiscale spatial and temporal estimation of the b-value
NASA Astrophysics Data System (ADS)
García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.
2017-12-01
The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of the seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals having an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), basically consists in computing estimates of the b-value at multiple temporal and spatial scales, extracting for a given spatio-temporal point a statistical estimator of the value, as well as an indication of the characteristic spatio-temporal scale. This approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties over both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic areas (Central Italy), as well as an example application at global scale.
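The abstract does not reproduce the estimator itself; the standard building block behind this kind of b-value mapping is Aki's (1965) maximum-likelihood formula, sketched below for a single window of a catalogue. The function name and the omission of the magnitude-binning correction are illustrative choices, not part of MUST-B.

```python
import math

def estimate_b(magnitudes, mc):
    """Aki (1965) maximum-likelihood b-value estimate.

    b = log10(e) / (mean(M) - Mc) for events with M >= Mc.
    For binned magnitudes, Mc is usually reduced by half the bin
    width; that correction is omitted here for brevity.
    """
    events = [m for m in magnitudes if m >= mc]
    if not events:
        raise ValueError("no events at or above the completeness magnitude")
    mean_excess = sum(events) / len(events) - mc
    return math.log10(math.e) / mean_excess
```

A multiscale scheme like the one described would evaluate such an estimator over many nested spatial and temporal windows and extract, per spatio-temporal point, a statistic of the resulting values.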
Code of Federal Regulations, 2012 CFR
2012-07-01
... individuals who are members of special populations. Examples: Methods by which an eligible recipient may give... special populations include, but are not limited to, the following: Example 1: Method to give priority to...: Method to give priority to a limited number of program areas. Based on data from the preceding fiscal...
Code of Federal Regulations, 2014 CFR
2014-07-01
... individuals who are members of special populations. Examples: Methods by which an eligible recipient may give... special populations include, but are not limited to, the following: Example 1: Method to give priority to...: Method to give priority to a limited number of program areas. Based on data from the preceding fiscal...
Code of Federal Regulations, 2013 CFR
2013-07-01
... individuals who are members of special populations. Examples: Methods by which an eligible recipient may give... special populations include, but are not limited to, the following: Example 1: Method to give priority to...: Method to give priority to a limited number of program areas. Based on data from the preceding fiscal...
NASA Astrophysics Data System (ADS)
Grappone, J. M., Jr.; Biggin, A. J.; Barrett, T. J.; Hill, M. J.
2017-12-01
Deep in the Earth, thermodynamic behavior drives the geodynamo and creates the Earth's magnetic field. Determining how the strength of the field, its paleointensity (PI), varies with time is vital to our understanding of Earth's evolution. Thellier-style paleointensity experiments assume the presence of non-interacting, single domain (SD) magnetic particles, which follow Thellier's laws. Most natural rocks, however, contain larger, multi-domain (MD) or interacting single domain (ISD) particles that often violate these laws and cause experiments to fail. Even for samples that pass reliability criteria designed to minimize the impact of MD or ISD grains, different PI techniques can give systematically different estimates, implying violation of Thellier's laws. Our goal is to identify any disparities in PI results that may be explainable by protocol-specific MD and ISD behavior and determine optimum methods to maximize accuracy. Volcanic samples from the Hawai'ian SOH1 borehole previously produced method-dependent PI estimates. Previous studies showed consistently lower PI values when using a microwave (MW) system and the perpendicular method than using the original thermal Thellier-Thellier (OT) technique. However, the data were ambiguous regarding the cause of the discrepancy. The diverging estimates appeared to be either the result of using OT instead of the perpendicular method or the result of using MW protocols instead of thermal protocols. Comparison experiments were conducted using the thermal perpendicular method and microwave OT technique to bridge the gap. Preliminary data generally show that the perpendicular method gives lower estimates than OT for comparable Hlab values. MW estimates are also generally lower than thermal estimates using the same protocol.
Properties of a Formal Method for Prediction of Emergent Behaviors in Swarm-based Systems
NASA Technical Reports Server (NTRS)
Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James
2004-01-01
Autonomous intelligent swarms of satellites are being proposed for NASA missions that have complex behaviors and interactions. The emergent properties of swarms make these missions powerful, but at the same time more difficult to design and to assure that proper behaviors will emerge. This paper gives the results of research into formal methods techniques for verification and validation of NASA swarm-based missions. Multiple formal methods were evaluated to determine their effectiveness in modeling and assuring the behavior of swarms of spacecraft, with the NASA ANTS mission used as an example of swarm intelligence to which to apply them. We give partial specifications of the ANTS mission using four selected methods, an evaluation of those methods, and the properties a formal method needs for effective specification and prediction of emergent behavior in swarm-based systems.
NASA Astrophysics Data System (ADS)
Phillips, Jordan J.; Zgid, Dominika
2014-06-01
We report an implementation of self-consistent Green's function many-body theory within a second-order approximation (GF2) for application with molecular systems. This is done by iterative solution of the Dyson equation expressed in matrix form in an atomic orbital basis, where the Green's function and self-energy are built on the imaginary frequency and imaginary time domain, respectively, and fast Fourier transform is used to efficiently transform these quantities as needed. We apply this method to several archetypical examples of strong correlation, such as a H32 finite lattice that displays a highly multireference electronic ground state even at equilibrium lattice spacing. In all cases, GF2 gives a physically meaningful description of the metal to insulator transition in these systems, without resorting to spin-symmetry breaking. Our results show that self-consistent Green's function many-body theory offers a viable route to describing strong correlations while remaining within a computationally tractable single-particle formalism.
Self-consistent theory of nanodomain formation on non-polar surfaces of ferroelectrics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozovska, Anna N.; Obukhovskii, Vyacheslav; Fomichov, Evhen
2016-04-28
We propose a self-consistent theoretical approach capable of describing the features of the anisotropic nanodomain formation induced by a strongly inhomogeneous electric field of a charged scanning probe microscopy tip on nonpolar cuts of ferroelectrics. We obtained that a threshold field, previously regarded as an isotropic parameter, is an anisotropic function that is specified from the polar properties and lattice pinning anisotropy of a given ferroelectric in a self-consistent way. The proposed method for the calculation of the anisotropic threshold field is not material specific, thus the field should be anisotropic in all ferroelectrics with the spontaneous polarization anisotropy along the main crystallographic directions. The most evident examples are uniaxial ferroelectrics, layered ferroelectric perovskites, and low-symmetry incommensurate ferroelectrics. Obtained results quantitatively describe the differences of several times in the nanodomain length experimentally observed on X and Y cuts of LiNbO3 and can give insight into the anisotropic dynamics of nanoscale polarization reversal in strongly inhomogeneous electric fields.
Rishikesh, N.; Quélennec, G.
1983-01-01
Vector resistance and other constraints have necessitated consideration of the use of alternative materials and methods in an integrated approach to vector control. Bacillus thuringiensis serotype H-14 is a promising biological control agent which acts as a conventional larvicide through its delta-endotoxin (active ingredient) and which now has to be suitably formulated for application in vector breeding habitats. The active ingredient in the formulations has so far not been chemically characterized or quantified and therefore recourse has to be taken to a bioassay method. Drawing on past experience and through the assistance mainly of various collaborating centres, the World Health Organization has standardized a bioassay method (described in the Annex), which gives consistent and reproducible results. The method permits the determination of the potency of a B.t. H-14 preparation through comparison with a standard powder. The universal adoption of the standardized bioassay method will ensure comparability of the results of different investigators. PMID:6601545
NASA Technical Reports Server (NTRS)
Smith, C. W.; Bhateley, I. C.
1976-01-01
Two techniques for extending the range of applicability of the basic vortex-lattice method are discussed. The first improves the computation of aerodynamic forces on thin, low-aspect-ratio wings of arbitrary planforms at subsonic Mach numbers by including the effects of leading-edge and tip vortex separation, characteristic of this type wing, through use of the well-known suction-analogy method of E. C. Polhamus. Comparisons with experimental data for a variety of planforms are presented. The second consists of the use of the vortex-lattice method to predict pressure distributions over thick multi-element wings (wings with leading- and trailing-edge devices). A method of laying out the lattice is described which gives accurate pressures on the top and part of the bottom surface of the wing. Limited comparisons between the result predicted by this method, the conventional lattice arrangement method, experimental data, and 2-D potential flow analysis techniques are presented.
Baker, Teesha C; Tymm, Fiona J M; Murch, Susan J
2018-01-01
β-N-Methylamino-L-alanine (BMAA) is a naturally occurring non-protein amino acid produced by cyanobacteria, accumulated through natural food webs, found in mammalian brain tissues. Recent evidence indicates an association between BMAA and neurological disease. The accurate detection and quantification of BMAA in food and environmental samples are critical to understanding BMAA metabolism and limiting human exposure. To date, there have been more than 78 reports on BMAA in cyanobacteria and human samples, but different methods give conflicting data and divergent interpretations in the literature. The current work was designed to determine whether orthogonal chromatography and mass spectrometry methods give consistent data interpretation from a single sample matrix using the three most common analytical methods. The methods were recreated as precisely as possible from the literature with optimization of the mass spectrometry parameters specific to the instrument. Four sample matrices, cyanobacteria, human brain, blue crab, and Spirulina, were analyzed as 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) derivatives, propyl chloroformate (PCF) derivatives separated by reverse phase chromatography, or underivatized extracts separated by HILIC chromatography. The three methods agreed on positive detection of BMAA in cyanobacteria and no detected BMAA in the sample of human brain matrix. Interpretation was less clear for a sample of blue crab which was strongly positive for BMAA by AQC and PCF but negative by HILIC and for four spirulina raw materials that were negative by PCF but positive by AQC and HILIC. Together, these data demonstrate that the methods gave different results and that the choices in interpretation of the methods determined whether BMAA was detected. Failure to detect BMAA cannot be considered proof of absence.
NASA Astrophysics Data System (ADS)
Parekh, Vishwa S.; Jacobs, Jeremy R.; Jacobs, Michael A.
2014-03-01
The evaluation and treatment of acute cerebral ischemia requires a technique that can determine the total area of tissue at risk for infarction using diagnostic magnetic resonance imaging (MRI) sequences. Typical MRI data sets consist of T1- and T2-weighted imaging (T1WI, T2WI) along with advanced MRI parameters of diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) methods. Each of these parameters has distinct radiological-pathological meaning. For example, DWI interrogates the movement of water in the tissue and PWI gives an estimate of the blood flow; both are critical measures during the evolution of stroke. In order to integrate these data and give an estimate of the tissue at risk or damaged, we have developed advanced machine learning methods based on unsupervised non-linear dimensionality reduction (NLDR) techniques. NLDR methods are a class of algorithms that use mathematically defined manifolds for statistical sampling of multidimensional classes to generate a discrimination rule of guaranteed statistical accuracy, and they can generate a two- or three-dimensional map which represents the prominent structures of the data and provides an embedded image of meaningful low-dimensional structures hidden in their high-dimensional observations. In this manuscript, we develop NLDR methods on high-dimensional MRI data sets of preclinical animals and clinical patients with stroke. On analyzing the performance of these methods, we observed a high degree of similarity between the multiparametric embedded images from NLDR methods and the ADC and perfusion maps. It was also observed that the embedded scattergram of abnormal (infarcted or at-risk) tissue can be visualized and provides a mechanism for automatic methods to delineate potential stroke volumes and early tissue at risk.
Second-Order Perturbation Theory for Generalized Active Space Self-Consistent-Field Wave Functions.
Ma, Dongxia; Li Manni, Giovanni; Olsen, Jeppe; Gagliardi, Laura
2016-07-12
A multireference second-order perturbation theory approach based on the generalized active space self-consistent-field (GASSCF) wave function is presented. Compared with the complete active space (CAS) and restricted active space (RAS) wave functions, GAS wave functions are more flexible and can employ larger active spaces and/or different truncations of the configuration interaction expansion. With GASSCF, one can explore chemical systems that are not affordable with either CASSCF or RASSCF. Perturbation theory to second order on top of GAS wave functions (GASPT2) has been implemented to recover the remaining electron correlation. The method has been benchmarked by computing the chromium dimer ground-state potential energy curve. These calculations show that GASPT2 gives results similar to CASPT2 even with a configuration interaction expansion much smaller than the corresponding CAS expansion.
Geometric integration in Born-Oppenheimer molecular dynamics.
Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N
2011-12-14
Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time reversible Verlet, (2) second order optimal symplectic, and (3) third order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite arithmetics in a perfect reversible dynamics. © 2011 American Institute of Physics
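The extended-Lagrangian machinery is beyond a short sketch, but the energy-conserving behavior of geometric integrators is easy to demonstrate. Below is a minimal velocity Verlet integrator (scheme 1 of the three compared) applied to a harmonic oscillator, an assumed toy stand-in for the Born-Oppenheimer forces:

```python
def velocity_verlet(x, v, force, dt, steps):
    """Time-reversible velocity Verlet: one force evaluation per step."""
    a = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x)
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v

# Harmonic oscillator: F = -x, total energy E = (v^2 + x^2) / 2.
x0, v0 = 1.0, 0.0
x, v = velocity_verlet(x0, v0, lambda q: -q, dt=0.01, steps=10000)
e0 = 0.5 * (v0 ** 2 + x0 ** 2)
e1 = 0.5 * (v ** 2 + x ** 2)
```

Because the scheme is symplectic and time reversible, the total energy oscillates within a narrow band instead of drifting, even over many periods; this is the property that the dissipative extension must preserve while damping numerical noise.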
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru
2007-10-01
Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative spectrophotometry (CDS). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the CDS approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine and by applying them to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
NASA Astrophysics Data System (ADS)
Perotti, Juan Ignacio; Tessone, Claudio Juan; Caldarelli, Guido
2015-12-01
The quest for a quantitative characterization of community and modular structure of complex networks produced a variety of methods and algorithms to classify different networks. However, it is not clear if such methods provide consistent, robust, and meaningful results when considering hierarchies as a whole. Part of the problem is the lack of a similarity measure for the comparison of hierarchical community structures. In this work we give a contribution by introducing the hierarchical mutual information, which is a generalization of the traditional mutual information and makes it possible to compare hierarchical partitions and hierarchical community structures. The normalized version of the hierarchical mutual information should behave analogously to the traditional normalized mutual information. Here the correct behavior of the hierarchical mutual information is corroborated on an extensive battery of numerical experiments. The experiments are performed on artificial hierarchies and on the hierarchical community structure of artificial and empirical networks. Furthermore, the experiments illustrate some of the practical applications of the hierarchical mutual information, namely the comparison of different community detection methods and the study of the consistency, robustness, and temporal evolution of the hierarchical modular structure of networks.
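The hierarchical mutual information generalizes the traditional mutual information between flat partitions. As a reference point, here is the flat building block with its geometric-mean normalization, stdlib only; the hierarchical version, which recurses over the levels of two hierarchies, is not reproduced here.

```python
import math
from collections import Counter

def mutual_information(a, b):
    """MI (in nats) between two partitions given as label sequences."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    joint = Counter(zip(a, b))
    return sum(
        (nij / n) * math.log((nij / n) / ((pa[i] / n) * (pb[j] / n)))
        for (i, j), nij in joint.items()
    )

def nmi(a, b):
    """Normalized MI in [0, 1] using the geometric mean of the entropies."""
    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())
    ha, hb = entropy(a), entropy(b)
    if ha == 0 or hb == 0:
        return 0.0
    return mutual_information(a, b) / math.sqrt(ha * hb)
```

Identical partitions (up to relabeling) score 1, statistically independent ones score 0; the hierarchical variant is built so that its normalized form behaves the same way on whole hierarchies.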
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
NASA Astrophysics Data System (ADS)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
Ferromagnetism in LaCoO3 nanoparticles
NASA Astrophysics Data System (ADS)
Zhou, Shiming; Shi, Lei; Zhao, Jiyin; He, Laifa; Yang, Haipeng; Zhang, Shangming
2007-11-01
We have investigated the structural and magnetic properties of LaCoO3 nanoparticles prepared by a sol-gel method. A ferromagnetic order with TC˜85K has been observed in the nanoparticles. The infrared spectra give evidence for a stabilizing of higher spin state and a reduced Jahn-Teller distortion in the nanoparticles with respect to the bulk LaCoO3 , which is consistent with the recent reports in the strained films [Phys. Rev. B 75, 144402 (2007)] and proposed to be the possible origin of the observed ferromagnetic order in LaCoO3 .
Lelièvre, Dominique; Terrier, Victor P; Delmas, Agnès F; Aucagne, Vincent
2016-03-04
The Fmoc-based solid phase synthesis of C-terminal cysteine-containing peptides is problematic, due to side reactions provoked by the pronounced acidity of the Cα proton of cysteine esters. We herein describe a general strategy consisting of the postsynthetic introduction of the C-terminal Cys through a key chemoselective native chemical ligation reaction with N-Hnb-Cys peptide crypto-thioesters. This method was successfully applied to the demanding peptide sequences of two natural products of biological interest, giving remarkably high overall yields compared to that of a state of the art strategy.
Um, Chanchamnan; Chemler, Sherry R
2016-05-20
2-Arylpyrrolidines occur frequently in bioactive compounds, and thus, methods to access them from readily available reagents are valuable. We report a copper-catalyzed intermolecular carboamination of vinylarenes with potassium N-carbamoyl-β-aminoethyltrifluoroborates. The reaction occurs with terminal, 1,2-disubstituted, and 1,1-disubstituted vinylarenes bearing a number of functional groups. 1,3-Dienes are also good substrates, and their reactions give 2-vinylpyrrolidines. Radical clock mechanistic experiments are consistent with the presence of carbon radical intermediates and do not support participation of carbocations.
Absence of first-order unbinding transitions of fluid and polymerized membranes
NASA Technical Reports Server (NTRS)
Grotehans, Stefan; Lipowsky, Reinhard
1990-01-01
Unbinding transitions of fluid and polymerized membranes are studied by renormalization-group (RG) methods. Two different RG schemes are used and found to give rather consistent results. The fixed-point structure of both RGs exhibits complex behavior as a function of the decay exponent tau for the fluctuation-induced interaction of the membranes. For tau greater than tau(S2), interacting membranes can undergo first-order transitions even in the strong-fluctuation regime. The estimates obtained for tau(S2) imply, however, that both fluid and polymerized membranes unbind in a continuous way in the absence of lateral tension.
Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas
2014-07-01
Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
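As a minimal illustration of nonparametric probability estimation of the kind the paper embeds k-NN into, the k-NN class-probability estimate for an individual is the fraction of its k nearest neighbors belonging to each class. The toy data and helper below are hypothetical, not taken from the paper:

```python
from collections import Counter
import math

def knn_probabilities(train_x, train_y, query, k):
    """Estimate class probabilities as the class frequencies
    among the k nearest training points (Euclidean distance)."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train_x, train_y)
    )
    counts = Counter(y for _, y in dists[:k])
    return {cls: counts.get(cls, 0) / k for cls in set(train_y)}

# Toy two-class data: class 0 clusters near (0, 0), class 1 near (1, 1).
train_x = [(0.0, 0.1), (0.1, 0.0), (0.2, 0.2),
           (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
train_y = [0, 0, 0, 1, 1, 1]

probs = knn_probabilities(train_x, train_y, (0.95, 1.0), k=3)
print(probs)  # all three nearest neighbors are class 1
```

Consistency of this estimator (in the sense reviewed in the paper) requires k to grow with the sample size while k/n goes to zero.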
Aveiro method in reproducing kernel Hilbert spaces under complete dictionary
NASA Astrophysics Data System (ADS)
Mai, Weixiong; Qian, Tao
2017-12-01
The Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the need to determine uniqueness sets in the underlying RKHS. In general spaces, uniqueness sets are not easy to identify, let alone to analyze with respect to the convergence speed of the Aveiro Method. To avoid these difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. The new method in fact goes further: it is related to the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), involving completion of a given dictionary. The new method is called the Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element in the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.
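The matching-pursuit idea underlying the method can be sketched in finite dimensions: at each step, pick the dictionary atom most correlated with the current residual and subtract its projection. This toy version over plain vectors illustrates only the greedy selection, not the RKHS or dictionary-completion machinery:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, steps):
    """Greedy sparse approximation: repeatedly select the unit-norm
    atom most correlated with the residual and remove its projection."""
    residual = list(signal)
    coeffs = {}
    for _ in range(steps):
        best = max(range(len(atoms)),
                   key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[best])
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# Orthonormal toy dictionary: the signal is exactly 2*e0 + 1*e2.
atoms = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
coeffs, residual = matching_pursuit((2.0, 0.0, 1.0), atoms, steps=2)
print(coeffs)    # {0: 2.0, 2: 1.0}
print(residual)  # [0.0, 0.0, 0.0]
```

With an orthonormal dictionary the greedy choice recovers the exact expansion; the paper's contribution concerns dictionaries of reproducing kernels and their directional derivatives, where orthogonality fails.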
NASA Astrophysics Data System (ADS)
Zasova, L.; Formisano, V.; Grassi, D.; Igantiev, N.; Moroz, V.
Thermal IR spectrometry is one of the methods for investigating the Martian atmosphere below 55 km. The temperature profiles retrieved from the 15 μm CO2 band may be used for the MIRA database. This approach gives a vertical resolution of several kilometers and an accuracy of several Kelvins. The aerosol abundance, which influences the temperature profiles, is obtained from the continuum of the same spectrum and is taken into account in the temperature retrieval procedure in a self-consistent way. Although this method has limited vertical resolution, it possesses some advantages. For example, the radio occultation method gives temperature profiles with higher vertical resolution, but radio observations are sparse in space and local time. Direct measurements, which give the most accurate results, yield temperature profiles only for a few chosen points (landing sites). At present, thermal IR spectrometry is the only method that allows the temperature profiles to be monitored with good coverage in both space and local time. The first measurements of this kind were performed by IRIS, on board Mariner 9. This spectrometer had rather high spectral resolution (2.4 cm-1). Temperature profiles versus local time were retrieved for different latitudes and seasons, including dust storm conditions, the North polar night, and the Tharsis volcanoes. The obtained temperature profiles have been compared with temperature profiles for the same conditions taken from the Climate Data Base (European GCM). The Planetary Fourier Spectrometer onboard Mars Express (which is planned to be launched in 2003) has a spectral range of 1.2-45 μm and a spectral resolution of 1.5 cm-1. Temperature retrieval is one of the main scientific goals of the experiment. It opens the possibility of obtaining a series of temperature profiles for different conditions, which can later be used in producing MIRA.
ERIC Educational Resources Information Center
Wosnitza, Marold S.; Labitzke, Nina; Woods-McConney, Amanda; Karabenick, Stuart A.
2015-01-01
While extensive research on student help-seeking and teachers' help-giving behaviour in teacher-centred classroom and self-directed learning environments is available, little is known regarding teachers' beliefs and behaviour about help seeking or their role when students work in groups. This study investigated primary (elementary) school…
Does Generosity Beget Generosity? Alumni Giving and Undergraduate Financial Aid
ERIC Educational Resources Information Center
Meer, Jonathan; Rosen, Harvey S.
2012-01-01
We investigate how undergraduates' financial aid packages affect their subsequent donative behavior as alumni. We analyze micro data on alumni giving at an anonymous research university, and focus on three types of financial aid, scholarships, loans, and campus jobs. Consistent with the view of some professional fundraisers, we allow the receipt…
14 CFR 221.140 - Method of giving concurrence.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Aviation shall be used by a carrier to give authority to another carrier to issue and file with the... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Method of giving concurrence. 221.140 Section 221.140 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION...
Classification of burn wounds using support vector machines
NASA Astrophysics Data System (ADS)
Acha, Begona; Serrano, Carmen; Palencia, Sergio; Murillo, Juan Jose
2004-05-01
The purpose of this work is to improve a previous method developed by the authors for classifying burn wounds by depth. The inputs to the system are color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. Our previous work consisted of segmenting the burn wound from the rest of the image and classifying the burn by depth. In this paper we focus on the classification problem only. We previously proposed using a Fuzzy-ARTMAP neural network (NN); however, we may take advantage of newer, powerful classification tools such as Support Vector Machines (SVM). We apply the five-fold cross validation scheme to divide the database into training and validation sets. Then, we apply a feature selection method for each classifier, which gives us the set of features that yields the smallest classification error for that classifier. The features used for classification are first-order statistical parameters extracted from the L*, u* and v* color components of the image. The feature selection algorithms used are the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) methods. As the data in this problem are not linearly separable, the SVM was trained using several different kernels. The validation process shows that the SVM, when using a Gaussian kernel of variance 1, outperforms the other classifiers, yielding a classification error rate of 0.7%, whereas the Fuzzy-ARTMAP NN attained 1.6%.
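Sequential Forward Selection can be sketched generically: starting from the empty set, greedily add the feature whose inclusion most reduces a validation error, and stop when no addition helps. The error surface below is a hypothetical stand-in for training and validating the SVM or Fuzzy-ARTMAP classifier on each feature subset:

```python
def sequential_forward_selection(features, error_of):
    """Greedy SFS: add, one at a time, the feature giving the lowest
    validation error, until no addition improves on the current best."""
    selected, best_err = [], error_of(())
    while True:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        scored = [(error_of(tuple(selected + [f])), f) for f in candidates]
        err, f = min(scored)
        if err >= best_err:
            break
        selected.append(f)
        best_err = err
    return selected, best_err

# Hypothetical validation errors per feature subset (keys are sorted):
errors = {(): 0.5,
          ("mean_L",): 0.2, ("std_u",): 0.3, ("mean_v",): 0.45,
          ("mean_L", "std_u"): 0.07, ("mean_L", "mean_v"): 0.19,
          ("mean_L", "mean_v", "std_u"): 0.09}
err_fn = lambda subset: errors.get(tuple(sorted(subset)), 1.0)

result = sequential_forward_selection(["mean_L", "std_u", "mean_v"], err_fn)
print(result)  # (['mean_L', 'std_u'], 0.07)
```

SBS works the same way in reverse, starting from the full set and greedily removing features.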
Radiometric ages for basement rocks from the Emperor Seamounts, ODP Leg 197
NASA Astrophysics Data System (ADS)
Duncan, Robert A.; Keller, Randall A.
2004-08-01
The Hawaiian-Emperor seamount chain is the "type" example of an age-progressive, hot spot-generated intraplate volcanic lineament. However, our current knowledge of the age distribution within this province is based largely on radiometric ages determined several decades ago. Improvements in instrumentation, sample preparation methods, and new material obtained by recent drilling warrant a reexamination of the age relations among the older Hawaiian volcanoes. We report new age determinations (40Ar-39Ar incremental heating method) on whole rocks and feldspar separates from Detroit (Sites 1203 and 1204), Nintoku (Site 1205), and Koko (Site 1206) Seamounts (Ocean Drilling Program (ODP) Leg 197) and Meiji Seamount (Deep Sea Drilling Project (DSDP) Leg 19, Site 192). Plateaus in incremental heating age spectra for Site 1203 lava flows give a mean age of 75.8 ± 0.6 (2σ) Ma, which is consistent with the normal magnetic polarity directions observed and biostratigraphic age assignments. Site 1204 lavas produced discordant spectra, indicating Ar loss by reheating and K mobilization. Six plateau ages from lava flows at Site 1205 give a mean age of 55.6 ± 0.2 Ma, corresponding to Chron 24r. Drilling at Site 1206 intersected a N-R-N magnetic polarity sequence of lava flows, from which six plateau ages give a mean age of 49.1 ± 0.2 Ma, corresponding to the Chron 21n-22r-22n sequence. Plateau ages from two feldspar separates and one lava from DSDP Site 192 range from 34 to 41 Ma, significantly younger than the Cretaceous age of overlying sediments, which we relate to postcrystallization K mobilization. Combined with new dating results from Suiko Seamount (DSDP Site 433) and volcanoes near the prominent bend in the lineament [, 2002], the overall trend is increasing volcano age from south to north along the Emperor Seamounts, consistent with the hot spot model. 
However, there appear to be important departures from the earlier modeled simple linear age progression, which we relate to changes in Pacific plate motion and the rate of southward motion of the Hawaiian hot spot.
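Quoted mean ages with 2σ uncertainties are, in standard geochronological practice, inverse-variance weighted means of the individual plateau ages. A minimal sketch with hypothetical plateau ages (not the Leg 197 data):

```python
import math

def weighted_mean_age(ages, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
    sigma_mean = math.sqrt(1.0 / sum(weights))
    return mean, sigma_mean

# Hypothetical plateau ages (Ma) with 1-sigma analytical errors.
ages = [55.4, 55.7, 55.6, 55.8]
sigmas = [0.3, 0.2, 0.25, 0.2]
mean, sig = weighted_mean_age(ages, sigmas)
print(f"{mean:.2f} +/- {2 * sig:.2f} Ma (2 sigma)")
```

More precise determinations (smaller σ) dominate the mean, which is why a few high-quality plateaus can tighten the site age considerably.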
Job analysis and student assessment tool: perfusion education clinical preceptor.
Riley, Jeffrey B
2007-09-01
The perfusion education system centers on the cardiac surgery operating room and the perfusionist teacher who serves as a preceptor for the perfusion student. One method to improve the quality of perfusion education is to create a valid method for perfusion students to give feedback to clinical teachers. The preceptor job analysis consisted of a literature review and interviews with preceptors to list their critical tasks, critical incidents, and cognitive and behavioral competencies. Behaviorally anchored rating traits associated with the preceptors' tasks were identified. Students voted to validate the instrument items. The perfusion instructor rating instrument with a 0-4, "very weak" to "very strong" Likert rating scale was used. The five preceptor traits for student evaluation of clinical instruction (SECI) are as follows: The clinical instructor (1) encourages self-learning, (2) encourages clinical reasoning, (3) meets student's learning needs, (4) gives continuous feedback, and (5) represents a good role model. Scores from 430 student-preceptor relationships for 28 students rotating at 24 affiliate institutions with 134 clinical instructors were evaluated. The mean overall good preceptor average (GPA) was 3.45 +/- 0.76 and was skewed to the left, ranging from 0.0 to 4.0 (median = 3.8). Only 21 of the SECI relationships earned a GPA < 2.0. Analyzing the role of the clinical instructor and performing SECI are methods to provide valid information to improve the quality of a perfusion education program.
NASA Astrophysics Data System (ADS)
Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.
2017-10-01
We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
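The core variable-fixing step can be sketched independently of any particular sampler: collect low-energy samples from the multistart runs, then fix each variable whose value agrees across at least a given fraction of them, leaving a smaller residual problem. The thresholds and toy samples below are illustrative assumptions, not the paper's settings:

```python
def variables_to_fix(samples, energies, elite_frac=0.5, agree_frac=0.9):
    """From the lowest-energy fraction of samples, fix every binary
    variable whose value agrees in at least agree_frac of them."""
    order = sorted(range(len(samples)), key=lambda i: energies[i])
    n_elite = max(1, int(len(samples) * elite_frac))
    elite = [samples[i] for i in order[:n_elite]]
    fixed = {}
    for v in range(len(elite[0])):
        ones = sum(s[v] for s in elite)
        if ones >= agree_frac * len(elite):
            fixed[v] = 1
        elif len(elite) - ones >= agree_frac * len(elite):
            fixed[v] = 0
    return fixed  # unfixed variables form the smaller, less connected subproblem

# Four samples over 3 binary variables; in the low-energy half,
# variable 0 is consistently 1 and variable 2 consistently 0.
samples = [[1, 0, 0], [1, 1, 0], [0, 1, 1], [1, 0, 1]]
energies = [-3.0, -2.5, 0.5, 1.0]
fixed = variables_to_fix(samples, energies)
print(fixed)  # {0: 1, 2: 0}
```

In the full algorithm this fix-and-resolve cycle is iterated, and each start is independent, which is what makes the method trivially parallelizable.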
The wind of the M-type AGB star RT Virginis probed by VLTI/MIDI
NASA Astrophysics Data System (ADS)
Sacuto, S.; Ramstedt, S.; Höfner, S.; Olofsson, H.; Bladh, S.; Eriksson, K.; Aringer, B.; Klotz, D.; Maercker, M.
2013-03-01
Aims: We study the circumstellar environment of the M-type AGB star RT Vir using mid-infrared high spatial resolution observations from the ESO-VLTI focal instrument MIDI. The aim of this study is to provide observational constraints on the theoretical prediction that the winds of M-type AGB objects can be driven by photon scattering on iron-free silicate grains located in the close environment (about 2 to 3 stellar radii) of the star. Methods: We interpreted spectro-interferometric data, first using wavelength-dependent geometric models. We then used a self-consistent dynamic model atmosphere containing a time-dependent description of grain growth for pure forsterite dust particles to reproduce the photometric, spectrometric, and interferometric measurements of RT Vir. Since the hydrodynamic computation needs stellar parameters as input, a considerable effort was first made to determine these parameters. Results: MIDI differential phases reveal the presence of an asymmetry in the stellar vicinity. Results from the geometrical modeling give us clues to the presence of aluminum and silicate dust in the close circumstellar environment (<5 stellar radii). Comparison between spectro-interferometric data and a self-consistent dust-driven wind model reveals that silicate dust has to be present in the region between 2 to 3 stellar radii to reproduce the 59 and 63 m baseline visibility measurements around 9.8 μm. This gives additional observational evidence in favor of winds driven by photon scattering on iron-free silicate grains located in the close vicinity of an M-type star. However, other sources of opacity are clearly missing to reproduce the 10-13 μm visibility measurements for all baselines. Conclusions: This study is a first attempt to understand the wind mechanism of M-type AGB stars by comparing photometric, spectrometric, and interferometric measurements with state-of-the-art, self-consistent dust-driven wind models.
The agreement of the dynamic model atmosphere with interferometric measurements in the 8-10 μm spectral region gives additional observational evidence that the winds of M-type stars can be driven by photon scattering on iron-free silicate grains. Finally, a larger statistical study and progress in advanced self-consistent 3D modeling are still required to solve the remaining problems. Based on observations made with the Very Large Telescope Interferometer at Paranal Observatory under programs 083.D-0234 and 086.D-0737 (Open Time Observations).
NASA Astrophysics Data System (ADS)
Tsogbayar, Tsednee; Yeager, Danny L.
2017-01-01
We further apply the complex scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) for the theoretical determination of resonance parameters with electron-atom systems including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP) developed and implemented by Yeager and his coworkers for real space gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex scaled multiconfigurational self-consistent field (CMCSCF) state as an initial state along with a dilated Hamiltonian where all of the electronic coordinates are scaled by a complex factor. CMCSTEP is designed for determining resonances. We apply CMCSTEP to get the lowest 2P (Be-, Mg-) and 2D (Mg-, Ca-) shape resonances using several different basis sets each with several complete active spaces. Many of these basis sets we employ have been used by others with different methods. Hence, we can directly compare results with different methods but using the same basis sets.
A KPI-based process monitoring and fault detection framework for large-scale processes.
Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Yang, Xu; Ding, Steven X; Peng, Kaixiang
2017-05-01
Large-scale processes, consisting of multiple interconnected subprocesses, are commonly encountered in industrial systems, whose performance needs to be determined. A common approach to this problem is to use a key performance indicator (KPI)-based approach. However, the different KPI-based approaches are not developed with a coherent and consistent framework. Thus, this paper proposes a framework for KPI-based process monitoring and fault detection (PM-FD) for large-scale industrial processes, which considers the static and dynamic relationships between process and KPI variables. For the static case, a least squares-based approach is developed that provides an explicit link with least-squares regression, which gives better performance than partial least squares. For the dynamic case, using the kernel representation of each subprocess, an instrument variable is used to reduce the dynamic case to the static case. This framework is applied to the TE benchmark process and the hot strip mill rolling process. The results show that the proposed method can detect faults better than previous methods. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
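The static-case least-squares link can be sketched with a scalar process variable x predicting a KPI y: fit θ by least squares on normal-operation data, then flag a fault when the prediction residual exceeds a threshold. All numbers below are illustrative, not from the TE benchmark or hot strip mill case studies:

```python
def fit_least_squares(xs, ys):
    """Least-squares slope for the no-intercept model y = theta * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def residual_alarm(theta, x, y, threshold):
    """Flag a fault when the KPI residual |y - theta*x| is too large."""
    return abs(y - theta * x) > threshold

# Training data from hypothetical normal operation: y is roughly 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
theta = fit_least_squares(xs, ys)

a_normal = residual_alarm(theta, 2.5, 5.1, threshold=0.5)
a_fault = residual_alarm(theta, 2.5, 7.5, threshold=0.5)
print(a_normal, a_fault)  # False True
```

The framework's dynamic case reduces to this static picture by working on the kernel representation of each subprocess.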
The Preliminary Results of GMSTech: A Software Development for Microseismic Characterization
NASA Astrophysics Data System (ADS)
Rohaman, Maman; Suhendi, Cahli; Verdhora Ry, Rexha; Sugiartono Prabowo, Billy; Widiyantoro, Sri; Nugraha, Andri Dian; Yudistira, Tedi; Mujihardi, Bambang
2017-04-01
The processing of microseismic data requires reliable software for imaging subsurface conditions related to the occurring microseismicity. In general, the currently available software is specific to a particular processing module and developed by different developers. Software with integrated processing modules adds value because users can work more easily and quickly. We developed GMSTech (Ganesha Microseismic Technology), stand-alone software written in C# consisting of several modules for the processing of microseismic data. Its function is to solve a non-linear inverse problem and image the subsurface. The C# library is supported by ILNumerics to reduce computation time and give good visualization. In this preliminary result, we present three developed modules: (1) hypocenter determination, (2) moment magnitude calculation, and (3) 3D seismic tomography. In the first module, we provide four methods for locating microseismic events that can be chosen by the user independently: the simulated annealing method, the guided grid-search method, Geiger's method, and joint hypocenter determination (JHD). The second module can be used to calculate moment magnitude using the Brune method and to estimate the released energy of the event. Finally, we also provide a module for 3-D seismic tomography that images the velocity structure based on delay-time tomography. We demonstrated the software using both synthetic data and real data from a geothermal field in Indonesia. The results for all modules are reliable, as reviewed statistically by RMS error. We will continue examining the software using other data sets and developing further processing modules.
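The grid-search idea for hypocenter determination can be sketched in its simplest un-guided form: scan candidate source locations and keep the one minimizing the RMS travel-time residual under a homogeneous velocity model. The geometry and velocity below are made up for illustration and far simpler than a real geothermal setting:

```python
import math

def rms_residual(src, stations, obs_times, v):
    """RMS misfit between observed travel times and straight-ray
    predictions from a candidate source in a homogeneous medium."""
    res = [t - math.dist(src, s) / v for s, t in zip(stations, obs_times)]
    return math.sqrt(sum(r * r for r in res) / len(res))

def grid_search(stations, obs_times, v, grid):
    """Exhaustive scan: return the grid node with the smallest misfit."""
    return min(grid, key=lambda src: rms_residual(src, stations, obs_times, v))

# Synthetic test: true source at (2, 2, 1) km, v = 3 km/s.
stations = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (4, 4, 0)]
true_src = (2.0, 2.0, 1.0)
v = 3.0
obs = [math.dist(true_src, s) / v for s in stations]
grid = [(x, y, z) for x in range(5) for y in range(5) for z in range(3)]

loc = grid_search(stations, obs, v, grid)
print(loc)  # (2, 2, 1)
```

A "guided" variant refines the grid around the current minimum instead of scanning exhaustively, trading completeness for speed.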
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp; Aoki, Yuriko; Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012
2016-07-14
An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer, and the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear scaling efficiency while maintaining high computational accuracy for a random sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, and it can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. From test optimizations for the first order hyper-polarizability (β) in DNA, a substantial difference was observed depending on optimization conditions between "choose-maximum" (choose a base pair giving the maximum β for each step) and "choose-minimum" (choose a base pair giving the minimum β). In contrast, there was an ambiguous difference between these conditions for optimizing the second order hyper-polarizability (γ) because of the small absolute value of γ and the limitation of numerical differential calculations in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer aided material design method as long as the numerical limitation of the FF method is taken into account.
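The finite-field (FF) step itself is numerical differentiation of the energy with respect to an applied field; the numerical limitation noted for γ arises because higher-order differences amplify round-off error. A sketch on a model energy expansion (not the ELG-FF program), extracting the polarizability α from a central second difference:

```python
def polarizability(energy, f):
    """Central-difference estimate of alpha = -d2E/dF2 at F = 0."""
    return -(energy(f) - 2.0 * energy(0.0) + energy(-f)) / f**2

# Model expansion E(F) = E0 - mu*F - (1/2)*alpha*F^2 - (1/6)*beta*F^3,
# with made-up coefficients (atomic units assumed).
E0, mu, alpha, beta = -100.0, 0.5, 12.0, 40.0
def energy(F):
    return E0 - mu * F - 0.5 * alpha * F**2 - beta * F**3 / 6.0

alpha_est = polarizability(energy, 0.001)
print(alpha_est)  # close to 12.0
```

The odd terms (μ and β) cancel exactly in the symmetric difference; extracting β or γ requires third and fourth differences, where the finite-precision noise the abstract mentions becomes dominant.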
Comparison of air space measurement imaged by CT, small-animal CT, and hyperpolarized Xe MRI
NASA Astrophysics Data System (ADS)
Madani, Aniseh; White, Steven; Santyr, Giles; Cunningham, Ian
2005-04-01
Lung disease is the third leading cause of death in the western world. Lung air volume measurements are thought to be early indicators of lung disease and markers in pharmaceutical research. The purpose of this work is to develop a lung phantom for assessing and comparing the quantitative accuracy of hyperpolarized xenon-129 magnetic resonance imaging (HP 129Xe MRI), conventional computed tomography (HRCT), and high-resolution small-animal CT (μCT) in measuring lung gas volumes. We developed a lung phantom consisting of solid cellulose acetate spheres (1, 2, 3, 4 and 5 mm diameter) uniformly packed in circulated air or HP 129Xe gas. Air volume is estimated using a simple thresholding algorithm. Truth is calculated from the sphere diameters and validated using μCT. While this phantom is not anthropomorphic, it enables us to directly measure air space volume and compare these imaging methods as a function of sphere diameter for the first time. HP 129Xe MRI requires partial-volume analysis to distinguish regions with and without 129Xe gas; results are within 5% of truth, but settling of the heavy 129Xe gas complicates this analysis. Conventional CT demonstrated partial-volume artifacts for the 1 mm spheres. μCT gives the most accurate air-volume results. Conventional CT and HP 129Xe MRI give similar results, although non-uniform densities of 129Xe require more sophisticated algorithms than simple thresholding. The threshold required to give the true air volume in both HRCT and μCT varies with sphere diameter, calling into question the validity of the thresholding method.
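The two quantities being compared are simple to state: the ground-truth air volume is the container volume minus the total sphere volume, while the imaging estimate is a voxel count above an intensity threshold. A toy version with made-up numbers (not the phantom's actual dimensions):

```python
import math

def true_air_volume(container_volume, diameters):
    """Air volume = container volume minus the packed spheres' volume."""
    spheres = sum(4.0 / 3.0 * math.pi * (d / 2.0)**3 for d in diameters)
    return container_volume - spheres

def thresholded_air_volume(voxels, voxel_volume, threshold):
    """Count voxels whose intensity exceeds the threshold (gas signal)."""
    return sum(1 for v in voxels if v > threshold) * voxel_volume

# Hypothetical 1000 mm^3 container holding four 5 mm spheres.
air = true_air_volume(1000.0, [5.0] * 4)
print(round(air, 1))

# Hypothetical voxel intensities; two exceed the threshold.
vol = thresholded_air_volume([0.1, 0.9, 0.8, 0.05],
                             voxel_volume=2.0, threshold=0.5)
print(vol)  # 4.0
```

The abstract's key finding is that the threshold reproducing the true volume shifts with sphere diameter, i.e. a single fixed threshold cannot be accurate across structure sizes.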
Self-consistent-field study of conduction through conjugated molecules
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Stafström, Sven
2001-07-01
Current-voltage (I-V) characteristics of individual molecules connected by metallic leads are studied theoretically. Using the Pariser-Parr-Pople quantum chemical method to model the molecule enables us to include electron-electron interactions in the Hartree approximation. The self-consistent-field method is used to calculate charging together with other properties for the total system under bias. Thereafter the Landauer formula is used to calculate the current from the transmission amplitudes. The most important parameter for understanding charging is the position of the chemical potentials of the leads in relation to the molecular levels. At finite bias, the main part of the potential drop is located at the molecule-lead junctions. Also, the potential of the molecule is shown to partially follow the chemical potential closest to the highest occupied molecular orbital (HOMO). Therefore, the resonant tunneling steps in the I-V curves are smoothed, giving an I-V curve resembling a ``Coulomb gap.'' However, the charge of the molecule is not quantized, since the molecule is small with quite strong interactions with the leads. The calculations predict an increase in the current at the bias corresponding to the energy gap of the molecule irrespective of the metals used in the leads. When the bias is increased further, charge is redistributed from the HOMO level to the lowest unoccupied molecular orbital of the molecule. This gives a step in the I-V curves and a corresponding change in the potential profile over the molecule. Calculations were mainly performed on polyene molecules. Molecules asymmetrically coupled to the leads model the I-V curves for molecules contacted by a scanning tunneling microscopy tip. I-V curves for pentapyrrole and another molecule that shows negative differential conductance are also analyzed. The charging of these two systems depends on the shape of the molecular wave functions.
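The final computational step described, obtaining the current from transmission amplitudes via the Landauer formula, can be sketched numerically as I = (2e/h) ∫ T(E) [f_L(E) − f_R(E)] dE with Fermi functions at the two lead chemical potentials. The flat transmission and parameters below are toy stand-ins, not the paper's self-consistent T(E):

```python
import math

def fermi(e, mu, kT=0.025):
    """Fermi-Dirac occupation at energy e for chemical potential mu (eV)."""
    return 1.0 / (1.0 + math.exp((e - mu) / kT))

def landauer_current(transmission, mu_left, mu_right, emin, emax, n=2000):
    """Trapezoidal integration of T(E)*(f_L - f_R); the prefactor 2e/h
    is omitted, so the result is in units of (2e/h) * energy."""
    de = (emax - emin) / n
    total = 0.0
    for i in range(n + 1):
        e = emin + i * de
        w = 0.5 if i in (0, n) else 1.0
        total += w * transmission(e) * (fermi(e, mu_left) - fermi(e, mu_right))
    return total * de

flat = lambda e: 1.0  # energy-independent toy transmission

i_zero = landauer_current(flat, 0.0, 0.0, -1.0, 1.0)
i_fwd = landauer_current(flat, 0.2, -0.2, -1.0, 1.0)
print(i_zero, i_fwd)  # zero bias gives 0; forward bias gives ~0.4
```

For a flat transmission the integral reduces to the bias window μL − μR; steps in real I-V curves appear when molecular resonances enter that window.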
Beadwork for Children/Weegwahs: An Ojibwe Story and Activities Using Birch Bark.
ERIC Educational Resources Information Center
Minneapolis Public Schools, MN.
Two resource units give elementary students an understanding of American Indian arts and crafts. The first unit consists of seven beading activities for early elementary students using beads the teacher and/or students make themselves. The unit gives a short history of American Indian beadwork, describes the materials and designs used by Plains…
Toward a Safer and Cleaner Way: Dealing With Human Waste in Healthcare.
Apple, Michael
2016-07-01
Organizations must evaluate their infection control plans in a holistic and inclusive manner to continue reducing healthcare-associated infection (HAI) rates, including giving consideration to the manner of collecting and disposing of patient waste. Manual washing of bedpans and other containers poses a risk of spreading infection via caregivers, the environment, and the still-contaminated bedpan. Several alternative disposal methods are available and have been tested in some countries for decades, including options such as bedpan washer-disinfector machines, macerator machines, and disposable bedpans. This article reviews methods and issues related to human waste disposal in healthcare settings. Healthcare organizations must evaluate the options thoroughly and then consistently implement the option most in line with its goals and culture. © The Author(s) 2016.
Solution of the Burnett equations for hypersonic flows near the continuum limit
NASA Technical Reports Server (NTRS)
Imlay, Scott T.
1992-01-01
The INCA code, a three-dimensional Navier-Stokes code for analysis of hypersonic flowfields, was modified to analyze the lower reaches of the continuum transition regime, where the Navier-Stokes equations become inaccurate and Monte Carlo methods become too computationally expensive. The two-dimensional Burnett equations and the three-dimensional rotational energy transport equation were added to the code, and one- and two-dimensional calculations were performed. For the structure of normal shock waves, the Burnett equations give consistently better results than the Navier-Stokes equations and compare reasonably well with Monte Carlo methods. For two-dimensional flow of nitrogen past a circular cylinder, the Burnett equations predict the total drag reasonably well. Care must be taken, however, not to exceed the range of validity of the Burnett equations.
Formal Consistency Verification of Deliberative Agents with Respect to Communication Protocols
NASA Technical Reports Server (NTRS)
Ramirez, Jaime; deAntonio, Angelica
2004-01-01
The aim of this paper is to show a method that is able to detect inconsistencies in the reasoning carried out by a deliberative agent. The agent is supposed to be provided with a hybrid Knowledge Base expressed in a language called CCR-2, based on production rules and hierarchies of frames, which permits the representation of non-monotonic reasoning, uncertain reasoning and arithmetic constraints in the rules. The method can give a specification of the scenarios in which the agent would deduce an inconsistency. We define a scenario to be a description of the initial agent's state (in the agent life cycle), a deductive tree of rule firings, and a partially ordered set of messages and/or stimuli that the agent must receive from other agents and/or the environment. Moreover, the method will make sure that the scenarios are valid w.r.t. the communication protocols in which the agent is involved.
Esculin hydrolysis by Gram positive bacteria. A rapid test and its comparison with other methods.
Qadri, S M; Smith, J C; Zubairi, S; DeSilva, M I
1981-01-01
A number of bacteria hydrolyze esculin enzymatically to esculetin. This characteristic is used by taxonomists and clinical microbiologists in the differentiation and identification of bacteria, especially to distinguish Lancefield group D streptococci from non-group D organisms and Listeria monocytogenes from the morphologically similar Erysipelothrix rhusiopathiae and diphtheroids. Conventional methods used for esculin hydrolysis require 4-48 h for completion. We developed and evaluated a medium which gives positive results more rapidly. The 2,330 isolates used in this study consisted of 1,680 esculin-positive and 650 esculin-negative organisms. The sensitivity and specificity of this method were compared with the PathoTec esculin hydrolysis strip and the procedure of Vaughn and Levine (VL). Of the 1,680 esculin-positive organisms, 97% gave positive reactions within 30 minutes with the rapid test, whereas PathoTec required 3-4 h of incubation for the same number of organisms to yield a positive reaction.
Experimental research on a modular miniaturization nanoindentation device
NASA Astrophysics Data System (ADS)
Huang, Hu; Zhao, Hongwei; Mi, Jie; Yang, Jie; Wan, Shunguang; Yang, Zhaojun; Yan, Jiwang; Ma, Zhichao; Geng, Chunyang
2011-09-01
Nanoindentation technology is developing toward in situ testing, which requires miniaturization of indentation instruments. This paper presents a miniaturized nanoindentation device based on a modular design. It mainly consists of a macro-adjusting mechanism, an x-y precise positioning platform, a z-axis precise driving unit, and a load-depth measuring unit. The device can be assembled in different forms and has minimum dimensions of 200 mm × 135 mm × 200 mm. The load resolution is about 0.1 mN and the displacement resolution is about 10 nm. A new calibration method named the reference-mapping method is proposed to calibrate the developed device. Output performance tests and indentation experiments indicate the feasibility of the developed device and calibration method. This paper gives an example of combining piezoelectric actuators with a flexure hinge to realize nanoindentation tests. By integrating a smaller displacement sensor, a more compact nanoindentation device can be designed in the future.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
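The IVE described above is built from at-site data alone. As a rough illustration of the interpolation idea (a hypothetical simplification, not the published IVE algorithm), one can score each measurement vertical against the value interpolated from its neighbors and pool the residuals into a variance:

```python
import numpy as np

def interpolated_variance_estimate(x, q):
    """Illustrative sketch of an interpolated-variance idea (a hypothetical
    simplification, NOT the published IVE algorithm): compare each
    vertical's unit discharge q[i] with the value linearly interpolated
    from its two neighbors, and pool the residuals into a variance."""
    x, q = np.asarray(x, float), np.asarray(q, float)
    resid = []
    for i in range(1, len(q) - 1):
        # linear interpolation from the neighboring verticals
        t = (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1])
        q_interp = (1 - t) * q[i - 1] + t * q[i + 1]
        resid.append(q[i] - q_interp)
    return float(np.var(resid, ddof=1))

# a smooth (here constant) profile has zero interpolated variance
x = np.linspace(0, 10, 21)
q = np.full_like(x, 5.0)
print(interpolated_variance_estimate(x, q))  # 0.0 for noise-free data
```

Noisy field data would give a nonzero variance that reflects the random scatter actually encountered at the site, which is the point of the estimator.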
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently been highlighted due to the rapid development of hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models differ from each other, and this difference may give non-identical health monitoring results when applications monitor physiological information using the embedded smartphone cameras. In this paper, we investigate the differences in color properties of the captured images from different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images provide much smaller color intensity errors compared to the images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
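A common way to implement this kind of cross-device color correction (a generic least-squares sketch; the paper's exact method is not reproduced here) is to fit an affine matrix mapping one camera's RGB values onto reference values:

```python
import numpy as np

def fit_color_correction(src_rgb, ref_rgb):
    """Fit a 3x4 affine color-correction matrix M (a generic least-squares
    sketch, not the paper's specific method) that maps source-camera RGB
    values to reference RGB values: ref ~= M @ [r, g, b, 1]."""
    src = np.hstack([np.asarray(src_rgb, float),
                     np.ones((len(src_rgb), 1))])   # append affine term
    ref = np.asarray(ref_rgb, float)
    M, *_ = np.linalg.lstsq(src, ref, rcond=None)   # shape (4, 3)
    return M.T                                      # shape (3, 4)

def apply_color_correction(M, rgb):
    """Apply the fitted affine correction to an (N, 3) array of RGB values."""
    rgb = np.hstack([np.asarray(rgb, float), np.ones((len(rgb), 1))])
    return rgb @ M.T
```

In practice src_rgb/ref_rgb would be patch values from a color chart photographed by the two cameras; here they are just assumed arrays.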
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by a typical CCD/CMOS sensor. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm gives sharp images while reducing ringing and crisping artifacts over a wider frequency region. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
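Once a circle-of-confusion radius has been estimated, restoration amounts to inverse filtering with the corresponding disc PSF. The sketch below uses a generic regularized (Wiener-style) inverse filter as a stand-in for the paper's new filtering technique, which it does not reproduce:

```python
import numpy as np

def disc_psf(radius, size):
    """Uniform disc PSF approximating out-of-focus blur (circle of confusion)."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = ((x**2 + y**2) <= radius**2).astype(float)
    return psf / psf.sum()

def wiener_restore(blurred, psf, k=1e-2):
    """Frequency-domain regularized inverse (Wiener-style) filter; a generic
    restoration sketch, not the paper's specific algorithm.  `psf` is
    assumed to have the same shape as `blurred`, centered in the array."""
    H = np.fft.fft2(np.fft.ifftshift(psf))          # move PSF center to origin
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H)**2 + k) * G         # regularized inverse
    return np.real(np.fft.ifft2(F))
```

The constant k trades sharpness against noise amplification at frequencies where the disc transfer function is near zero; choosing it from the estimated noise level is the usual refinement.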
NASA Astrophysics Data System (ADS)
Megan Gillies, D.; Knudsen, D.; Donovan, E.; Jackel, B.; Gillies, R.; Spanswick, E.
2017-08-01
We present a comprehensive survey of 630 nm (red-line) emission discrete auroral arcs using the newly deployed Redline Emission Geospace Observatory. In this study we discuss the need for observations of 630 nm aurora and issues with the large-altitude range of the red-line aurora. We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the location of 10 red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) and find that a characteristic emission height of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of all-sky imagers (ASIs), and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar-Canadian radar, both of which are consistent with a characteristic emission height of 200 km.
Islam, M T; Trevorah, R M; Appadoo, D R T; Best, S P; Chantler, C T
2017-04-15
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr² analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal, and for quantifying the quality of the data irrespective of preprocessing for subsequent hypothesis testing, are applied to the FTIR spectra of ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimal solution of full fuzzy transportation problems using total integral ranking
NASA Astrophysics Data System (ADS)
Sam’an, M.; Farikhin; Hariyanto, S.; Surarso, B.
2018-03-01
Full fuzzy transportation problem (FFTP) is a transportation problem in which transport costs, demand, supply and decision variables are expressed as fuzzy numbers. To solve a fuzzy transportation problem, the fuzzy parameters must be converted to crisp numbers by a defuzzification method. In this paper, a new total integral ranking method, based on the conversion of trapezoidal fuzzy numbers to hexagonal fuzzy numbers, is shown to give consistent defuzzification results for symmetric hexagonal and non-symmetric type-2 fuzzy numbers as well as for triangular fuzzy numbers. The optimum solution of the FFTP is then calculated using a fuzzy transportation algorithm with the least-cost method. From this optimum solution, it is found that using the total integral ranking with different indices of optimism gives different optimum values. In addition, the total integral ranking using hexagonal fuzzy numbers yields a better optimal value than the total integral ranking using trapezoidal fuzzy numbers.
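For reference, the total integral ranking of Liou and Wang for a trapezoidal fuzzy number (a, b, c, d) combines left and right integral values through an index of optimism α; the sketch below shows this generic trapezoidal form (the paper's hexagonal variant is not reproduced):

```python
def total_integral_ranking(a, b, c, d, alpha=0.5):
    """Liou-Wang style total integral value of a trapezoidal fuzzy number
    (a, b, c, d); alpha in [0, 1] is the index of optimism.  This is a
    generic illustration of the ranking idea, not the paper's hexagonal
    variant."""
    left = (a + b) / 2.0    # left integral value (pessimistic side)
    right = (c + d) / 2.0   # right integral value (optimistic side)
    return alpha * right + (1 - alpha) * left

# different optimism indices give different crisp values for the same number,
# which is the effect noted in the abstract
print(total_integral_ranking(1, 2, 3, 4, alpha=0.0))  # 1.5
print(total_integral_ranking(1, 2, 3, 4, alpha=1.0))  # 3.5
```

Ranking every fuzzy cost and supply/demand value this way reduces the FFTP to a crisp transportation problem solvable by the least-cost method.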
Giving Leads to Happiness in Young Children
Aknin, Lara B.; Hamlin, J. Kiley; Dunn, Elizabeth W.
2012-01-01
Evolutionary models of cooperation require proximate mechanisms that sustain prosociality despite inherent costs to individuals. The “warm glow” that often follows prosocial acts could provide one such mechanism; if so, these emotional benefits may be observable very early in development. Consistent with this hypothesis, the present study finds that before the age of two, toddlers exhibit greater happiness when giving treats to others than receiving treats themselves. Further, children are happier after engaging in costly giving – forfeiting their own resources – than when giving the same treat at no cost. By documenting the emotionally rewarding properties of costly prosocial behavior among toddlers, this research provides initial support for the claim that experiencing positive emotions when giving to others is a proximate mechanism for human cooperation. PMID:22720078
Self-consistent expansion for the molecular beam epitaxy equation
NASA Astrophysics Data System (ADS)
Katzav, Eytan
2002-03-01
Motivated by a controversy over the correct results derived from the dynamic renormalization group (DRG) analysis of the nonlinear molecular beam epitaxy (MBE) equation, a self-consistent expansion for the nonlinear MBE theory is considered. The scaling exponents are obtained for spatially correlated noise of the general form D(r - r', t - t') = 2D_0 |r - r'|^{2ρ-d} δ(t - t'). I find a lower critical dimension d_c(ρ) = 4 + 2ρ, above which the linear MBE solution appears. Below the lower critical dimension a ρ-dependent strong-coupling solution is found. These results help to resolve the controversy over the correct exponents that describe nonlinear MBE, using a reliable method that proved itself in the past by giving reasonable results for the strong-coupling regime of the Kardar-Parisi-Zhang system (for d > 1), where DRG failed to do so.
Exploration of charity toward busking (street performance) as a function of religion.
Lemay, John O; Bates, Larry W
2013-04-01
To examine conceptions of religion and charity in a new venue--busking (street performance)--103 undergraduate students at a regional university in the southeastern U.S. completed a battery of surveys regarding religion, and attitudes and behaviors toward busking. For those 85 participants who had previously encountered a busker, stepwise regression was used to predict increased frequency of giving to buskers. The best predictive model of giving to buskers consisted of three variables including less experienced irritation toward buskers, prior experience with giving to the homeless, and lower religious fundamentalism.
Yue, Chao-Yan; Ying, Chun-Mei
2017-01-01
To explore whether the modified enzyme-linked immunosorbent assay (ELISA) increases or decreases AMH results, and to investigate the effect of storage time and temperature on AMH measurements with and without premixing samples with assay buffer, using the Kangrun ELISA method. Serum AMH concentrations were measured by ELISA; the consistency between the two kits, and the comparability between the original and the modified assay under different storage conditions, were analyzed by Passing-Bablok regression analysis and Bland-Altman bias evaluation. There was a strong consistency between AMH concentrations measured with the Kangrun ELISA and the Ansh Labs ultra-sensitive AMH ELISA. Premixing serum specimens with assay buffer gave results consistent with the original assay. The modified protocol can reduce the increase caused by sample aging and gives the most consistent results regardless of storage conditions. The premixing protocol did not influence the results of fresh serum or frozen serum incubated <3 days at 4°C and -80°C, but specimens detected after collection and stored under other conditions should be premixed with assay buffer to ensure accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
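The Bland-Altman bias evaluation mentioned above reduces to the bias (mean difference) and limits of agreement between two measurement methods; a minimal sketch, assuming the standard 95% limits at bias ± 1.96·SD:

```python
import numpy as np

def bland_altman(m1, m2):
    """Bland-Altman bias and 95% limits of agreement between paired
    measurements from two methods (a generic sketch of the comparison
    described, not the paper's full analysis)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    diff = m1 - m2
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A near-zero bias with narrow limits of agreement is what "consistent results" between the original and modified assays would look like in this framework.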
Modeling Excited States in TiO2 Nanoparticles: On the Accuracy of a TD-DFT Based Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berardo, Enrico; Hu, Hanshi; Shevlin, S. A.
2014-03-11
We have investigated the suitability of Time-Dependent Density Functional Theory (TD-DFT) to describe vertical low-energy excitations in naked and hydrated titanium dioxide nanoparticles through a comparison with results from Equation-of-Motion Coupled Cluster (EOM-CC) quantum chemistry methods. We demonstrate that for most TiO2 nanoparticles TD-DFT calculations with commonly used exchange-correlation (XC-)potentials (e.g. B3LYP) and EOM-CC methods give qualitatively similar results. Importantly, however, we also show that for an important subset of structures, TD-DFT gives qualitatively different results depending upon the XC-potential used, and that in this case only TD-CAM-B3LYP and TD-BHLYP calculations yield results that are consistent with those obtained using EOM-CC theory. Moreover, we demonstrate that the discrepancies for such structures arise from a particular combination of defects, for which the excitations are charge-transfer excitations and hence are poorly described by XC-potentials that contain no or low fractions of Hartree-Fock-like exchange. Finally, we discuss that such defects are readily healed in the presence of ubiquitously present water, and that as a result the description of vertical low-energy excitations for hydrated TiO2 nanoparticles is non-problematic.
Sum and mean. Standard programs for activation analysis.
Lindstrom, R M
1994-01-01
Two computer programs in use for over a decade in the Nuclear Methods Group at NIST illustrate the utility of standard software: programs widely available and widely used, in which (ideally) well-tested public algorithms produce results that are well understood, and thereby capable of comparison, within the community of users. Sum interactively computes the position, net area, and uncertainty of the area of spectral peaks, and can give better results than automatic peak search programs when peaks are very small, very large, or unusually shaped. Mean combines unequal measurements of a single quantity, tests for consistency, and obtains the weighted mean and six measures of its uncertainty.
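In the spirit of the Mean program described (a sketch only; the actual program reports six measures of uncertainty), an inverse-variance weighted mean with a chi-square consistency test can be written as:

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of unequal measurements of one
    quantity, with a chi-square consistency statistic.  A minimal sketch
    in the spirit of the Mean program described, not its actual code."""
    w = [1.0 / s**2 for s in sigmas]               # inverse-variance weights
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    internal_unc = math.sqrt(1.0 / sum(w))         # from the quoted sigmas
    chi2 = sum(wi * (v - mean)**2 for wi, v in zip(w, values))
    dof = len(values) - 1
    return mean, internal_unc, chi2, dof

mean, unc, chi2, dof = weighted_mean([10.0, 10.2, 9.9], [0.1, 0.2, 0.1])
# chi2/dof near 1 indicates the measurements are mutually consistent
```

When chi2/dof is much larger than 1 the quoted uncertainties are suspect, and a program like Mean would report an inflated (external) uncertainty as well.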
Coarse-grained protein-protein stiffnesses and dynamics from all-atom simulations
NASA Astrophysics Data System (ADS)
Hicks, Stephen D.; Henley, C. L.
2010-03-01
Large protein assemblies, such as virus capsids, may be coarse-grained as a set of rigid units linked by generalized (rotational and stretching) harmonic springs. We present an ab initio method to obtain the elastic parameters and overdamped dynamics for these springs from all-atom molecular-dynamics simulations of one pair of units at a time. The computed relaxation times of this pair give a consistency check for the simulation, and we can also find the corrective force needed to null systematic drifts. As a first application we predict the stiffness of an HIV capsid layer and the relaxation time for its breathing mode.
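One standard way to obtain such an elastic parameter from all-atom simulations (an assumed simplification of the authors' ab initio procedure, quoted here only as the equipartition idea) is to invert the equipartition theorem on the sampled fluctuations of a generalized coordinate:

```python
import numpy as np

def spring_constant_from_fluctuations(displacements, kBT=1.0):
    """Estimate an effective harmonic spring constant from the variance of
    an inter-unit coordinate sampled in MD, via equipartition:
    k = kBT / Var(x).  A generic sketch of extracting coarse-grained
    stiffnesses from all-atom fluctuations, not the paper's full method."""
    var = np.var(np.asarray(displacements, float), ddof=1)
    return kBT / var
```

The relaxation time of the same coordinate's autocorrelation would supply the corresponding friction for the overdamped dynamics the abstract mentions.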
NASA Astrophysics Data System (ADS)
Maggioni, G.; Carturan, S.; Raniero, W.; Riccetto, S.; Sgarbossa, F.; Boldrini, V.; Milazzo, R.; Napoli, D. R.; Scarpa, D.; Andrighetto, A.; Napolitani, E.; De Salvador, D.
2018-03-01
A new method for the formation of hole-barrier contacts in high purity germanium (HPGe) is described, which consists in the sputter deposition of an Sb film on HPGe, followed by Sb diffusion produced through laser annealing of the Ge surface in the melting regime. This process gives rise to a very thin (≤100 nm) n-doped layer, as determined by SIMS measurements, while preserving the defect-free morphology of the HPGe surface. A small prototype of a gamma ray detector with an Sb laser-diffused contact was produced and characterized, showing low leakage currents and good spectroscopy data with different gamma ray sources.
Object-oriented code SUR for plasma kinetic simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levchenko, V.D.; Sigov, Y.S.
1995-12-31
We have developed a self-consistent simulation code based on an object-oriented model of plasma (OOMP) for solving the Vlasov/Poisson (V/P), Vlasov/Maxwell (V/M), Bhatnagar-Gross-Krook (BGK) as well as Fokker-Planck (FP) kinetic equations. The application of an object-oriented approach (OOA) to the simulation of plasmas and plasma-like media by means of splitting methods permits a uniform description and solution of a wide range of plasma kinetics problems, including very complicated ones: multidimensional, relativistic, with allowance for collisions, specific boundary conditions, etc. This paper gives a brief description of the capabilities of the SUR code as a concrete realization of OOMP.
Estimation of arterial baroreflex sensitivity in relation to carotid artery stiffness.
Lipponen, Jukka A; Tarvainen, Mika P; Laitinen, Tomi; Karjalainen, Pasi A; Vanninen, Joonas; Koponen, Timo; Lyyra-Laitinen, Tiina
2012-01-01
Arterial baroreflex has a significant role in regulating blood pressure. It is known that increased stiffness of the carotid sinus affects the mechanotransduction of baroreceptors and therefore limits the baroreceptors' capability to detect changes in blood pressure. By using a high resolution ultrasound video signal and continuous measurement of electrocardiogram (ECG) and blood pressure, it is possible to define the elastic properties of the artery simultaneously with baroreflex sensitivity parameters. In this paper a dataset consisting of 38 subjects, 11 diabetics and 27 healthy controls, was analyzed. The use of diabetic and healthy test subjects gives a wide range of arteries with different elastic properties, which provides an opportunity to validate the baroreflex and artery stiffness estimation methods.
Emulation of rocket trajectory based on a six degree of freedom model
NASA Astrophysics Data System (ADS)
Zhang, Wenpeng; Li, Fan; Wu, Zhong; Li, Rong
2008-10-01
In this paper, a 6-DOF motion mathematical model is discussed. It consists of a body dynamics and kinematics block, an aerodynamics block and an atmosphere block. Based on Simulink, the whole rocket trajectory mathematical model is developed. With this model, dynamic system simulation becomes easy and visual. The modular design method also makes the model convenient to transplant. Finally, relevant data are used to validate the model by Monte Carlo methods. Simulation results show that the flight trajectory of the rocket can be simulated well by means of this model, and it also supplies a necessary simulation tool for the development of the control system.
Hawking Tunneling Radiation of Black Holes in de Sitter and ANTI-de Sitter Spacetimes
NASA Astrophysics Data System (ADS)
Jiang, Qing-Quan; Li, Hui-Ling; Yang, Shu-Zheng; Chen, De-You
Applying Parikh-Wilczek's semiclassical quantum tunneling method, we investigate the tunneling radiation characteristics of a torus-like black hole and a Kerr-Newman-Kasuya de Sitter black hole. Both black holes have the cosmological constant Λ, but the torus-like black hole is in anti-de Sitter spacetime and the other is in de Sitter spacetime. The derived results show that the tunneling rate is related to the change of Bekenstein-Hawking entropy, and that the factual radiated spectrum is not precisely thermal but is consistent with an underlying unitary theory, which may give an explanation of the black hole information loss paradox.
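The stated relation between the tunneling rate and the entropy change is, in Parikh-Wilczek's semiclassical approach, conventionally written as (quoted as the generic result, not re-derived for these particular metrics):

```latex
\Gamma \sim e^{-2\,\operatorname{Im} I} = e^{\Delta S_{\mathrm{BH}}}
```

where I is the action of the tunneling particle and ΔS_BH is the change in Bekenstein-Hawking entropy between the initial and final black hole states; the deviation of ΔS_BH from a linear function of the emitted energy is what makes the spectrum non-thermal.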
From wheels to wings with evolutionary spiking circuits.
Floreano, Dario; Zufferey, Jean-Christophe; Nicoud, Jean-Daniel
2005-01-01
We give an overview of the EPFL indoor flying project, whose goal is to evolve neural controllers for autonomous, adaptive, indoor micro-flyers. Indoor flight is still a challenge because it requires miniaturization, energy efficiency, and control of nonlinear flight dynamics. This ongoing project consists of developing a flying, vision-based micro-robot, a bio-inspired controller composed of adaptive spiking neurons directly mapped into digital microcontrollers, and a method to evolve such a neural controller without human intervention. This article describes the motivation and methodology used to reach our goal as well as the results of a number of preliminary experiments on vision-based wheeled and flying robots.
Suzuki, Yusuke; Yamada, Kohei; Watanabe, Kentaro; Kochi, Takuya; Ie, Yutaka; Aso, Yoshio; Kakiuchi, Fumitoshi
2017-07-21
A convenient method for the syntheses of dibenzo[h,rst]pentaphenes and dibenzo[fg,qr]pentacenes via the ruthenium-catalyzed chemoselective C-O arylation of 1,4- and 1,5-dimethoxyanthraquinones is described. Dimethoxyanthraquinones reacted selectively with arylboronates at the ortho C-O bonds to give diarylation products. An efficient two-step procedure consisting of a Corey-Chaykofsky reaction and subsequent dehydrative aromatization afforded derivatives of dibenzo[h,rst]pentaphenes and dibenzo[fg,qr]pentacenes. Hole-transporting characteristics were observed for a device with a bottom-contact configuration that was fabricated from one of these polycyclic aromatic hydrocarbons.
Binarization of apodizers by adapted one-dimensional error diffusion method
NASA Astrophysics Data System (ADS)
Kowalczyk, Marek; Cichocki, Tomasz; Martinez-Corral, Manuel; Andres, Pedro
1994-10-01
Two novel algorithms for the binarization of continuous rotationally symmetric real positive pupil filters are presented. Both algorithms are based on the 1-D error diffusion concept. The original gray-tone apodizer is substituted by a set of transparent and opaque concentric annular zones. Depending on the algorithm, the resulting binary mask consists of either equal-width or equal-area zones. The diffractive behavior of the binary filters is evaluated. It is shown that the pupils with equal-width zones give a Fraunhofer diffraction pattern more similar to that of the original continuous-tone pupil than those with equal-area zones, assuming in both cases the same resolution limit of the printing device.
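The 1-D error diffusion concept behind both algorithms can be sketched directly on a sampled radial transmittance profile (a minimal illustration; the paper's algorithms additionally distinguish equal-width from equal-area zones):

```python
import numpy as np

def binarize_error_diffusion(transmittance):
    """1-D error diffusion of a radial transmittance profile into binary
    (0/1) annular zones: at each zone the quantization error is pushed
    onto the next zone.  A minimal sketch of the concept only."""
    t = np.asarray(transmittance, float)
    out = np.zeros(len(t), dtype=int)
    err = 0.0
    for i in range(len(t)):
        v = t[i] + err
        out[i] = 1 if v >= 0.5 else 0
        err = v - out[i]        # diffuse the quantization error forward
    return out

# a uniform 50% gray apodizer becomes alternating open/opaque zones
print(binarize_error_diffusion([0.5] * 6).tolist())  # [1, 0, 1, 0, 1, 0]
```

Because the running error is carried forward, the binary mask locally preserves the average transmittance of the original gray-tone apodizer, which is what keeps the low-frequency part of the diffraction pattern similar.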
Do Pretests Increase Student Achievement as Measured by Posttests?
ERIC Educational Resources Information Center
Bancroft, Roger J.
This report describes a study of the effects of using pretests in science classes on chapter test achievement results. The targeted population consisted of eighth grade science students at a junior high school from 1992 to 2001. Whether giving a pretest followed by a posttest at the end of the chapter, or giving only the test at chapter end…
Additional Studies on Clothing Treatments for Personal Protection against Biting Flies
1979-09-01
length jackets with attached hoods, the separate hoods were made of mesh fabric consisting of polyester filaments that give some abrasion resistance and...conditions was carried out using a sling psychrometer and anemometer to give data on dry-bulb temperature, relative humidity and wind speed. Insect specimens...treated the experimental items. Mrs. J. Whalen made the jackets and hoods.
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system are emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
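The fixed-interface reduction named above is conventionally the Craig-Bampton method; a minimal dense-matrix sketch of its standard textbook form (the paper's multilevel implementation details are not reproduced):

```python
import numpy as np

def fixed_interface_reduction(K, M, interior, boundary, n_modes):
    """Fixed-interface (Craig-Bampton) substructure reduction sketch in its
    standard textbook form: keep `n_modes` fixed-interface normal modes of
    the interior DOFs plus static constraint modes of the boundary DOFs."""
    ii = np.ix_(interior, interior)
    ib = np.ix_(interior, boundary)
    # static constraint modes: interior response to unit boundary motion
    psi = -np.linalg.solve(K[ii], K[ib])
    # fixed-interface normal modes: solve K_ii x = lambda M_ii x via Cholesky
    Lc = np.linalg.cholesky(M[ii])
    A = np.linalg.solve(Lc, np.linalg.solve(Lc, K[ii]).T).T   # L^-1 K L^-T
    evals, Y = np.linalg.eigh(A)                              # ascending
    phi = np.linalg.solve(Lc.T, Y[:, :n_modes])
    # transformation: [interior; boundary] = T @ [modal coords; boundary]
    ni, nb = len(interior), len(boundary)
    T = np.zeros((ni + nb, n_modes + nb))
    T[:ni, :n_modes] = phi
    T[:ni, n_modes:] = psi
    T[ni:, n_modes:] = np.eye(nb)
    # reduced matrices in the [interior, boundary] DOF ordering
    idx = list(interior) + list(boundary)
    Kr = T.T @ K[np.ix_(idx, idx)] @ T
    Mr = T.T @ M[np.ix_(idx, idx)] @ T
    return Kr, Mr, T
```

Applying the same reduction to an assembly whose "substructures" are themselves reduced matrices gives the multilevel scheme the abstract describes.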
[Method of detection of residual tissues in recurrent operations on the thyroid gland].
Gostimskiĭ, A V; Romanchishen, A F; Zaĭtseva, I V; Kuznetsova, Iu V
2014-01-01
A search for residual tissues is complicated in recurrent operations on the thyroid gland. The Saint-Petersburg Centre of Surgery of the Endocrine System and Oncology developed a method of detection of residual tissues of the thyroid gland by means of preoperative chromothyroidolymphography under ultrasound control. The method consists of performing ultrasound (US) 15-20 minutes before the operation and introducing 1% sterile water solution of methylene blue into the revealed residual tissues of the thyroid gland. The volume of injected coloring agent was 0.5-2 ml for a residual tissue volume smaller than 9 cm3 and 2-3 ml for a volume of more than 9 cm3. The residual tissues of the thyroid gland were accurately visualized during the following operation. The described method gives the possibility to detect all regions of residual tissue which should be removed, and at the same time it shortens the revision and the surgical trauma.
Applications of rule-induction in the derivation of quantitative structure-activity relationships.
A-Razzak, M; Glen, R C
1992-08-01
Recently, methods have been developed in the field of Artificial Intelligence (AI), specifically in the expert systems area using rule-induction, designed to extract rules from data. We have applied these methods to the analysis of molecular series with the objective of generating rules which are predictive and reliable. The input to rule-induction consists of a number of examples with known outcomes (a training set) and the output is a tree-structured series of rules. Unlike most other analysis methods, the results of the analysis are in the form of simple statements which can be easily interpreted. These are readily applied to new data giving both a classification and a probability of correctness. Rule-induction has been applied to in-house generated and published QSAR datasets and the methodology, application and results of these analyses are discussed. The results imply that in some cases it would be advantageous to use rule-induction as a complementary technique in addition to conventional statistical and pattern-recognition methods.
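The training-set-to-rule-tree pipeline described can be illustrated with a tiny ID3-style inducer (an illustrative toy, not the actual expert-system tools used in the paper):

```python
from collections import Counter

def induce_rules(examples, outcomes, features=None):
    """Tiny ID3-style rule induction sketch: build a decision tree from
    examples (tuples of categorical feature values) with known outcomes.
    Illustrative only; not the induction software used in the paper."""
    if features is None:
        features = list(range(len(examples[0])))
    counts = Counter(outcomes)
    if len(counts) == 1 or not features:
        return counts.most_common(1)[0][0]          # leaf: majority outcome

    def impurity(f):
        # misclassification count if we split on feature f
        branches = {}
        for ex, y in zip(examples, outcomes):
            branches.setdefault(ex[f], []).append(y)
        return sum(len(ys) - Counter(ys).most_common(1)[0][1]
                   for ys in branches.values())

    best = min(features, key=impurity)
    rest = [f for f in features if f != best]
    tree = {'feature': best, 'branches': {}}
    for v in {ex[best] for ex in examples}:
        sub = [(ex, y) for ex, y in zip(examples, outcomes) if ex[best] == v]
        tree['branches'][v] = induce_rules([e for e, _ in sub],
                                           [y for _, y in sub], rest)
    return tree

def classify(tree, example):
    """Follow the induced rules down to a leaf outcome."""
    while isinstance(tree, dict):
        tree = tree['branches'][example[tree['feature']]]
    return tree
```

Each root-to-leaf path reads directly as an if-then rule ("if feature 0 is high then active"), which is the easily interpreted output form the abstract emphasizes.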
Consistent assignment of the vibrations of symmetric and asymmetric meta-disubstituted benzenes
NASA Astrophysics Data System (ADS)
Kemp, David J.; Tuttle, William D.; Jones, Florence M. S.; Gardner, Adrian M.; Andrejeva, Anna; Wakefield, Jonathan C. A.; Wright, Timothy G.
2018-04-01
The assignment of vibrational structure in spectra gives valuable insights into geometric and electronic structure changes upon electronic excitation or ionization; particularly when such information is available for families of molecules. We give a description of the phenyl-ring-localized vibrational modes of the ground (S0) electronic states of sets of meta-disubstituted benzene molecules including both symmetrically- and asymmetrically-substituted cases. As in our earlier work on monosubstituted benzenes (Gardner and Wright, 2011), para-disubstituted benzenes (Andrejeva et al., 2016), and ortho-disubstituted benzenes (Tuttle et al., 2018), we conclude that the use of the commonly-used Wilson or Varsányi mode labels, which are based on the vibrational motions of benzene itself, is misleading and ambiguous. Instead, we label the phenyl-ring-localized modes consistently based upon the Mulliken (Herzberg) method for the modes of meta-difluorobenzene (mDFB) under Cs symmetry, since we wish the labelling scheme to cover both symmetrically- and asymmetrically-substituted molecules. By studying the vibrational wavenumbers obtained from the same force-field while varying the mass of the substituent, we are able to follow the evolving modes across a wide range of molecules and hence provide consistent assignments. We assign the vibrations of the following sets of molecules: the symmetric meta-dihalobenzenes, meta-xylene and resorcinol (meta-dihydroxybenzene); and the asymmetric meta-dihalobenzenes, meta-halotoluenes, meta-halophenols and meta-cresol. In the symmetrically-substituted species, we find two pairs of in-phase and out-of-phase carbon-substituent stretches, and this motion persists in asymmetrically-substituted molecules for heavier substituents; however, when at least one of the substituents is light, then we find that these evolve into localized carbon-substituent stretches.
Consistent Estimation of Gibbs Energy Using Component Contributions
Milo, Ron; Fleming, Ronan M. T.
2013-01-01
Standard Gibbs energies of reactions are increasingly being used in metabolic modeling for applying thermodynamic constraints on reaction rates, metabolite concentrations and kinetic parameters. The increasing scope and diversity of metabolic models has led scientists to look for genome-scale solutions that can estimate the standard Gibbs energy of all the reactions in metabolism. Group contribution methods greatly increase coverage, albeit at the price of decreased precision. We present here a way to combine the estimations of group contribution with the more accurate reactant contributions by decomposing each reaction into two parts and applying one of the methods on each of them. This method gives priority to the reactant contributions over group contributions while guaranteeing that all estimations will be consistent, i.e. will not violate the first law of thermodynamics. We show that there is a significant increase in the accuracy of our estimations compared to standard group contribution. Specifically, our cross-validation results show an 80% reduction in the median absolute residual for reactions that can be derived by reactant contributions only. We provide the full framework and source code for deriving estimates of standard reaction Gibbs energy, as well as confidence intervals, and believe this will facilitate the wide use of thermodynamic data for a better understanding of metabolism. PMID:23874165
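The group-contribution half of the estimation reduces to a linear least-squares problem; a minimal numerical sketch with a hypothetical group-incidence matrix (not the authors' dataset or framework) shows why all resulting estimates are mutually consistent — every estimate is a linear combination of one shared contribution vector:

```python
import numpy as np

# hypothetical data: rows = reactions, columns = net counts of two chemical groups
G = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
dg_obs = np.array([-10.0, 4.0, -8.2])  # observed standard reaction Gibbs energies (kJ/mol)

# group contributions as the least-squares solution of G @ contrib = dg_obs
contrib, *_ = np.linalg.lstsq(G, dg_obs, rcond=None)
dg_est = G @ contrib  # consistent estimates for all reactions
```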
NASA Astrophysics Data System (ADS)
Maizir, H.; Suryanita, R.
2018-01-01
Over the past few decades, many methods have been developed to predict and evaluate the bearing capacity of driven piles. The problem of predicting and assessing the bearing capacity of a pile is very complicated and not yet fully established; different soil tests and evaluation procedures produce widely different solutions. The most important requirement, however, is that the methods used to predict and evaluate the bearing capacity of the pile achieve the required degree of accuracy and consistency. Accurate prediction and evaluation of axial bearing capacity depend on several variables, such as the type of soil and the diameter and length of the pile. In this study, Artificial Neural Networks (ANNs) are utilized to obtain more accurate and consistent estimates of the axial bearing capacity of a driven pile. ANNs can be described as mapping an input to the target output data. An ANN model was developed to predict and evaluate the axial bearing capacity of piles based on pile driving analyzer (PDA) test data for more than 200 selected records. The predictions obtained by the ANN model and the PDA tests were then compared. This research shows that the neural network models give accurate predictions and evaluations of the axial bearing capacity of piles.
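The "input mapped to target output" idea can be sketched with a one-hidden-layer network trained by plain gradient descent. The data here are synthetic stand-ins for PDA records (the relation between pile geometry and capacity is invented for illustration), not the study's dataset or architecture:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-in: diameter (m), pile length (m) -> axial capacity (kN)
X = rng.uniform([0.3, 10.0], [1.0, 40.0], size=(200, 2))
y = (800 * X[:, 0] + 45 * X[:, 1] + rng.normal(0, 20, 200))[:, None]

# standardize inputs and targets, then fit a small tanh network
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

def mse():
    return float(np.mean((np.tanh(Xn @ W1 + b1) @ W2 + b2 - yn) ** 2))

start = mse()
for _ in range(2000):
    H = np.tanh(Xn @ W1 + b1)
    err = (H @ W2 + b2) - yn
    gW2 = H.T @ err / len(Xn); gb2 = err.mean(0)
    gH = (err @ W2.T) * (1 - H ** 2)
    gW1 = Xn.T @ gH / len(Xn); gb1 = gH.mean(0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2
```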
Real-Time 3d Reconstruction from Images Taken from AN Uav
NASA Astrophysics Data System (ADS)
Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.
2015-08-01
We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real-time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera, mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Given these characteristics, the designed method is suitable for video-surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
NASA Astrophysics Data System (ADS)
Andrejeva, Anna; Gardner, Adrian M.; Tuttle, William D.; Wright, Timothy G.
2016-03-01
We give a description of the phenyl-ring-localized vibrational modes of the ground states of the para-disubstituted benzene molecules including both symmetric and asymmetric cases. In line with others, we quickly conclude that the use of Wilson mode labels is misleading and ambiguous; we conclude the same regarding the related ones of Varsányi. Instead we label the modes consistently based upon the Mulliken (Herzberg) method for the modes of para-difluorobenzene (pDFB). Since we wish the labelling scheme to cover both symmetrically- and asymmetrically-substituted molecules, we apply the Mulliken labelling under C2v symmetry. By studying the variation of the vibrational wavenumbers with mass of the substituent, we are able to identify the corresponding modes across a wide range of molecules and hence provide consistent assignments. Particularly interesting are pairs of vibrations that evolve from in- and out-of-phase motions in pDFB to more localized modes in asymmetric molecules. We consider the para isomers of the following: the symmetric dihalobenzenes, xylene, hydroquinone, the asymmetric dihalobenzenes, halotoluenes, halophenols and cresol.
Goodarzi, Mohammad; Jensen, Richard; Vander Heyden, Yvan
2012-12-01
A Quantitative Structure-Retention Relationship (QSRR) is proposed to estimate the chromatographic retention of 83 diverse drugs on a Unisphere poly butadiene (PBD) column, using isocratic elutions at pH 11.7. Previous work has generated QSRR models for them using Classification And Regression Trees (CART). In this work, Ant Colony Optimization is used as a feature selection method to find the best molecular descriptors from a large pool. In addition, several other selection methods have been applied, such as Genetic Algorithms, Stepwise Regression and the Relief method, not only to evaluate Ant Colony Optimization as a feature selection method but also to investigate its ability to find the important descriptors in QSRR. Multiple Linear Regression (MLR) and Support Vector Machines (SVMs) were applied as linear and nonlinear regression methods, respectively, giving excellent correlation between the experimental, i.e. extrapolated to a mobile phase consisting of pure water, and predicted logarithms of the retention factors of the drugs (logk(w)). The overall best model was the SVM one built using descriptors selected by ACO. Copyright © 2012 Elsevier B.V. All rights reserved.
Cascade flutter analysis with transient response aerodynamics
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Mahajan, Aparajit J.; Keith, Theo G., Jr.; Stefko, George L.
1991-01-01
Two methods for calculating linear frequency domain aerodynamic coefficients from a time marching Full Potential cascade solver are developed and verified. In the first method, the Influence Coefficient, solutions to elemental problems are superposed to obtain the solutions for a cascade in which all blades are vibrating with a constant interblade phase angle. The elemental problem consists of a single blade in the cascade oscillating while the other blades remain stationary. In the second method, the Pulse Response, the response to the transient motion of a blade is used to calculate influence coefficients. This is done by calculating the Fourier Transforms of the blade motion and the response. Both methods are validated by comparison with the Harmonic Oscillation method and give accurate results. The aerodynamic coefficients obtained from these methods are used for frequency domain flutter calculations involving a typical section blade structural model. An eigenvalue problem is solved for each interblade phase angle mode and the eigenvalues are used to determine aeroelastic stability. Flutter calculations are performed for two examples over a range of subsonic Mach numbers.
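The signal-processing core of the Pulse Response method — dividing the Fourier transform of the response by that of the transient blade motion to obtain frequency-domain coefficients from a single run — can be sketched as follows. The first-order system here is a hypothetical stand-in for the aerodynamic response, not the Full Potential solver:

```python
import numpy as np

fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.exp(-t) * np.sin(2 * np.pi * t)   # transient "blade motion" (illustrative)
h = (2.0 / fs) * np.exp(-2 * t)          # impulse response of a stand-in linear system
y_resp = np.convolve(x, h)[: len(t)]     # system response to the transient

# one transient yields coefficients at all frequencies, instead of one
# harmonic-oscillation run per frequency
H = np.fft.rfft(y_resp) / np.fft.rfft(x)
```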
De Mattia, Fabrizio; Chapsal, Jean-Michel; Descamps, Johan; Halder, Marlies; Jarrett, Nicholas; Kross, Imke; Mortiaux, Frederic; Ponsar, Cecile; Redhead, Keith; McKelvie, Jo; Hendriksen, Coenraad
2011-01-01
Current batch release testing of established vaccines emphasizes quality control of the final product and is often characterized by extensive use of animals. This report summarises the discussions of a joint ECVAM/EPAA workshop on the applicability of the consistency approach for routine release of human and veterinary vaccines and its potential to reduce animal use. The consistency approach is based upon thorough characterization of the vaccine during development and the principle that the quality of subsequent batches is the consequence of the strict application of a quality system and of a consistent production of batches. The concept of consistency of production is state-of-the-art for new-generation vaccines, where batch release is mainly based on non-animal methods. There is now the opportunity to introduce the approach into established vaccine production, where it has the potential to replace in vivo tests with non-animal tests designed to demonstrate batch quality while maintaining the highest quality standards. The report indicates how this approach may be further developed for application to established human and veterinary vaccines and emphasizes the continuing need for co-ordination and harmonization. It also gives recommendations for work to be undertaken in order to encourage acceptance and implementation of the consistency approach. Copyright © 2011. Published by Elsevier Ltd.. All rights reserved.
Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard
Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton
2017-01-01
The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is transformed empirical ROC curves at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze two real cancer diagnostic examples as an illustration. PMID:28469385
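The flavor of the approach — empirical operating points used as input "data" to a linear fit on the probit scale — can be sketched under the binormal model. The operating points below are hypothetical, and this simple polyfit is an illustration, not the article's estimator:

```python
import numpy as np
from scipy.stats import norm

# hypothetical empirical operating points: (1 - specificity, sensitivity)
fpr = np.array([0.05, 0.15, 0.30, 0.50])
tpr = np.array([0.40, 0.65, 0.82, 0.93])

# binormal ROC model: Phi^{-1}(TPR) = a + b * Phi^{-1}(FPR), fitted by least squares
b, a = np.polyfit(norm.ppf(fpr), norm.ppf(tpr), 1)
auc = norm.cdf(a / np.sqrt(1 + b ** 2))  # binormal area under the curve
```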
Krimmel, R.M.
1999-01-01
Net mass balance has been measured since 1958 at South Cascade Glacier using the 'direct method,' i.e., area averages of snow gain and firn and ice loss at stakes. Analysis of cartographic vertical photography has allowed measurement of mass balance using the 'geodetic method' in 1970, 1975, 1977, 1979-80, and 1985-97. Water equivalent change as measured by these nearly independent methods should give similar results. During 1970-97, the direct method shows a cumulative balance of about -15 m, and the geodetic method shows a cumulative balance of about -22 m. The deviation between the two methods is fairly consistent, suggesting no gross errors in either, but rather a cumulative systematic error. It is suspected that the cumulative error is in the direct method because the geodetic method is based on a non-changing reference, the bedrock control, whereas the direct method is measured with reference to only the previous year's summer surface. Possible sources of mass loss that are missing from the direct method are basal melt, internal melt, and ablation on crevasse walls. Possible systematic measurement errors include under-estimation of the density of lost material, sinking stakes, or poorly represented areas.
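The size of the implied systematic error follows directly from the numbers above:

```python
direct = -15.0    # m w.e., cumulative direct-method balance, 1970-97
geodetic = -22.0  # m w.e., cumulative geodetic-method balance, 1970-97
years = 27

deviation = direct - geodetic      # mass loss unaccounted for by the direct method
bias_per_year = deviation / years  # average annual systematic error
```

A roughly quarter-metre-per-year bias, accumulated over 27 years, is enough to explain the 7 m gap without any single gross error.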
The renormalization scale-setting problem in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Xing-Gang; Brodsky, Stanley J.; Mojaza, Matin
2013-09-01
A key problem in making precise perturbative QCD predictions is to set the proper renormalization scale of the running coupling. The conventional scale-setting procedure assigns an arbitrary range and an arbitrary systematic error to fixed-order pQCD predictions. In fact, this ad hoc procedure gives results which depend on the choice of the renormalization scheme, and it is in conflict with the standard scale-setting procedure used in QED. Predictions for physical results should be independent of the choice of the scheme or other theoretical conventions. We review current ideas and points of view on how to deal with the renormalization scale ambiguity and show how to obtain renormalization scheme- and scale-independent estimates. We begin by introducing the renormalization group (RG) equation and an extended version, which expresses the invariance of physical observables under both the renormalization scheme and scale-parameter transformations. The RG equation provides a convenient way for estimating the scheme- and scale-dependence of a physical process. We then discuss self-consistency requirements of the RG equations, such as reflexivity, symmetry, and transitivity, which must be satisfied by a scale-setting method. Four typical scale setting methods suggested in the literature, i.e., the Fastest Apparent Convergence (FAC) criterion, the Principle of Minimum Sensitivity (PMS), the Brodsky–Lepage–Mackenzie method (BLM), and the Principle of Maximum Conformality (PMC), are introduced. Basic properties and their applications are discussed. We pay particular attention to the PMC, which satisfies all of the requirements of RG invariance. Using the PMC, all non-conformal terms associated with the β-function in the perturbative series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order.
The PMC provides the principle underlying the BLM method, since it gives the general rule for extending BLM up to any perturbative order; in fact, they are equivalent to each other through the PMC–BLM correspondence principle. Thus, all the features previously observed in the BLM literature are also adaptable to the PMC. The PMC scales and the resulting finite-order PMC predictions are to high accuracy independent of the choice of the initial renormalization scale, and thus consistent with RG invariance. The PMC is also consistent with the renormalization scale-setting procedure for QED in the zero-color limit. The use of the PMC thus eliminates a serious systematic scale error in perturbative QCD predictions, greatly improving the precision of empirical tests of the Standard Model and their sensitivity to new physics.
NASA Astrophysics Data System (ADS)
Varadharajan, Ramanathan; Leermakers, Frans A. M.
2018-01-01
Bending rigidities of tensionless balanced liquid-liquid interfaces, as occurring in microemulsions, are predicted using self-consistent field theory for molecularly inhomogeneous systems. Considering geometries with scale-invariant curvature energies gives unambiguous bending rigidities for systems with fixed chemical potentials: the minimal-surface Im3m cubic phase is used to find the Gaussian bending rigidity κ̄, and a torus with Willmore energy W = 2π² allows for direct evaluation of the mean bending modulus κ. Consistent with this, the spherical droplet gives access to 2κ + κ̄. We observe that κ̄ tends to be negative for strong segregation and positive for weak segregation, a finding which is instrumental for understanding phase transitions from a lamellar to a spongelike microemulsion. Invariably, κ remains positive and increases with increasing strength of segregation.
The effects of resonances on time delay estimation for water leak detection in plastic pipes
NASA Astrophysics Data System (ADS)
Almeida, Fabrício C. L.; Brennan, Michael J.; Joseph, Phillip F.; Gao, Yan; Paschoalini, Amarildo T.
2018-04-01
In the use of acoustic correlation methods for water leak detection, sensors are placed at pipe access points either side of a suspected leak, and the peak in the cross-correlation function of the measured signals gives the time difference (delay) between the arrival times of the leak noise at the sensors. Combining this information with the speed at which the leak noise propagates along the pipe, gives an estimate for the location of the leak with respect to one of the measurement positions. It is possible for the structural dynamics of the pipe system to corrupt the time delay estimate, which results in the leak being incorrectly located. In this paper, data from test-rigs in the United Kingdom and Canada are used to demonstrate this phenomenon, and analytical models of resonators are coupled with a pipe model to replicate the experimental results. The model is then used to investigate which of the two commonly used correlation algorithms, the Basic Cross-Correlation (BCC) function or the Phase Transform (PHAT), is more robust to the undesirable structural dynamics of the pipe system. It is found that time delay estimation is highly sensitive to the frequency bandwidth over which the analysis is conducted. Moreover, it is found that the PHAT is particularly sensitive to the presence of resonances and can give an incorrect time delay estimate, whereas the BCC function is found to be much more robust, giving a consistently accurate time delay estimate for a range of dynamic conditions.
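A minimal numpy sketch of the two correlation estimators compared in the paper; broadband noise stands in for leak noise, and `time_delay` is an illustrative helper, not the authors' code:

```python
import numpy as np

def time_delay(sig, ref, fs, method="bcc"):
    """Estimate the delay of `sig` relative to `ref` (seconds) from the
    cross-correlation peak; `method` selects BCC or PHAT weighting."""
    n = len(sig) + len(ref)
    G = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    if method == "phat":
        G = G / np.maximum(np.abs(G), 1e-12)  # PHAT: keep phase only
    cc = np.fft.irfft(G, n)
    m = n // 2
    cc = np.concatenate((cc[-m:], cc[: m + 1]))  # reorder to lags -m .. +m
    return (np.argmax(np.abs(cc)) - m) / fs

fs, d = 1000, 37                       # sample rate (Hz), true delay (samples)
rng = np.random.default_rng(0)
ref = rng.standard_normal(2048)        # surrogate broadband leak noise at sensor 1
sig = np.concatenate((np.zeros(d), ref))[:2048]  # same noise, delayed, at sensor 2
```

With an ideal pure delay both estimators recover d/fs; the paper's point is that pipe resonances distort the cross-spectrum phase, to which the PHAT is the more sensitive of the two.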
Why Philosophical Ethics in School: Implications for Education in Technology and in General
ERIC Educational Resources Information Center
Gardelli, Viktor; Alerby, Eva; Persson, Anders
2014-01-01
In this article, we distinguish between three approaches to ethics in school, each giving an interpretation of the expression "ethics in school": the "descriptive facts about ethics approach," roughly consisting of teaching empirical facts about moral matters to students; the "moral fostering approach," consisting of…
Comparison of Two Acoustic Waveguide Methods for Determining Liner Impedance
NASA Technical Reports Server (NTRS)
Jones, Michael G.; Watson, Willie R.; Tracy, Maureen B.; Parrott, Tony L.
2001-01-01
Acoustic measurements taken in a flow impedance tube are used to assess the relative accuracy of two waveguide methods for impedance eduction in the presence of grazing flow. The aeroacoustic environment is assumed to contain forward and backward-traveling acoustic waves, consisting of multiple modes, and uniform mean flow. Both methods require a measurement of the complex acoustic pressure profile over the length of the test liner. The Single Mode Method assumes that the sound pressure level and phase decay-rates of a single progressive mode can be extracted from this measured complex acoustic pressure profile. No a priori assumptions are made in the Finite Element Method regarding the modal or reflection content in the measured acoustic pressure profile. The integrity of each method is initially demonstrated by how well their no-flow impedances match those acquired in a normal incidence impedance tube. These tests were conducted using ceramic tubular and conventional perforate liners. Ceramic tubular liners were included because of their impedance insensitivity to mean flow effects. Conversely, the conventional perforate liner was included because its impedance is known to be sensitive to mean flow velocity effects. Excellent comparisons between impedance values educed with the two waveguide methods in the absence of mean flow and the corresponding values educed with the normal incidence impedance tube were observed. The two methods are then compared for mean flow Mach numbers up to 0.5, and are shown to give consistent results for both types of test liners. The quality of the results indicates that the Single Mode Method should be used when the measured acoustic pressure profile is clearly dominated by a single progressive mode, and the Finite Element Method should be used for all other cases.
Job Analysis and Student Assessment Tool: Perfusion Education Clinical Preceptor
Riley, Jeffrey B.
2007-01-01
Abstract: The perfusion education system centers on the cardiac surgery operating room and the perfusionist teacher who serves as a preceptor for the perfusion student. One method to improve the quality of perfusion education is to create a valid method for perfusion students to give feedback to clinical teachers. The preceptor job analysis consisted of a literature review and interviews with preceptors to list their critical tasks, critical incidents, and cognitive and behavioral competencies. Behaviorally anchored rating traits associated with the preceptors’ tasks were identified. Students voted to validate the instrument items. The perfusion instructor rating instrument with a 0–4, “very weak” to “very strong” Likert rating scale was used. The five preceptor traits for student evaluation of clinical instruction (SECI) are as follows: The clinical instructor (1) encourages self-learning, (2) encourages clinical reasoning, (3) meets student’s learning needs, (4) gives continuous feedback, and (5) represents a good role model. Scores from 430 student–preceptor relationships for 28 students rotating at 24 affiliate institutions with 134 clinical instructors were evaluated. The mean overall good preceptor average (GPA) was 3.45 ± 0.76 and was skewed to the left, ranging from 0.0 to 4.0 (median = 3.8). Only 21 of the SECI relationships earned a GPA <2.0. Analyzing the role of the clinical instructor and performing SECI are methods to provide valid information to improve the quality of a perfusion education program. PMID:17972453
Thermodynamics and Hawking radiation of five-dimensional rotating charged Goedel black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Shuangqing; Peng Junjin; College of Science, Wuhan Textile University, Wuhan, Hubei 430074
2011-02-15
We study the thermodynamics of Goedel-type rotating charged black holes in five-dimensional minimal supergravity. These black holes exhibit some peculiar features such as the presence of closed timelike curves and the absence of a globally spatial-like Cauchy surface. We explicitly compute their energies, angular momenta, and electric charges that are consistent with the first law of thermodynamics. Besides, we extend the covariant anomaly cancellation method, as well as the approach of the effective action, to derive their Hawking fluxes. Both the methods of the anomaly cancellation and the effective action give the same Hawking fluxes as those from the Planck distribution for blackbody radiation in the background of the charged rotating Goedel black holes. Our results further support that Hawking radiation is a quantum phenomenon arising at the event horizon.
Ultrasonic ranging for the oculometer
NASA Technical Reports Server (NTRS)
Guy, W. J.
1981-01-01
Ultrasonic tracking techniques are investigated for an oculometer. Two methods are reported in detail. The first is based on measurements of time from the start of a transmit burst to a received echo. Knowing the sound velocity, distance can be calculated. In the second method, a continuous signal is transmitted. Target movement causes phase shifting of the echo. By accumulating these phase shifts, tracking from a set point can be achieved. Both systems have problems with contoured targets, but work well on flat plates and the back of a human head. Also briefly reported is an evaluation of an ultrasonic ranging system. Interface circuits make this system compatible with the echo time design. While the system is consistently accurate, it has a beam too narrow for oculometer use. Finally, comments are provided on a tracking system using the Doppler frequency shift to give range data.
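Both reported methods reduce to short range equations; a sketch with an assumed speed of sound in air (the constants and helper names are illustrative, not from the report):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 degrees C (assumed)

def echo_range(round_trip_s):
    """First method: distance from burst-to-echo time; sound covers the path twice."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def phase_track(delta_phases_rad, freq_hz):
    """Second method: accumulate per-burst phase shifts of a continuous signal
    into displacement from a set point (2*pi of phase = one wavelength of
    round-trip change, hence half a wavelength of target motion)."""
    wavelength = SPEED_OF_SOUND / freq_hz
    return sum(delta_phases_rad) / (2 * math.pi) * wavelength / 2.0
```

For example, a 10 ms round trip corresponds to about 1.7 m of range, while a single accumulated 2π phase shift at 34.3 kHz corresponds to 5 mm of target motion.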
NASA Astrophysics Data System (ADS)
Kearney, Patrick A.; Slaughter, J. M.; Powers, K. D.; Falco, Charles M.
1988-01-01
Roughness measurements were made on uncoated silicon wafers and float glass using WYKO TOPO-3D phase-shifting interferometry, and the results are reported. The wafers are found to be slightly smoother than the float glass. The effects of different cleaning methods and of the deposition of silicon 'buffer layers' on substrate roughness are examined. An acid cleaning method is described which gives more consistent results than detergent cleaning. Healing of the roughness due to sputtered silicon buffer layers was not observed on the length scale probed by the WYKO. Sputtered multilayers are characterized using both the WYKO interferometer and low-angle X-ray diffraction in order to yield information about the roughness of the top surface and of the multilayer interfaces. Preliminary results on film growth using molecular beam epitaxy are also presented.
Cosmological constraints from strong gravitational lensing in clusters of galaxies.
Jullo, Eric; Natarajan, Priyamvada; Kneib, Jean-Paul; D'Aloisio, Anson; Limousin, Marceau; Richard, Johan; Schimd, Carlo
2010-08-20
Current efforts in observational cosmology are focused on characterizing the mass-energy content of the universe. We present results from a geometric test based on strong lensing in galaxy clusters. Based on Hubble Space Telescope images and extensive ground-based spectroscopic follow-up of the massive galaxy cluster Abell 1689, we used a parametric model to simultaneously constrain the cluster mass distribution and dark energy equation of state. Combining our cosmological constraints with those from x-ray clusters and the Wilkinson Microwave Anisotropy Probe 5-year data gives Omega(m) = 0.25 +/- 0.05 and w(x) = -0.97 +/- 0.07, which are consistent with results from other methods. Inclusion of our method with all other available techniques brings down the current 2sigma contours on the dark energy equation-of-state parameter w(x) by approximately 30%.
Aerial images visual localization on a vector map using color-texture segmentation
NASA Astrophysics Data System (ADS)
Kunina, I. A.; Teplyakov, L. M.; Gladkov, A. P.; Khanipov, T. M.; Nikolaev, D. P.
2018-04-01
In this paper we study the problem of combining UAV-obtained optical data and a coastal vector map in the absence of satellite navigation data. The method is based on representing the territory as a set of segments produced by color-texture image segmentation. We then find the geometric transform which gives the best match between these segments and the land and water areas of the georeferenced vector map. We calculate a transform consisting of an arbitrary shift relative to the vector map and bounded rotation and scaling. These parameters are estimated using the RANSAC algorithm, which matches the segment contours to the contours of land and water areas of the vector map. To implement this matching we suggest computing shape descriptors robust to rotation and scaling. We performed numerical experiments demonstrating the practical applicability of the proposed method.
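The RANSAC step, stripped to a pure-translation toy version (the paper additionally estimates bounded rotation and scaling; the data below are invented correspondences, not segmentation output), can be sketched as:

```python
import numpy as np

def ransac_shift(src, dst, iters=200, tol=2.0, seed=0):
    """Hypothesize a translation from one random correspondence per iteration
    and keep the hypothesis that explains the most matches (inliers)."""
    rng = np.random.default_rng(seed)
    best_shift, best_count = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]
        count = int(np.sum(np.linalg.norm(src + shift - dst, axis=1) < tol))
        if count > best_count:
            best_shift, best_count = shift, count
    return best_shift

rng = np.random.default_rng(42)
src = rng.uniform(0, 100, size=(30, 2))      # toy segment-contour points in the image
dst = src + np.array([5.0, -3.0])            # same points on the vector map (true shift)
dst[:8] += rng.uniform(20, 40, size=(8, 2))  # 8 gross outlier matches
```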
Food-Sharing Networks in Lamalera, Indonesia: Status, Sharing, and Signaling
Nolin, David A.
2012-01-01
Costly signaling has been proposed as a possible mechanism to explain food sharing in foraging populations. This sharing-as-signaling hypothesis predicts an association between sharing and status. Using exponential random graph modeling (ERGM), this prediction is tested on a social network of between-household food-sharing relationships in the fishing and sea-hunting village of Lamalera, Indonesia. Previous analyses (Nolin 2010) have shown that most sharing in Lamalera is consistent with reciprocal altruism. The question addressed here is whether any additional variation may be explained as sharing-as-signaling by high-status households. The results show that high-status households both give and receive more than other households, a pattern more consistent with reciprocal altruism than costly signaling. However, once the propensity to reciprocate and household productivity are controlled, households of men holding leadership positions show greater odds of unreciprocated giving when compared to households of non-leaders. This pattern of excessive giving by leaders is consistent with the sharing-as-signaling hypothesis. Wealthy households show the opposite pattern, giving less and receiving more than other households. These households may reciprocate in a currency other than food or their wealth may attract favor-seeking behavior from others. Overall, status covariates explain little variation in the sharing network as a whole, and much of the sharing observed by high-status households is best explained by the same factors that explain sharing by other households. This pattern suggests that multiple mechanisms may operate simultaneously to promote sharing in Lamalera and that signaling may motivate some sharing by some individuals even within sharing regimes primarily maintained by other mechanisms. PMID:22822299
Development of reproducible assays for polygalacturonase and pectinase.
Li, Qian; Coffman, Anthony M; Ju, Lu-Kwang
2015-05-01
Polygalacturonase and pectinase activities reported in the literature were measured by several different procedures. These procedures do not give comparable results, partly owing to the complexity of the substrates involved. This work was aimed at developing consistent and efficient assays for polygalacturonase and pectinase activities, using polygalacturonic acid and citrus pectin, respectively, as the substrate. Different enzyme mixtures produced by Aspergillus niger and Trichoderma reesei with different inducing carbon sources were used for the method development. A series of experiments were conducted to evaluate the incubation time, substrate concentration, and enzyme dilution. Accordingly, for both assays the recommended (optimal) hydrolysis time is 30 min and substrate concentration is 5 g/L. For polygalacturonase, the sample should be adjusted to have 0.3-0.8 U/mL polygalacturonase activity, because in this range the assay outcomes were consistent (independent of dilution factors). Such a range did not exist for the pectinase assay. The recommended procedure is to assay the sample at multiple (at least 2) dilution factors and determine, by linear interpolation, the dilution factor that would release reducing sugar equivalent to 0.4 g/L D-galacturonic acid, and then calculate the activity of the sample accordingly (dilution factor × 0.687 U/mL). Validation experiments showed consistent results using these assays. Effects of substrate preparation methods were also examined. Copyright © 2015 Elsevier Inc. All rights reserved.
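The recommended multi-dilution pectinase procedure can be sketched numerically. In this hypothetical example only the 0.4 g/L target and the 0.687 U/mL factor come from the text; the function names and measurement values are ours:

```python
def interp_dilution(dilutions, sugars, target=0.4):
    """Linearly interpolate the dilution factor at which the released
    reducing sugar equals `target` g/L D-galacturonic acid equivalent."""
    pairs = sorted(zip(dilutions, sugars))
    for (d0, s0), (d1, s1) in zip(pairs, pairs[1:]):
        # Released sugar falls as the dilution factor rises; find the
        # pair of measurements that brackets the target.
        if (s0 - target) * (s1 - target) <= 0:
            frac = (s0 - target) / (s0 - s1)
            return d0 + frac * (d1 - d0)
    raise ValueError("target not bracketed; assay additional dilutions")

def pectinase_activity(dilutions, sugars, units_per_mL=0.687):
    # Activity of the undiluted sample = interpolated dilution factor x 0.687 U/mL.
    return interp_dilution(dilutions, sugars) * units_per_mL
```

For instance, measurements of 0.6 g/L at 10x dilution and 0.2 g/L at 20x interpolate to a factor of 15, giving 15 × 0.687 ≈ 10.3 U/mL.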
Local dark matter and dark energy as estimated on a scale of ~1 Mpc in a self-consistent way
NASA Astrophysics Data System (ADS)
Chernin, A. D.; Teerikorpi, P.; Valtonen, M. J.; Dolgachev, V. P.; Domozhilova, L. M.; Byrd, G. G.
2009-12-01
Context: Dark energy was first detected from large distances on gigaparsec scales. If it is vacuum energy (or Einstein's Λ), it should also exist in very local space. Here we discuss its measurement on megaparsec scales of the Local Group. Aims: We combine the modified Kahn-Woltjer method for the Milky Way-M 31 binary and the HST observations of the expansion flow around the Local Group in order to study in a self-consistent way and simultaneously the local density of dark energy and the dark matter mass contained within the Local Group. Methods: A theoretical model is used that accounts for the dynamical effects of dark energy on a scale of ~1 Mpc. Results: The local dark energy density is put into the range 0.8-3.7 ρv (ρv is the globally measured density), and the Local Group mass lies within 3.1-5.8×10^12 M⊙. The lower limit of the local dark energy density, about 4/5× the global value, is determined by the natural binding condition for the group binary and the maximal zero-gravity radius. The near coincidence of two values measured with independent methods on scales differing by ~1000 times is remarkable. The mass ~4×10^12 M⊙ and the local dark energy density ~ρv are also consistent with the expansion flow close to the Local Group, within the standard cosmological model. Conclusions: One should take into account the dark energy in dynamical mass estimation methods for galaxy groups, including the virial theorem. Our analysis gives new strong evidence in favor of Einstein's idea of the universal antigravity described by the cosmological constant.
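The zero-gravity radius invoked above, where the gravity of the group mass M is balanced by dark-energy antigravity, follows from R_ZG = [3M/(8πρv)]^(1/3). A quick order-of-magnitude check (the round values for the solar mass and the global dark-energy density are our assumptions, not numbers from the abstract) reproduces the megaparsec scale of the Local Group:

```python
import math

M_SUN = 1.989e30   # kg, solar mass (assumed round value)
MPC = 3.086e22     # m per megaparsec
RHO_V = 7.2e-27    # kg/m^3, roughly the global dark-energy density (assumed)

def zero_gravity_radius_mpc(mass_solar, rho_v=RHO_V):
    # Radius at which dark-energy antigravity balances the group's gravity:
    # GM/R^2 = (8*pi*G/3) * rho_v * R  =>  R^3 = 3M / (8*pi*rho_v).
    m = mass_solar * M_SUN
    return (3.0 * m / (8.0 * math.pi * rho_v)) ** (1.0 / 3.0) / MPC
```

With M ~ 4×10^12 M⊙ this gives R_ZG of about 1.6 Mpc, consistent with the ~1 Mpc scale on which the analysis operates.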
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mörtsell, E., E-mail: edvard@fysik.su.se
The bimetric generalization of general relativity has been proven to be able to give an accelerated background expansion consistent with observations. Apart from the energy densities coupling to one or both of the metrics, the expansion will depend on the cosmological constant contribution to each of them, as well as the three parameters describing the interaction between the two metrics. Even for fixed values of these parameters, several possible solutions, so-called branches, can exist. Different branches can give similar background expansion histories for the observable metric, but may have different properties regarding, for example, the existence of ghosts and the rate of structure growth. In this paper, we outline a method to find viable solution branches for arbitrary parameter values. We show how possible expansion histories in bimetric gravity can be inferred qualitatively, by picturing the ratio of the scale factors of the two metrics as the spatial coordinate of a particle rolling along a frictionless track. A particularly interesting example discussed is a specific set of parameter values, where a cosmological dark matter background is mimicked without introducing ghost modes into the theory.
Super-exchange in transition-metal oxides
NASA Astrophysics Data System (ADS)
Harrison, Walter
2007-03-01
Using contemporary tight-binding theory and parameters [1], Anderson's perturbation approach [2] gives a qualitatively correct energy difference (a factor of 2.3 too high) between ferromagnetic and antiferromagnetic configurations for MnO. It corresponds to a Heisenberg model with J2/J1 = 11/7. Perturbation theory fails as the energy denominator gets smaller for FeO and CoO, and changes sign for NiO. Use of the special-points method to treat exchange-split bands gives smaller values not well characterized by a J1 and J2. Carrying it out self-consistently reorders the NiO levels and leads to still smaller energy differences, near experiment for all four oxides, as estimated from the experimental Néel temperature TN. The theory predicts a variation with pressure corresponding to (d/TN) dTN/dd = -12.2 for MnO, near experiment, dropping to -9.1 for NiO. The theory is applicable also to the paramagnetic susceptibility. [1] Walter A. Harrison, Elementary Electronic Structure, World Scientific (Singapore, 1999), revised edition (2004). [2] P. W. Anderson, Phys. Rev. 115, 2 (1959).
Practices in Human Dignity in Palliative Care: A Qualitative Study.
Akin Korhan, Esra; Üstün, Çağatay; Uzelli Yilmaz, Derya
Respecting and valuing an individual's existential dignity forms the basis of nursing and medical practice and of nursing care. The objective of the study was to determine the approach to human dignity that nurses and physicians have while providing palliative care. This qualitative study was performed using a phenomenological research design. In-depth semistructured interviews on the human dignity approach in palliative care were conducted with 9 nurses and 5 physicians. Following the qualitative Colaizzi method of analyzing the data, the statements made by the nurses and physicians during the interviews were grouped under 8 categories. Consistent with the questionnaire format, 8 themes and 43 subthemes of responses were determined describing the nurses' and physicians' approaches to human dignity. The results of the study showed that, in some of their decisions and practices, the nurses giving nursing care and the physicians giving medical care to palliative care patients displayed ethically sensitive behavior, but on some points they showed approaches that violated human dignity and a lack of awareness of ethical, medical, and social responsibilities.
Cosmological histories in bimetric gravity: a graphical approach
NASA Astrophysics Data System (ADS)
Mörtsell, E.
2017-02-01
The bimetric generalization of general relativity has been proven to be able to give an accelerated background expansion consistent with observations. Apart from the energy densities coupling to one or both of the metrics, the expansion will depend on the cosmological constant contribution to each of them, as well as the three parameters describing the interaction between the two metrics. Even for fixed values of these parameters, several possible solutions, so-called branches, can exist. Different branches can give similar background expansion histories for the observable metric, but may have different properties regarding, for example, the existence of ghosts and the rate of structure growth. In this paper, we outline a method to find viable solution branches for arbitrary parameter values. We show how possible expansion histories in bimetric gravity can be inferred qualitatively, by picturing the ratio of the scale factors of the two metrics as the spatial coordinate of a particle rolling along a frictionless track. A particularly interesting example discussed is a specific set of parameter values, where a cosmological dark matter background is mimicked without introducing ghost modes into the theory.
Kβ/Kα intensity ratios for X-ray production in 3d metals by gamma-rays and protons
NASA Astrophysics Data System (ADS)
Bhuinya, C. R.; Padhi, H. C.
1994-04-01
Systematic measurements of Kβ/Kα intensity ratios for X-ray production in 3d metals have been carried out using γ-ray and fast-proton ionization methods. The measured ratios from proton ionization experiments indicate production of multiple vacancies in the L shell, giving rise to higher Kβ/Kα ratios compared to the present γRF results and the 2 MeV proton ionization results of Perujo et al. [Perujo A., Maxwell J. A., Teesdale W. J. and Campbell J. L. (1987) J. Phys. B: Atom. Molec. Phys. 20, 4973]. This is consistent with the SCA model calculation, which gives increased simultaneous K- and L-shell ionization at 4 MeV. The present results from γRF experiments are in close agreement with the 2 MeV proton ionization results of Perujo et al. (1987) and also with the theoretical calculation of Jankowski and Polasik [Jankowski K. and Polasik M. (1989) J. Phys. B: Atom. Molec. Optic. Phys. 22, 2369], but the theoretical results of Scofield [Scofield J. H. (1974a) Atom. Data Nucl. Data Tables 14, 12] are somewhat higher.
Separating Iso-Propanol-Toluene mixture by azeotropic distillation
NASA Astrophysics Data System (ADS)
Iqbal, Asma; Ahmad, Syed Akhlaq
2018-05-01
The separation of the isopropanol-toluene azeotropic mixture using acetone as an entrainer has been simulated in the Aspen Plus software package using rigorous methods. Calculations of the vapor-liquid equilibrium for the binary system are done using the UNIQUAC-RK model, which gives good agreement with the experimental data reported in the literature. The effects of the reflux ratio (RR), distillate-to-feed molar ratio (D/F), feed stage, solvent feed stage, total number of stages, and solvent feed temperature on the product purities and recoveries are studied to obtain the optimum values that give the maximum purity and recovery of products. The configuration consists of 20 theoretical stages with an equimolar feed of the binary mixture. The desired separation has been achieved with a feed stage and an entrainer feed stage of 15 and 12, respectively, with reflux ratios of 2.5 and 4.0 and D/F ratios of 0.75 and 0.54, respectively, in the two columns. The simulation results thus obtained are useful to set up the optimal column configuration of the azeotropic distillation process.
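The reported distillate-to-feed ratios imply component recoveries through a simple overall mass balance, recovery = D·x_D/(F·z_F). A hypothetical check (the 0.9 distillate purity is an assumed illustration, not a result from the study):

```python
def recovery(d_over_f, x_distillate, z_feed):
    """Fraction of the light component in the feed that leaves in the
    distillate: (D * x_D) / (F * z_F)."""
    return d_over_f * x_distillate / z_feed
```

With an equimolar feed (z = 0.5), a D/F of 0.54 and an assumed distillate purity of 0.9, the light-component recovery would be about 0.97, i.e. nearly all of it leaves overhead.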
A Kinematically Consistent Two-Point Correlation Function
NASA Technical Reports Server (NTRS)
Ristorcelli, J. R.
1998-01-01
A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for the two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is set by the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allow an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the expression devised, an example is given of the asymptotic method by which functionals of the two-point correlation can be evaluated.
Phonon dispersions, band structures, and dielectric functions of BeO and BeS polymorphs
NASA Astrophysics Data System (ADS)
Wang, Ke-Long; Gao, Shang-Peng
2018-07-01
Structures, phonon dispersions, electronic structures, and dielectric functions of beryllium oxide (BeO) and beryllium sulfide (BeS) polymorphs are investigated by density functional theory and many-body perturbation theory. Phonon calculations indicate that both wurtzite (w-) and zincblende (zb-) structures are dynamically stable for BeO and BeS, whereas rocksalt (rs-) structures for both BeO and BeS have imaginary phonon frequencies and are thus dynamically unstable at zero pressure. Band structures for the four dynamically stable phases show that only w-BeO has a direct band gap. Both the one-shot G0W0 and quasiparticle self-consistent GW methods are used to correct band energies at high-symmetry k-points. The Bethe-Salpeter equation (BSE), which accounts for Coulomb-correlated electron-hole pairs, is employed to compute the macroscopic dielectric functions. It is shown that a BSE calculation employing a scissors operator derived from the self-consistent GW method can give dielectric functions agreeing very well with experimental measurements for w-BeO. Weak anisotropic characters can be observed for w-BeO and w-BeS. Both zb-BeS and w-BeS show high optical transition probabilities within a narrow ultraviolet energy range.
Henrichs, K
2011-03-01
Besides ongoing developments in the dosimetry of incorporated radionuclides, there are various efforts to improve the monitoring of workers for potential or actual intakes of radionuclides. The disillusioning experience with numerous intercomparison projects identified substantial differences between national regulations, concepts, applied programmes and methods, and dose assessment procedures. Measured activities were not directly comparable because of significant differences between measuring frequencies and methods; moreover, results of case studies for dose assessments revealed differences of orders of magnitude. Besides the general common interest in reliable monitoring results, at least the cross-border activities of workers (e.g. nuclear power plant services) require consistent approaches and comparable results. The International Standardization Organization therefore initiated projects to standardise programmes for the monitoring of workers, the requirements for measuring laboratories, and the processes for the quantitative evaluation of monitoring results in terms of assessed internal doses. The strength of the concept applied by the international working group lies in a unified approach defining the requirements, databases, and processes. This paper is intended to give a short introduction to the standardization project, followed by a more detailed description of the dose assessment standard, which will be published in the very near future.
Novel algorithm by low complexity filter on retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Rostampour, Samad
2011-10-01
This article presents a new method to detect blood vessels in digital retinal images. Retinal vessel segmentation is important for detecting side effects of diabetic disease, because diabetes can form new capillaries which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background, making a good distinction between vessels and background. Its complexity is very low, and extraneous image content is eliminated. The second phase, processing, uses a Bayesian method, a supervised classification technique that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to a retinopathy sample from outside the DRIVE database, and a perfect result was obtained.
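The Bayesian classification step can be sketched as a two-class Gaussian model over pixel intensities. This is a minimal illustration; the class statistics below are invented training values, not the paper's:

```python
import math

class IntensityClass:
    """Gaussian model of pixel intensity for one class (vessel or background)."""
    def __init__(self, samples, prior):
        n = len(samples)
        self.mean = sum(samples) / n
        self.var = sum((s - self.mean) ** 2 for s in samples) / n
        self.prior = prior

    def log_posterior(self, x):
        # log of (Gaussian likelihood x class prior); the shared constant
        # term is dropped since only the comparison matters
        return (-0.5 * math.log(self.var)
                - (x - self.mean) ** 2 / (2.0 * self.var)
                + math.log(self.prior))

def classify_pixel(x, vessel, background):
    # assign the pixel to whichever class has the larger posterior
    return "vessel" if vessel.log_posterior(x) > background.log_posterior(x) else "background"
```

In practice each class's mean, variance, and prior would be estimated from labeled training pixels (e.g. from the DRIVE ground truth), and every pixel of a new image would be classified this way.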
Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis
Ollenschläger, Malte; Roth, Nils; Klucken, Jochen
2017-01-01
Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In the literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from the literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis. PMID:28832511
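The double-integration step common to these schemes can be illustrated in one dimension. A minimal sketch (our own simplification, not one of the benchmarked methods), assuming gravity-compensated acceleration and zero velocity at both stride boundaries:

```python
def integrate_stride(acc, dt):
    """Integrate acceleration to velocity, remove the linear velocity drift
    implied by the zero-velocity assumption at both stride boundaries,
    then integrate again to position."""
    vel = [0.0]
    for a in acc:
        vel.append(vel[-1] + a * dt)
    # Enforce zero velocity at the end of the stride by subtracting a
    # linearly growing drift term (a common dedrifting heuristic).
    drift_per_sample = vel[-1] / (len(vel) - 1)
    vel = [v - drift_per_sample * i for i, v in enumerate(vel)]
    pos = [0.0]
    for v in vel[1:]:
        pos.append(pos[-1] + v * dt)
    return vel, pos
```

A constant sensor bias, for example, produces a linearly growing spurious velocity that this dedrifting step removes exactly; real pipelines additionally need orientation estimation to rotate the accelerometer readings into a global frame before integrating.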
Stochastic reconstructions of spectral functions: Application to lattice QCD
NASA Astrophysics Data System (ADS)
Ding, H.-T.; Kaczmarek, O.; Mukherjee, Swagato; Ohno, H.; Shu, H.-T.
2018-05-01
We present a detailed study of the application of two stochastic approaches, the stochastic optimization method (SOM) and stochastic analytical inference (SAI), to extract spectral functions from Euclidean correlation functions. SOM has the advantage that it does not require prior information. On the other hand, SAI is a more generalized method based on Bayesian inference. Under a mean-field approximation SAI reduces to the often-used maximum entropy method (MEM), and for a specific choice of the prior SAI becomes equivalent to SOM. To test the applicability of these two stochastic methods to lattice QCD, we first apply them to various reasonably chosen model correlation functions and present detailed comparisons of the reconstructed spectral functions obtained from SOM, SAI, and MEM. Next, we present similar studies for charmonia correlation functions obtained from lattice QCD computations using clover-improved Wilson fermions on large, fine, isotropic lattices at 0.75 and 1.5 Tc, Tc being the deconfinement transition temperature of a pure gluon plasma. We find that SAI and SOM give results consistent with MEM at these two temperatures.
Generalized Bondi-Sachs equations for characteristic formalism of numerical relativity
NASA Astrophysics Data System (ADS)
Cao, Zhoujian; He, Xiaokai
2013-11-01
The Cauchy formalism of numerical relativity has been successfully applied to simulate various dynamical spacetimes without any symmetry assumption. But discovering how to set a mathematically consistent and physically realistic boundary condition is still an open problem for the Cauchy formalism. In addition, numerical truncation error and the finite-region ambiguity affect the accuracy of gravitational waveform calculation. As to the finite-region ambiguity issue, the characteristic extraction method helps much, but it does not solve all of the above issues. Besides the above problems for the Cauchy formalism, computational efficiency is another concern. Although the characteristic formalism of numerical relativity suffers from the difficulty of caustics in the inner near zone, it has advantages with respect to all of the issues listed above. Cauchy-characteristic matching (CCM) is a possible way to take advantage of the characteristic formalism regarding these issues and treat the inner caustics at the same time. However, CCM has difficulty treating the gauge difference between the Cauchy part and the characteristic part. We propose generalized Bondi-Sachs equations for the characteristic formalism for the purpose of Cauchy-characteristic matching. Our proposal yields a possible common numerical evolution scheme for both the Cauchy part and the characteristic part. In addition, our generalized Bondi-Sachs equations have one adjustable gauge freedom which can be used to match the gauge used in the Cauchy part. These equations can then make the Cauchy part and the characteristic part share a consistent gauge condition. So our proposal gives a possible new starting point for Cauchy-characteristic matching.
Megone, Christopher; Wilman, Eleanor; Oliver, Sandy; Duley, Lelia; Gyte, Gill; Wright, Judy
2016-09-09
Conducting clinical trials with pre-term or sick infants is important if care for this population is to be underpinned by sound evidence. Yet, approaching the parents of these infants at such a difficult time raises challenges to obtaining valid informed consent for such research. In this study, we asked, What light does the analytical literature cast on an ethically defensible approach to obtaining informed consent in perinatal clinical trials? In a systematic search, we identified 30 studies. We began our analysis by applying philosophical frameworks, which were then refined as concepts emerged from the analytical studies, to present a coherent picture of a broad literature. Between them, the studies addressed four themes. The first three were the ethical basis for parental informed consent for neonatal and/or perinatal research, the validity of parental consent in this context, and the range of possible options in methods for gaining consent. The last was the issue of risk and the possibility of a double-standard or asymmetry in the current approaches to the requirement for consent for research and consent for clinical treatment. In addressing these issues, the analysed studies showed that, whilst there are a variety of possible defences for seeking parental 'consent' to neonatal and/or perinatal clinical trials, these are all consistent with the strongly and widely held view that it is important that parents do give (or decline) consent for such research. So far as the method of obtaining consent is concerned, none of the existing consent processes reviewed by the research is satisfactory, and there are philosophical reasons for supposing that at least some parents will fail to give valid consent in a neonatal context. Furthermore, in giving parental 'consent' in a perinatal context, parents are authorising infant participation, not giving 'proxy consent'. 
Finally, there are reasons for giving weight to both parental 'consent' and the infant's best interests in both research and clinical treatment. However, there are also reasons to treat these factors differently in the two contexts, and this may be partly due to the differing relevance of risk in each case. A significant gap is the lack of any detailed discussion of a process of emergency and/or urgent 'assent', in which parents assent or refuse their baby's participation as best they can during the emergency and later give full consent to continuing participation and follow-up.
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model-derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of the top-of-atmosphere (TOA) radiative imbalance and the global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression-based methods using model outputs and shown to produce consistent forcing estimates, giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve temperature goals such as those set in the Paris Agreement.
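The kernel idea can be illustrated with a toy discrete convolution: temperature is modeled as a kernel-weighted sum of past forcing, and the forcing is recovered by forward substitution. The kernel below is invented for illustration; the actual TKM kernel is derived from climate-model output:

```python
def temperature_response(kernel, forcing):
    # T[i] = sum_{j <= i} K[i-j] * F[j]: temperature anomaly as a
    # kernel-weighted sum of past radiative forcing
    return [sum(kernel[i - j] * forcing[j] for j in range(i + 1))
            for i in range(len(forcing))]

def infer_forcing(kernel, temperature):
    """Invert the lower-triangular convolution by forward substitution;
    requires kernel[0] != 0."""
    forcing = []
    for i, t in enumerate(temperature):
        tail = sum(kernel[i - j] * forcing[j] for j in range(i))
        forcing.append((t - tail) / kernel[0])
    return forcing
```

A round trip (convolve a forcing history, then invert) recovers the input exactly in this noise-free setting; with real observations, noise amplification makes the inversion the hard part.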
Stewardship and accountability.
Osborne, Karen; Osborne, Robert
2005-01-01
In March 2003, The Chronicle of Philanthropy reported a sea change in fund development. After 25 years of consistent research on donor motivation, a study came out with new information. Donors ranked on a scale of 1-10 what mattered most to them when considering giving. Until now, most donors had reported that a belief in the mission of the organization and the desire to have an impact on the people served was their number one motivation. Certainly, gratitude for life-saving or life-changing service made that belief in mission and impact that much more powerful as a motivation. Improving their community was consistently the number two reason for giving. But, according to the 2003 survey, something changed: six of the 10 reasons involve stewardship and accountability.
Neural responses to taxation and voluntary giving reveal motives for charitable donations.
Harbaugh, William T; Mayr, Ulrich; Burghart, Daniel R
2007-06-15
Civil societies function because people pay taxes and make charitable contributions to provide public goods. One possible motive for charitable contributions, called "pure altruism," is satisfied by increases in the public good no matter the source or intent. Another possible motive, "warm glow," is only fulfilled by an individual's own voluntary donations. Consistent with pure altruism, we find that even mandatory, tax-like transfers to a charity elicit neural activity in areas linked to reward processing. Moreover, neural responses to the charity's financial gains predict voluntary giving. However, consistent with warm glow, neural activity further increases when people make transfers voluntarily. Both pure altruism and warm-glow motives appear to determine the hedonic consequences of financial transfers to the public good.
Lips, Sebastian; Frontana-Uribe, Bernardo Antonio; Dörr, Maurice; Schollmeyer, Dieter; Franke, Robert; Waldvogel, Siegfried R
2018-04-20
Heterobiaryls consisting of a phenol and a benzofuran motif are of significant importance for pharmaceutical applications. An attractive sustainable, metal- and reagent-free, electrosynthetic, and highly efficient method, that allows access to (2-hydroxyphenyl)benzofurans is presented. Upon the electrochemical dehydrogenative C-C cross-coupling reaction, a metathesis of the benzo moiety at the benzofuran occurs. This gives rise to a substitution pattern at the hydroxyphenyl moiety which would not be compatible by a direct coupling process. The single-step protocol is easy to conduct in an undivided electrolysis cell, therefore scalable, and inherently safe. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Vícha, Jakub; Trávníček, Petr; Nosek, Dalibor; Ebr, Jan
2015-09-01
We consider a hypothetical observatory of ultra-high energy cosmic rays consisting of two surface detector arrays that independently measure the electromagnetic and muon signals induced by air showers. Using the constant intensity cut method, sets of events ordered according to each of the two signal sizes are compared, giving the number of matched events. Based on its dependence on the zenith angle, a parameter sensitive to the dispersion of the distribution of the logarithmic mass of cosmic rays is introduced. The results obtained using two post-LHC models of hadronic interactions are very similar and indicate a weak dependence on the details of these interactions.
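The comparison of the two independently ordered event sets can be sketched as follows. This is only a schematic of the matching step; the names and the rank-based selection are our simplification of the constant intensity cut:

```python
def matched_fraction(events, n_select):
    """events: list of (event_id, em_signal, mu_signal).  Select the
    n_select largest events by each signal independently (as a constant
    intensity cut does above a fixed rank) and count the overlap."""
    top_em = {e[0] for e in sorted(events, key=lambda e: -e[1])[:n_select]}
    top_mu = {e[0] for e in sorted(events, key=lambda e: -e[2])[:n_select]}
    return len(top_em & top_mu) / n_select
```

Perfectly correlated electromagnetic and muon signals give a matched fraction of 1; mass-induced dispersion decorrelates the two orderings and lowers the fraction, which is what makes the quantity mass-sensitive.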
Structure of rare-earth chalcogenide glasses by neutron and x-ray diffraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drewitt, James W. E.; Salmon, Philip S.; Zeidler, Anita
The method of neutron diffraction with isomorphic substitution was used to measure the structure of the rare-earth chalcogenide glasses (R2X3)0.07(Ga2X3)0.33(GeX2)0.60 with R = La or Ce and X = S or Se. X-ray diffraction was also used to measure the structure of the sulphide glass. The results are consistent with networks that are built from GeX4 and GaX4 tetrahedra, and give R-S and R-Se coordination numbers of 8.0(2) and 8.5(4), respectively. The minimum nearest-neighbour R-R distance associated with rare-earth clustering is discussed.
Contact Whiskers for Millimeter Wave Diodes
NASA Technical Reports Server (NTRS)
Kerr, A. R.; Grange, J. A.; Lichtenberger, J. A.
1978-01-01
Several techniques are investigated for making short conical tips on wires (whiskers) used for contacting millimeter-wave Schottky diodes. One procedure, using a phosphoric and chromic acid etching solution (PCE), is found to give good results on 12 micron phosphor-bronze wires. Full cone angles of 60-80 degrees are consistently obtained, compared with the 15-20 degree angles obtained with the widely used sodium hydroxide etch. Methods are also described for cleaning, increasing the tip diameter (i.e., blunting), gold plating, and testing the contact resistance of the whiskers. The effects of the whisker tip shape on the electrical resistance, inductance, and capacitance of the whiskers are studied, and examples are given for typical sets of parameters.
NASA Astrophysics Data System (ADS)
Loiseau, Sacha; Malbet, Fabien; Yu, Jeffrey W.
1995-06-01
We present a method for performing global astrometry with the proposed Orbiting Stellar Interferometer (OSI). Because it is dedicated to wide-angle astrometry, OSI has the intrinsic capability to achieve global astrometry, even though, unlike HIPPARCOS, it does not directly measure relative angles between pairs of stars. In this paper, a time-independent model is shown, leading to a coherent solution for the positions of reference stars on the whole sky. With an initial measurement accuracy of 10 micro-arcseconds, corresponding to an accuracy of 340 picometers in the knowledge of the delay-line position of the observing interferometer, the consistent least-squares solution yields the astrometric parameters with an accuracy of around 2-3 micro-arcseconds.
Structure of rare-earth chalcogenide glasses by neutron and x-ray diffraction
Drewitt, James W. E.; Salmon, Philip S.; Zeidler, Anita; ...
2017-04-28
The method of neutron diffraction with isomorphic substitution was used to measure the structure of the rare-earth chalcogenide glasses (R2X3)0.07(Ga2X3)0.33(GeX2)0.60 with R = La or Ce and X = S or Se. X-ray diffraction was also used to measure the structure of the sulphide glass. The results are consistent with networks that are built from GeX4 and GaX4 tetrahedra, and give R-S and R-Se coordination numbers of 8.0(2) and 8.5(4), respectively. The minimum nearest-neighbour R-R distance associated with rare-earth clustering is discussed.
Detection of pure inverse spin-Hall effect induced by spin pumping at various excitation
NASA Astrophysics Data System (ADS)
Inoue, H. Y.; Harii, K.; Ando, K.; Sasage, K.; Saitoh, E.
2007-10-01
Electric-field generation due to the inverse spin-Hall effect (ISHE) driven by spin pumping was detected and separated experimentally from the extrinsic magnetogalvanic effects in a Ni81Fe19/Pt film. By applying a sample-cavity configuration in which the extrinsic effects are suppressed, the spin pumping using ferromagnetic resonance gives rise to a symmetric spectral shape in the electromotive force spectrum, indicating that the motive force is due entirely to ISHE. This method allows the quantitative analysis of the ISHE and the spin-pumping effect. The microwave-power dependence of the ISHE amplitude is consistent with the prediction of a direct current-spin-pumping scenario.
Abuali, M M; Katariwala, R; LaBombardi, V J
2012-05-01
The agar proportion method (APM) for determining Mycobacterium tuberculosis susceptibilities is a qualitative method that requires 21 days to produce results. The Sensititre method allows for a quantitative assessment. Our objective was to compare the accuracy, time to results, and ease of use of the Sensititre method to the APM. 7H10 plates in the APM and 96-well microtiter dry MYCOTB panels containing 12 antibiotics at full dilution ranges in the Sensititre method were inoculated with M. tuberculosis and read for colony growth. Thirty-seven clinical isolates were tested using both methods, and 26 challenge strains with blinded susceptibilities were tested using the Sensititre method only. The Sensititre method displayed 99.3% concordance with the APM. The APM provided reliable results on day 21, whereas the Sensititre method displayed consistent results by day 10. The Sensititre method provides a more rapid, quantitative, and efficient method of testing both first- and second-line drugs when compared to the gold standard. It will give clinicians a sense of the degree of susceptibility, thus guiding the therapeutic decision-making process. Furthermore, the microwell plate format without the need for instrumentation will allow its use in resource-poor settings.
Project-Based Learning in Programmable Logic Controller
NASA Astrophysics Data System (ADS)
Seke, F. R.; Sumilat, J. M.; Kembuan, D. R. E.; Kewas, J. C.; Muchtar, H.; Ibrahim, N.
2018-02-01
Project-based learning is a learning method that uses project activities as the core of learning and requires student creativity in completing the project. The aim of this study is to investigate the influence of project-based learning methods on students with a high level of creativity in learning the Programmable Logic Controller (PLC). This study used experimental methods with an experimental class and a control class consisting of 24 students: 12 students of high creativity and 12 students of low creativity. The application of project-based learning methods to the PLC course, combined with the level of student creativity, enables the students to be directly involved in the work of the PLC project, which gives them experience in utilizing PLCs for the benefit of industry. It is therefore concluded that project-based learning is a superior learning method to apply to highly creative students in PLC courses. This method can be used to improve student learning outcomes and student creativity, and to educate prospective teachers to become educators who are reliable in both theory and practice and who will be tasked with preparing qualified human resources to meet future industry needs.
Image recognition and consistency of response
NASA Astrophysics Data System (ADS)
Haygood, Tamara M.; Ryan, John; Liu, Qing Mary A.; Bassett, Roland; Brennan, Patrick C.
2012-02-01
Purpose: To investigate the connection between conscious recognition of an image previously encountered in an experimental setting and consistency of response to the experimental question.
Materials and Methods: Twenty-four radiologists viewed 40 frontal chest radiographs and gave their opinion as to the position of a central venous catheter. One to three days later they again viewed 40 frontal chest radiographs and again gave their opinion as to the position of the central venous catheter. Half of the radiographs in the second set were repeated images from the first set and half were new. The radiologists were asked, for each image, whether it had been included in the first set. For this study, we evaluated only the 20 repeated images. We used the Kruskal-Wallis test and Fisher's exact test to determine the relationship between conscious recognition of a previously interpreted image and consistency in interpretation of the image.
Results: There was no significant correlation between recognition of the image and consistency in response regarding the position of the central venous catheter. In fact, there was a trend in the opposite direction, with radiologists being slightly more likely to give a consistent response with respect to images they did not recognize than with respect to those they did recognize.
Conclusion: Radiologists' recognition of previously-encountered images in an observer-performance study does not noticeably color their interpretation on the second encounter.
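Fisher's exact test, used above for the recognition-consistency table, can be computed directly from hypergeometric probabilities. A minimal pure-Python sketch (the 2x2 counts below are hypothetical, not the study's data):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are no
    more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def p_table(x):  # hypergeometric probability of cell (0, 0) = x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: recognized vs not recognized, crossed with
# consistent vs inconsistent responses.
p = fisher_exact_two_sided(8, 4, 6, 6)
```

A perfectly associated table (e.g. 10/0 vs 0/10) gives the smallest possible two-sided p-value of 2/C(20, 10).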
In situ electronic probing of semiconducting nanowires in an electron microscope.
Fauske, V T; Erlbeck, M B; Huh, J; Kim, D C; Munshi, A M; Dheeraj, D L; Weman, H; Fimland, B O; Van Helvoort, A T J
2016-05-01
For the development of electronic nanoscale structures, feedback on their electronic properties is crucial, but challenging. Here, we present a comparison of various in situ methods for electronically probing single, p-doped GaAs nanowires inside a scanning electron microscope. The methods used include (i) directly probing individual as-grown nanowires with a sharp nano-manipulator, (ii) contacting dispersed nanowires with two metal contacts and (iii) contacting dispersed nanowires with four metal contacts. For the last two cases, we compare the results obtained using conventional ex situ lithography contacting techniques and by in situ, direct-write electron beam induced deposition of a metal (Pt). The comparison shows that two-probe measurements give consistent results even with contacts made by electron beam induced deposition, but that for four-probe measurements, stray deposition can be a problem for shorter nanowires. This comparative study demonstrates that the preferred in situ method depends on the required throughput and reliability. © 2015 The Authors. Journal of Microscopy © 2015 Royal Microscopical Society.
Three-cluster dynamics within an ab initio framework
Quaglioni, Sofia; Romero-Redondo, Carolina; Navratil, Petr
2013-09-26
In this study, we introduce a fully antisymmetrized treatment of three-cluster dynamics within the ab initio framework of the no-core shell model/resonating-group method. Energy-independent nonlocal interactions among the three nuclear fragments are obtained from realistic nucleon-nucleon interactions and consistent ab initio many-body wave functions of the clusters. The three-cluster Schrödinger equation is solved with bound-state boundary conditions by means of the hyperspherical-harmonic method on a Lagrange mesh. We discuss the formalism in detail and give algebraic expressions for systems of two single nucleons plus a nucleus. Using a soft similarity-renormalization-group evolved chiral nucleon-nucleon potential, we apply the method to a 4He+n+n description of 6He and compare the results to experiment and to a six-body diagonalization of the Hamiltonian performed within the harmonic-oscillator expansions of the no-core shell model. Differences between the two calculations provide a measure of core (4He) polarization effects.
NASA Astrophysics Data System (ADS)
Liu, Jianfei; Wei, Ying; Zeng, Xiangye; Lu, Jia; Zhang, Shuangxi; Wang, Mengjun
2018-03-01
A joint timing and frequency synchronization method is proposed for coherent optical orthogonal frequency-division multiplexing (CO-OFDM) systems in this paper. The timing offset (TO), the integer frequency offset (FO) and the fractional FO can be estimated with only one training symbol, which consists of two linear frequency modulation (LFM) signals with opposite chirp rates. By detecting the peaks of the LFM signals after the Radon-Wigner transform (RWT), the TO and the integer FO can be estimated at the same time; the fractional FO can then be acquired through the self-correlation characteristic of the same training symbol. Simulation results show that the proposed method gives a more accurate TO estimation than existing methods, especially under poor OSNR conditions. For the FO estimation, both the fractional and the integer FO can be estimated from the proposed training symbol with no extra overhead, and a more accurate estimation with a large FO estimation range of [-5 GHz, 5 GHz] can be acquired.
A solution to neural field equations by a recurrent neural network method
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2012-09-01
Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations (ODEs). In this article the recurrent neural network approach is used to solve this system of ODEs. It consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers and give it a speed advantage over traditional methods.
Time-resolved flowmetering of gas-liquid two-phase pipe flow by ultrasound pulse Doppler method
NASA Astrophysics Data System (ADS)
Murai, Yuichi; Tasaka, Yuji; Takeda, Yasushi
2012-03-01
The ultrasound pulse Doppler method is applied to componential volumetric flow rate measurement in multiphase pipe flow consisting of gas and liquid phases. The flowmetering is realized by integrating the measured velocity profile over the liquid-phase portion of the pipe cross section. The spatio-temporal position of the interface is also detected with the same ultrasound pulse, which further gives the cross-sectional void fraction. A series of experimental demonstrations was carried out by applying this principle of measurement to air-water two-phase flow in a horizontal tube 40 mm in diameter, with void fractions ranging from 0 to 90% at superficial velocities from 0 to 15 m/s. The measurement accuracy is verified with a volumetric-type flowmeter. We also analyze the accuracy of the area integration of the liquid velocity distribution for many different patterns of ultrasound measurement lines assigned on the cross section of the tube. The present method also acts as a sensor of flow-rate pulsation, which fluctuates with complex gas-liquid interface behavior.
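The integration step can be sketched for the simplified fully liquid, axisymmetric case, where Q = ∫₀ᴿ v(r)·2πr dr (the real method integrates over multiple ultrasound lines and the instantaneous liquid region):

```python
import math

def flow_rate_axisymmetric(velocity, radius, n=2000):
    """Volumetric flow rate Q = integral of v(r) * 2*pi*r dr from 0 to R,
    evaluated with the trapezoidal rule -- a stand-in for integrating a
    measured velocity profile over the pipe cross section."""
    dr = radius / n
    total = 0.0
    for i in range(n):
        r0, r1 = i * dr, (i + 1) * dr
        f0 = velocity(r0) * 2.0 * math.pi * r0
        f1 = velocity(r1) * 2.0 * math.pi * r1
        total += 0.5 * (f0 + f1) * dr
    return total

# Check against laminar (Poiseuille) flow, where Q = v_max * pi * R^2 / 2.
R, v_max = 0.020, 1.5   # 40 mm diameter pipe, centreline velocity in m/s
q = flow_rate_axisymmetric(lambda r: v_max * (1.0 - (r / R) ** 2), R)
```

For a measured profile, `velocity` would interpolate the Doppler samples along the measurement line instead of a closed-form function.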
A Comparison of Several Methods of Measuring Ignition Lag in a Compression-ignition Engine
NASA Technical Reports Server (NTRS)
Spanogle, J A
1934-01-01
The ignition lag of a fuel oil in the combustion chamber of a high-speed compression-ignition engine was measured by three different methods. The start of injection of the fuel as observed with a Stroborama was taken as the start of the period of ignition lag in all cases. The end of the period of ignition lag was determined by observation of the appearance of incandescence in the combustion chamber, by inspection of a pressure-time card for evidence of pressure rise, and by analysis of the indicator card for evidence of the combustion of a small but definite quantity of fuel. A comparison of the values for ignition lag obtained by these three methods indicates that the appearance of incandescence is later than other evidences of the start of combustion, that visual inspection of a pressure-time diagram gives consistent and usable values with a minimum requirement of time and/or apparatus, and that analysis of the indicator card is not worthwhile for ignition lag alone.
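The pressure-card criterion can be sketched as a threshold on the pressure slope; the trace, injection time, and threshold below are synthetic illustrations, not NACA data:

```python
def ignition_lag(time, pressure, t_injection, slope_threshold):
    """Ignition lag = interval from start of injection to the first sample
    whose pressure slope dp/dt exceeds slope_threshold, i.e. the first
    evidence of combustion pressure rise on the indicator card.
    Returns None if no such rise is found."""
    for i in range(1, len(time)):
        if time[i] <= t_injection:
            continue
        dpdt = (pressure[i] - pressure[i - 1]) / (time[i] - time[i - 1])
        if dpdt > slope_threshold:
            return time[i] - t_injection
    return None

# Synthetic pressure-time card (time in ms, pressure in arbitrary units):
# a slow compression ramp with a sharp combustion rise starting at
# t = 3.0 ms, i.e. 2.0 ms after injection at t = 1.0 ms.
dt = 0.01
time = [i * dt for i in range(600)]
pressure = [30.0 + 2.0 * t + (80.0 * (t - 3.0) if t > 3.0 else 0.0)
            for t in time]
lag = ignition_lag(time, pressure, t_injection=1.0, slope_threshold=20.0)
```

The detected lag lands one sample after the true rise, which is the resolution limit of any such inspection of a sampled card.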
Day, J G; Benson, E E; Harding, K; Knowles, B; Idowu, M; Bremner, D; Santos, L; Santos, F; Friedl, T; Lorenz, M; Lukesova, A; Elster, J; Lukavsky, J; Herdman, M; Rippka, R; Hall, T
2005-01-01
Microalgae are one of the most biologically important elements of worldwide ecology and could be the source of diverse new products and medicines. COBRA (The COnservation of a vital european scientific and Biotechnological Resource: microAlgae and cyanobacteria) is the acronym for a European Union, RTD Infrastructures project (Contract No. QLRI-CT-2001-01645). This project is in the process of developing a European Biological Resource Centre based on existing algal culture collections. The COBRA project's central aim is to apply cryopreservation methodologies to microalgae and cyanobacteria, organisms that, to date, have proved difficult to conserve using cryogenic methods. In addition, molecular and biochemical stability tests have been developed to ensure that the equivalent strains of microorganisms supplied by the culture collections give high quality and consistent performance. Fundamental and applied knowledge of stress physiology form an essential component of the project and this is being employed to assist the optimisation of methods for preserving a wide range of algal diversity. COBRA's "Resource Centre" utilises Information Technologies (IT) and Knowledge Management practices to assist project coordination, management and information dissemination and facilitate the generation of new knowledge pertaining to algal conservation. This review of the COBRA project will give a summary of current methodologies for cryopreservation of microalgae and procedures adopted within the COBRA project to enhance preservation techniques for this diverse group of organisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, W.
2012-07-01
Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question if the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)
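The textbook expectation that a finer mesh is more accurate can be checked on a homogeneous 1-D model problem, where no homogenization error is present; this is a stand-in illustration, not a CANDU lattice calculation, and it shows why the coarse-mesh advantage above must come from the homogenized cross sections rather than from the FDM itself:

```python
import numpy as np

def fdm_solve(n):
    """Solve -u'' = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0 on an
    n-interval central-difference mesh and return the maximum nodal error
    against the exact solution u(x) = sin(pi x)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]          # interior nodes
    A = (np.diag(2.0 * np.ones(n - 1))
         + np.diag(-np.ones(n - 2), 1)
         + np.diag(-np.ones(n - 2), -1)) / h ** 2   # -u'' stencil
    u = np.linalg.solve(A, np.pi ** 2 * np.sin(np.pi * x))
    return float(np.max(np.abs(u - np.sin(np.pi * x))))

err_coarse = fdm_solve(4)    # coarse mesh
err_fine = fdm_solve(64)     # fine mesh: error ~ (4/64)^2 of the coarse one
```

With consistent (non-homogenized) data the error falls as h², so the fine mesh always wins, as the second-order convergence ratio of roughly (64/4)² confirms.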
Microwave Analysis with Monte Carlo Methods for ECH Transmission Lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, Michael C.; Lau, Cornwall H.; Hanson, Gregory R.
2018-03-08
A new code framework, MORAMC, is presented which models transmission line (TL) systems consisting of overmoded circular waveguide and other components, including miter bends and transmission line gaps. The transmission line is modeled as a set of mode converters in series, where each component is composed of one or more converters. The parametrization of each mode converter can account for the fabrication tolerances of physically realizable components. These tolerances, as well as the precision to which these TL systems can be installed and aligned, give a practical limit to which the uncertainty of the microwave performance of the system can be calculated. Because of this, Monte Carlo methods are a natural fit and are employed to calculate the probability that a given TL can deliver a required power and mode purity. Several examples are given to demonstrate the usefulness of MORAMC in optimizing TL systems.
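The Monte Carlo idea can be sketched with a hypothetical loss model (MORAMC's actual mode-converter model is not reproduced here): draw each component's tilt and offset within its tolerance, accumulate the line loss, and estimate the probability the line meets a budget:

```python
import random

random.seed(1)

def component_loss_db(tilt_mrad, offset_mm):
    """Hypothetical per-component loss model (illustrative only): a smooth
    penalty growing with alignment tilt and offset."""
    return 0.02 + 0.5 * tilt_mrad ** 2 + 0.3 * offset_mm ** 2

def sample_line_loss(n_components=8, tilt_tol=0.2, offset_tol=0.1):
    """One Monte Carlo realization of the line: every component's tilt and
    offset is drawn uniformly within its fabrication/alignment tolerance."""
    return sum(component_loss_db(random.uniform(-tilt_tol, tilt_tol),
                                 random.uniform(-offset_tol, offset_tol))
               for _ in range(n_components))

# Probability that a line built within tolerances meets a 0.25 dB budget.
losses = [sample_line_loss() for _ in range(20000)]
p_meet_spec = sum(loss <= 0.25 for loss in losses) / len(losses)
```

The same sampling loop generalizes directly to mode purity or any other performance metric of the assembled line.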
NASA Astrophysics Data System (ADS)
Fachrurrozi, Muhammad; Saparudin; Erwin
2017-04-01
The real-time monitoring and early detection system that measures the waste quality standard in the Musi River, Palembang, Indonesia, is a system for determining air and water pollution levels. The system was designed to create an integrated monitoring system and to provide readable, real-time information. It measures the acidity and turbidity of water polluted by industrial waste, and shows and provides conditional data integrated in one system. The system consists of inputting the data, processing them, and giving output based on the processed data. Turbidity, substance and pH sensors are used as detectors that produce an analog direct-current (DC) voltage. The early detection system works by determining threshold values for the ammonia, acidity and turbidity levels of the water in the Musi River. The results are then classified into pollution-level groups by the Support Vector Machine classification method.
Signal classification using global dynamical models, Part II: SONAR data analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kremliovsky, M.; Kadtke, J.
1996-06-01
In Part I of this paper, we described a numerical method for nonlinear signal detection and classification which made use of techniques borrowed from dynamical systems theory. Here in Part II of the paper, we will describe an example of data analysis using this method, for data consisting of open ocean acoustic (SONAR) recordings of marine mammal transients, supplied from NUWC sources. The purpose here is two-fold: first to give a more operational description of the technique and provide rules-of-thumb for parameter choices; and second to discuss some new issues raised by the analysis of non-ideal (real-world) data sets. The particular data set considered here is quite non-stationary, relatively noisy, is not clearly localized in the background, and as such provides a difficult challenge for most detection/classification schemes. © 1996 American Institute of Physics.
New method to evaluate the 7Li(p, n)7Be reaction near threshold
NASA Astrophysics Data System (ADS)
Herrera, María S.; Moreno, Gustavo A.; Kreiner, Andrés J.
2015-04-01
In this work a complete description of the 7Li(p, n)7Be reaction near threshold is given using center-of-mass and relative coordinates. It is shown that this standard approach, not used before in this context, leads to a simple mathematical representation which gives easy access to all relevant quantities in the reaction and allows a precise numerical implementation. It also allows proton beam-energy spread effects to be included in a simple way. The method, implemented as a C++ code, was validated against both numerical and experimental data, finding good agreement. This tool is also used here to analyze scattered published measurements such as (p, n) cross sections and differential and total neutron yields for thick targets. Using these data we derive a consistent set of parameters to evaluate neutron production near threshold. The sensitivity of the results to data uncertainty and the possibility of incorporating new measurements are also discussed.
Eigenspace-based fuzzy c-means for sensing trending topics in Twitter
NASA Astrophysics Data System (ADS)
Muliawati, T.; Murfi, H.
2017-07-01
As information and communication technology has developed, information needs can be met through social media such as Twitter. The enormous number of internet users has triggered fast and large data flows, making manual analysis difficult or even impossible. Automated methods for data analysis are needed, one of which is topic detection and tracking. An alternative method to latent Dirichlet allocation (LDA) is a soft clustering approach using Fuzzy C-Means (FCM). FCM meets the assumption that a document may consist of several topics. However, FCM works well on low-dimensional data but fails on high-dimensional data. Therefore, we propose an approach in which FCM works on low-dimensional data obtained by reducing the original data with singular value decomposition (SVD). Our simulations show that this approach gives better accuracy in terms of topic recall than LDA for sensing trending topics in Twitter about an event.
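A minimal sketch of the proposed pipeline, truncated SVD followed by fuzzy c-means, on synthetic two-topic data (the FCM implementation, initialization, and data are illustrative stand-ins, not the paper's Twitter corpus):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_c_means(X, c, m=2.0, iters=50):
    """Minimal fuzzy c-means: alternate membership and centre updates.
    Returns the (n_samples, c) membership matrix and the centres.
    Centres start at evenly spaced samples for determinism (a simple
    choice for this sketch)."""
    centres = X[np.linspace(0, len(X) - 1, c).astype(int)]
    U = None
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # standard FCM memberships
        w = U ** m
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
    return U, centres

# Toy "documents": two topics embedded in 100 dimensions, reduced to 2
# dimensions by truncated SVD before clustering -- the low-rank step that
# lets FCM work where raw high-dimensional FCM struggles.
docs = np.vstack([rng.normal(+1.0, 0.3, (40, 100)),
                  rng.normal(-1.0, 0.3, (40, 100))])
Uc, s, _ = np.linalg.svd(docs - docs.mean(axis=0), full_matrices=False)
low_dim = Uc[:, :2] * s[:2]                        # 2-D SVD embedding
memberships, _ = fuzzy_c_means(low_dim, c=2)
labels = memberships.argmax(axis=1)
```

Hardening the fuzzy memberships with `argmax` recovers the two planted topics exactly on this well-separated toy data.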
NASA Technical Reports Server (NTRS)
Canfield, R. C.; Ricchiazzi, P. J.
1980-01-01
An approximate probabilistic radiative transfer equation and the statistical equilibrium equations are simultaneously solved for a model hydrogen atom consisting of three bound levels and an ionization continuum. The transfer equation for L-alpha, L-beta, H-alpha, and the Lyman continuum is explicitly solved assuming complete redistribution. The accuracy of this approach is tested by comparing source functions and radiative loss rates to values obtained with a method that solves the exact transfer equation. Two recent model solar-flare chromospheres are used for this test. It is shown that for the test atmospheres the probabilistic method gives values of the radiative loss rate that are characteristically good to a factor of 2. The advantage of this probabilistic approach is that it retains a description of the dominant physical processes of radiative transfer in the complete redistribution case, yet it achieves a major reduction in computational requirements.
Computation of the dipole moments of proteins.
Antosiewicz, J
1995-10-01
A simple and computationally feasible procedure for the calculation of net charges and dipole moments of proteins at arbitrary pH and salt conditions is described. The method is intended to provide data that may be compared to the results of transient electric dichroism experiments on protein solutions. The procedure consists of three major steps: (i) calculation of self energies and interaction energies for ionizable groups in the protein by using the finite-difference Poisson-Boltzmann method, (ii) determination of the position of the center of diffusion (to which the calculated dipole moment refers) and the extinction coefficient tensor for the protein, and (iii) generation of the equilibrium distribution of protonation states of the protein by a Monte Carlo procedure, from which mean and root-mean-square dipole moments and optical anisotropies are calculated. The procedure is applied to 12 proteins. It is shown that it gives hydrodynamic and electrical parameters for proteins in good agreement with experimental data.
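Step (iii) can be sketched for a toy case: with only a few ionizable groups the Boltzmann average over protonation states can be enumerated exhaustively (real proteins need the Monte Carlo sampling described above). The pKa values and the interaction energy below are hypothetical:

```python
import math
from itertools import product

LN10 = math.log(10.0)   # energies expressed in units of kT

def mean_net_charge(pkas, interactions, ph):
    """Boltzmann average of the net charge over all protonation states of
    acidic groups (charge 0 protonated, -1 deprotonated).
    interactions[(i, j)] is a toy unfavourable energy (in kT) paid when
    groups i and j are both deprotonated."""
    Z = 0.0
    q_sum = 0.0
    for state in product((0, 1), repeat=len(pkas)):   # 1 = deprotonated
        e = sum(LN10 * (pkas[i] - ph) for i, s in enumerate(state) if s)
        for (i, j), w in interactions.items():
            if state[i] and state[j]:
                e += w
        boltz = math.exp(-e)
        Z += boltz
        q_sum += boltz * (-sum(state))
    return q_sum / Z

# Two carboxylate-like groups (hypothetical pKa values) with a 1 kT
# electrostatic penalty when both are charged.
q_low = mean_net_charge([4.0, 4.5], {(0, 1): 1.0}, ph=2.0)
q_high = mean_net_charge([4.0, 4.5], {(0, 1): 1.0}, ph=8.0)
```

At pH 2 both groups stay essentially protonated (net charge near 0); at pH 8 both titrate (net charge near -2), which is the pH dependence the full procedure captures for whole proteins.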
Aspects of warped AdS3/CFT2 correspondence
NASA Astrophysics Data System (ADS)
Chen, Bin; Zhang, Jia-Ju; Zhang, Jian-Dong; Zhong, De-Liang
2013-04-01
In this paper we apply the thermodynamics method to investigate the holographic pictures for the BTZ black hole and the spacelike and null warped black holes in three-dimensional topologically massive gravity (TMG) and new massive gravity (NMG). Even though there are higher derivative terms in these theories, the thermodynamics method is still effective. It gives results consistent with the ones obtained by using asymptotic symmetry group (ASG) analysis. In doing the ASG analysis we develop a brute-force realization of the Barnich-Brandt-Compere formalism with Mathematica code, which also allows us to calculate the masses and the angular momenta of the black holes. In particular, we propose the warped AdS3/CFT2 correspondence in the new massive gravity, which states that quantum gravity in the warped spacetime could be holographically dual to a two-dimensional CFT with central charges c_R = c_L = 24/(Gmβ²√(2(21 - 4β²))).
Stretch or contraction induced inversion of rectification in diblock molecular junctions
NASA Astrophysics Data System (ADS)
Zhang, Guang-Ping; Hu, Gui-Chao; Song, Yang; Xie, Zhen; Wang, Chuan-Kui
2013-09-01
Based on ab initio theory and the nonequilibrium Green's function method, the effect of stretch or contraction on rectification in diblock co-oligomer molecular diodes is investigated theoretically. Interestingly, an inversion of the rectifying direction induced by stretching or contracting the molecular junctions, which is closely related to the number of pyrimidinyl-phenyl units, is proposed. Analysis of the molecular projected self-consistent Hamiltonian and of the evolution of the frontier molecular orbitals and transmission coefficients under external biases gives insight into the observed results. It reveals that the asymmetric molecular level shift and the asymmetric evolution of orbital wave functions under bias are competing mechanisms for rectification. The stretching- or contracting-induced inversion of the rectification is due to a change in the dominant mechanism. This work suggests a feasible technique for manipulating the rectification performance of molecular diodes by mechanical control.
NASA Astrophysics Data System (ADS)
Giraudeau, A.; Pierron, F.
2010-06-01
The paper presents an experimental application of a method leading to the identification of the elastic and damping material properties of isotropic vibrating plates. The theory assumes that the sought parameters can be extracted from curvature and deflection fields measured on the whole surface of the plate at two particular instants of the vibrating motion. The experimental application consists of an original excitation fixture, a particular adaptation of an optical full-field measurement technique, data preprocessing giving the curvature and deflection fields, and finally the identification process using the Virtual Fields Method (VFM). The principle of the deflectometry technique used for the measurements is presented. First results of identification on an acrylic plate are presented and compared to reference values. A new experimental arrangement, currently in progress, is also described; it uses a high-speed digital camera to oversample the full-field measurements.
Synthesis and luminescence properties of KSrPO4:Eu2+ phosphor for radiation dosimetry
NASA Astrophysics Data System (ADS)
Palan, C. B.; Bajaj, N. S.; Omanwar, S. K.
2016-05-01
The KSrPO4:Eu phosphor was synthesized via the solid-state method. Structural and morphological characterizations were performed through XRD (X-ray diffraction) and SEM (scanning electron microscopy). Additionally, the photoluminescence (PL), thermoluminescence (TL) and optically stimulated luminescence (OSL) properties of the KSrPO4:Eu powder were studied. The PL spectra show blue emission under near-UV excitation. The KSrPO4:Eu phosphor not only shows OSL sensitivity (0.47 times that of the Al2O3:C (BARC) phosphor) but also gives a faster decay of the OSL signal. The TL glow curve consists of two shoulder peaks; the kinetic parameters, such as activation energy and frequency factor, were determined using the peak shape method, and the photoionization cross-sections of the prepared phosphor were calculated. Radiation dosimetry properties such as minimum detectable dose (MDD), dose response and reusability are reported.
Identification of a parametric, discrete-time model of ankle stiffness.
Guarin, Diego L; Jalaleddini, Kian; Kearney, Robert E
2013-01-01
Dynamic ankle joint stiffness defines the relationship between the position of the ankle and the torque acting about it and can be separated into intrinsic and reflex components. Under stationary conditions, intrinsic stiffness can be described by a linear second-order system, while reflex stiffness is described by a Hammerstein system whose input is delayed velocity. Given that reflex and intrinsic torque cannot be measured separately, there has been much interest in the development of system identification techniques to separate them analytically. To date, most methods have been nonparametric, and as a result there is no direct link between the estimated parameters and those of the stiffness model. This paper presents a novel algorithm for identification of a discrete-time model of ankle stiffness. Through simulations we show that the algorithm gives unbiased results even in the presence of large, non-white noise. Application of the method to experimental data demonstrates that it produces results consistent with previous findings.
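The parametric idea, estimates that map directly onto the stiffness model, can be sketched for the intrinsic component alone (no reflex Hammerstein path; the parameter values and signals below are illustrative, not the paper's algorithm or physiological data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Intrinsic stiffness as a linear second-order model,
#   torque = I*acc + B*vel + K*pos,
# with illustrative (not physiological) parameter values.
I, B, K = 0.01, 0.5, 100.0
dt = 0.002                                   # 500 Hz sampling
t = np.arange(5000) * dt
pos = 0.01 * np.sin(2 * np.pi * 1.0 * t) + 0.005 * np.sin(2 * np.pi * 3.7 * t)
vel = np.gradient(pos, dt)
acc = np.gradient(vel, dt)
torque = I * acc + B * vel + K * pos + rng.normal(0.0, 1e-3, t.size)

# Parametric identification: regress measured torque on [acc, vel, pos],
# so each estimate corresponds to a model parameter -- the direct link
# that nonparametric methods lack.
regressors = np.column_stack([acc, vel, pos])
est, *_ = np.linalg.lstsq(regressors, torque, rcond=None)
```

With two excitation frequencies the position, velocity and acceleration regressors are linearly independent, so the least-squares fit recovers I, B and K despite the measurement noise.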
The analytical solution for drug delivery system with nonhomogeneous moving boundary condition
NASA Astrophysics Data System (ADS)
Saudi, Muhamad Hakimi; Mahali, Shalela Mohd; Harun, Fatimah Noor
2017-08-01
This paper discusses the development and analytical solution of a mathematical model of drug release from a swelling delivery device. The mathematical model is represented by a one-dimensional advection-diffusion equation with a nonhomogeneous moving boundary condition. The solution procedure consists of three major steps. Firstly, the steady-state solution method is applied to transform the nonhomogeneous moving boundary condition into a homogeneous boundary condition. Secondly, the Landau transformation technique is applied, which removes the advection term from the governing equation and transforms the moving boundary condition into a fixed boundary condition. Thirdly, the method of separation of variables is used to find the analytical solution of the resulting initial boundary value problem. The results show that the swelling rate of the delivery device and the drug release rate are influenced by the value of the growth factor r.
Matching multiple rigid domain decompositions of proteins
Flynn, Emily; Streinu, Ileana
2017-01-01
We describe efficient methods for consistently coloring and visualizing collections of rigid cluster decompositions obtained from variations of a protein structure, and lay the foundation for more complex setups that may involve different computational and experimental methods. The focus here is on three biological applications: the conceptually simpler problems of visualizing results of dilution and mutation analyses, and the more complex task of matching decompositions of multiple NMR models of the same protein. Implemented in the KINARI web server application, the improved visualization techniques give useful information about protein folding cores, help examine the effect of mutations on protein flexibility and function, and provide insights into the structural motions of PDB proteins solved with solution NMR. These tools have been developed with the goal of improving and validating rigidity analysis as a credible coarse-grained model capturing essential information about a protein's slow motions near the native state. PMID:28141528
Characterization of YBa2Cu3O7, including critical current density Jc, by trapped magnetic field
NASA Technical Reports Server (NTRS)
Chen, In-Gann; Liu, Jianxiong; Weinstein, Roy; Lau, Kwong
1992-01-01
Spatial distributions of persistent magnetic field trapped by sintered and melt-textured ceramic-type high-temperature superconductor (HTS) samples have been studied. The trapped field can be reproduced by a model of the current consisting of two components: (1) a surface current Js and (2) a uniform volume current Jv. This Js + Jv model gives a satisfactory account of the spatial distribution of the magnetic field trapped by different types of HTS samples. The magnetic moment can be calculated, based on the Js + Jv model, and the result agrees well with that measured by standard vibrating sample magnetometer (VSM). As a consequence, Jc predicted by VSM methods agrees with Jc predicted from the Js + Jv model. The field mapping method described is also useful to reveal the granular structure of large HTS samples and regions of weak links.
Utility of correlation techniques in gravity and magnetic interpretation
NASA Technical Reports Server (NTRS)
Chandler, V. W.; Koski, J. S.; Braice, L. W.; Hinze, W. J.
1977-01-01
Internal correspondence uses Poisson's theorem in a moving-window linear regression analysis between the anomalous first vertical derivative of gravity and the total magnetic field reduced to the pole. The regression parameters provide critical information on source characteristics: the correlation coefficient indicates the strength of the relation between magnetics and gravity, the slope gives Δj/Δσ estimates for the anomalous source, and the intercept furnishes information on anomaly interference. Cluster analysis classifies subsets of the data into groups by similarity, based on correlation of selected anomaly characteristics. Model studies are used to illustrate implementation and interpretation procedures for these methods, particularly internal correspondence. Analysis of the results of applying these methods to data from the midcontinent and a transcontinental profile shows they can be useful in identifying crustal provinces, providing information on horizontal and vertical variations of physical properties over province-size zones, validating long-wavelength anomalies, and isolating geomagnetic field removal problems.
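The moving-window regression at the heart of internal correspondence can be sketched along a single profile; the array names and window size are assumptions:

```python
import numpy as np

def internal_correspondence(grav_vd, mag_rtp, window=25):
    """Moving-window linear regression of the magnetic field reduced to the pole
    against the first vertical derivative of gravity (illustrative sketch).
    Returns per-window correlation coefficient, slope, and intercept."""
    n = len(grav_vd)
    half = window // 2
    r = np.full(n, np.nan)
    slope = np.full(n, np.nan)
    intercept = np.full(n, np.nan)
    for i in range(half, n - half):
        x = grav_vd[i - half:i + half + 1]
        y = mag_rtp[i - half:i + half + 1]
        b, a = np.polyfit(x, y, 1)          # y ~ b*x + a within the window
        r[i] = np.corrcoef(x, y)[0, 1]      # strength of the gravity-magnetics relation
        slope[i], intercept[i] = b, a
    return r, slope, intercept
```

A high |r| with a stable slope suggests a common source, with the slope interpreted through Poisson's relation.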
Algorithms and methodology used in constructing high-resolution terrain databases
NASA Astrophysics Data System (ADS)
Williams, Bryan L.; Wilkosz, Aaron
1998-07-01
This paper presents a top-level description of methods used to generate high-resolution 3D IR digital terrain databases using soft photogrammetry. The 3D IR database is derived from aerial photography and is made up of digital ground plane elevation map, vegetation height elevation map, material classification map, object data (tanks, buildings, etc.), and temperature radiance map. Steps required to generate some of these elements are outlined. The use of metric photogrammetry is discussed in the context of elevation map development; and methods employed to generate the material classification maps are given. The developed databases are used by the US Army Aviation and Missile Command to evaluate the performance of various missile systems. A discussion is also presented on database certification which consists of validation, verification, and accreditation procedures followed to certify that the developed databases give a true representation of the area of interest, and are fully compatible with the targeted digital simulators.
Efficient high-quality volume rendering of SPH data.
Fraedrich, Roland; Auer, Stefan; Westermann, Rüdiger
2010-01-01
High quality volume rendering of SPH data requires a complex order-dependent resampling of particle quantities along the view rays. In this paper we present an efficient approach to perform this task using a novel view-space discretization of the simulation domain. Our method draws upon recent work on GPU-based particle voxelization for the efficient resampling of particles into uniform grids. We propose a new technique that leverages a perspective grid to adaptively discretize the view-volume, giving rise to a continuous level-of-detail sampling structure and reducing memory requirements compared to a uniform grid. In combination with a level-of-detail representation of the particle set, the perspective grid allows effectively reducing the amount of primitives to be processed at run-time. We demonstrate the quality and performance of our method for the rendering of fluid and gas dynamics SPH simulations consisting of many millions of particles.
Research productivity in select psychology journals, 1986-2008.
Mahoney, Kevin T; Buboltz, Walter C; Calvert, Barbara; Hoffmann, Rebecca
2010-01-01
Examination of research productivity has a long history in psychology, and journals across the field have periodically published research-productivity studies. An analysis of institutional research productivity was conducted for 17 journals published by the American Psychological Association for the years 1986-2008. This analysis implemented two methodologies: one a replication and extension of G. S. Howard, D. A. Cole, and S. E. Maxwell's (1987) method, the other a new method designed to give credit to psychology departments rather than only overall institutions. A system of proportional credit ensured that all articles with multiple institutions received credit. Results show that for the 23-year period, the University of Illinois at Urbana-Champaign was ranked 1st, followed by the University of California, Los Angeles, and the University of Michigan, Ann Arbor. Overall, results showed both consistency and change across all journals examined. The authors explore the implications of these findings in the context of the current academic environment.
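The proportional-credit scheme can be sketched as follows; the institution names and article lists are hypothetical, and the authors' exact weighting may differ:

```python
from collections import defaultdict

def proportional_credit(articles):
    """Each article carries one unit of credit, split equally among its
    contributing institutions (a simple proportional scheme)."""
    totals = defaultdict(float)
    for institutions in articles:
        share = 1.0 / len(institutions)
        for inst in institutions:
            totals[inst] += share
    return dict(totals)

ranking = proportional_credit([
    ["Illinois", "UCLA"],               # two institutions: 0.5 credit each
    ["Illinois"],                       # sole institution: full credit
    ["Michigan", "UCLA", "Illinois"],   # three institutions: 1/3 each
])
```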
Simulation methods supporting homologation of Electronic Stability Control in vehicle variants
NASA Astrophysics Data System (ADS)
Lutz, Albert; Schick, Bernhard; Holzmann, Henning; Kochem, Michael; Meyer-Tuve, Harald; Lange, Olav; Mao, Yiqin; Tosolin, Guido
2017-10-01
Vehicle simulation has a long tradition in the automotive industry as a powerful supplement to physical vehicle testing. For Electronic Stability Control (ESC) systems, simulation processes are well established to support ESC development and application by suppliers and Original Equipment Manufacturers (OEMs). The latest regulation of the United Nations Economic Commission for Europe, UN/ECE-R 13, also allows simulation-based homologation, extending the use of simulation from ESC development to homologation. This paper gives an overview of the simulation methods, processes, and tools used for the homologation of ESC in vehicle variants. The paper first describes the generic homologation process according to the European regulations (UN/ECE-R 13H, UN/ECE-R 13/11) and the U.S. Federal Motor Vehicle Safety Standard (FMVSS 126). Subsequently, the ESC system is explained, as well as the generic application and release process on the supplier and OEM side. Turning to the simulation methods, the ESC development and application process must be adapted for virtual vehicles. The simulation environment, consisting of the vehicle model, the ESC model, and the simulation platform, is explained in detail with some exemplary use cases. In the final section, examples of simulation-based ESC homologation in vehicle variants are shown for passenger cars, light trucks, heavy trucks, and trailers. This paper aims to give a state-of-the-art account of the simulation methods supporting the homologation of ESC systems in vehicle variants. The described approach and the lessons learned can also serve as a reference for an extended use of simulation-supported releases of the ESC system, up to the development and release of driver assistance systems.
Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect
NASA Astrophysics Data System (ADS)
Artyukhin, S. G.; Mestetskiy, L. M.
2015-05-01
This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores each gesture of the alphabet by a feature description generated frame by frame. The recognition algorithm takes a video sequence (a sequence of frames) as input and either matches each frame to a gesture from the database or decides that no suitable gesture exists. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of consecutive frames labeled with the same gesture is grouped into a single static gesture. We propose a method for combined segmentation of a frame using the depth map and the RGB image. The primary segmentation is based on the depth map; it gives the hand's position and a rough border. The border is then refined using the color image, and the shape of the hand is analyzed. A continuous skeleton method is used to generate features. We propose an analysis of the skeleton's terminal branches, which makes it possible to determine the positions of the fingers and wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the American Sign Language alphabet. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.
Crystallization mosaic effect generation by superpixels
NASA Astrophysics Data System (ADS)
Xie, Yuqi; Bo, Pengbo; Yuan, Ye; Wang, Kuanquan
2015-03-01
Art effect generation from digital images using computational tools has been a hot research topic in recent years. We propose a new method for generating crystallization mosaic effects from color images. Two key problems in generating a pleasant mosaic effect are studied: grouping pixels into mosaic tiles and arranging mosaic tiles to adapt to image features. To give a visually pleasant mosaic effect, we propose to create mosaic tiles by pixel clustering in the feature space of color information, taking the compactness of tiles into consideration as well. Moreover, we propose a method for processing feature boundaries in images that guides the arrangement of mosaic tiles near image features. This method gives nearly uniformly shaped mosaic tiles that adapt to feature lines in an esthetic way. The new approach considers both the color distance and the Euclidean distance of pixels, and is thus capable of producing mosaic tiles in a more pleasing manner. Experiments are included to demonstrate the computational efficiency of the present method and its capability of generating visually pleasant mosaic tiles. Comparisons with existing approaches are also included to show the superiority of the new method.
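The pixel-grouping step, clustering in a joint color-position feature space with a compactness trade-off, can be sketched as follows. The k-means partition and the specific feature weighting are illustrative assumptions (in the spirit of SLIC superpixels), not the authors' exact algorithm:

```python
import numpy as np

def mosaic_tiles(image, n_tiles=64, compactness=10.0, iters=5, seed=0):
    """Group pixels into compact, color-coherent tiles by k-means on a joint
    (color, position) feature space; returns an (h, w) label map."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Stack color and scaled spatial coordinates into one feature vector;
    # `compactness` trades color fidelity against tile compactness.
    feats = np.column_stack([
        image.reshape(-1, 3).astype(float),
        compactness * xs.ravel() / w,
        compactness * ys.ravel() / h,
    ])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_tiles, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for k in range(n_tiles):
            members = feats[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return labels.reshape(h, w)
```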
Combined Use of Integral Experiments and Covariance Data
NASA Astrophysics Data System (ADS)
Palmiotti, G.; Salvatores, M.; Aliberti, G.; Herman, M.; Hoblit, S. D.; McKnight, R. D.; Obložinský, P.; Talou, P.; Hale, G. M.; Hiruta, H.; Kawano, T.; Mattoon, C. M.; Nobre, G. P. A.; Palumbo, A.; Pigni, M.; Rising, M. E.; Yang, W.-S.; Kahler, A. C.
2014-04-01
In the frame of a US-DOE sponsored project, ANL, BNL, INL and LANL have performed a joint multidisciplinary research activity to explore the combined use of integral experiments and covariance data, with the twin objectives of giving quantitative indications of possible improvements to the ENDF evaluated data files and of reducing uncertainties in crucial reactor design parameters. Methods developed over the last four decades for these purposes have been improved by new developments that also benefited from continuous exchanges with international groups working in similar areas. The major new developments that allowed significant progress are to be found in several specific domains: a) new science-based covariance data; b) integral experiment covariance data assessment and improved experiment analysis, e.g., of sample irradiation experiments; c) sensitivity analysis, where several improvements were necessary despite the generally good understanding of these techniques, e.g., to account for fission spectrum sensitivity; d) a critical approach to the analysis of statistical adjustment performance, both a priori and a posteriori; e) generalization of the assimilation method, now applied for the first time not only to multigroup cross section data but also to nuclear model parameters (the "consistent" method). This article describes the major results obtained in each of these areas; a large-scale nuclear data adjustment, based on the use of approximately one hundred high-accuracy integral experiments, is reported along with a significant example of the application of the new "consistent" method of data assimilation.
Disentangling Global Warming, Multidecadal Variability, and El Niño in Pacific Temperatures
NASA Astrophysics Data System (ADS)
Wills, Robert C.; Schneider, Tapio; Wallace, John M.; Battisti, David S.; Hartmann, Dennis L.
2018-03-01
A key challenge in climate science is to separate observed temperature changes into components due to internal variability and responses to external forcing. Extended integrations of forced and unforced climate models are often used for this purpose. Here we demonstrate a novel method to separate modes of internal variability from global warming based on differences in time scale and spatial pattern, without relying on climate models. We identify uncorrelated components of Pacific sea surface temperature variability due to global warming, the Pacific Decadal Oscillation (PDO), and the El Niño-Southern Oscillation (ENSO). Our results give statistical representations of PDO and ENSO that are consistent with their being separate processes, operating on different time scales, but are otherwise consistent with canonical definitions. We isolate the multidecadal variability of the PDO and find that it is confined to midlatitudes; tropical sea surface temperatures and their teleconnections mix in higher-frequency variability. This implies that midlatitude PDO anomalies are more persistent than previously thought.
NASA Technical Reports Server (NTRS)
Schmidlin, F. J.; Thompson, A. M.; Holdren, D. H.; Northam, E. T.; Witte, J. C.; Oltmans, S. J.; Hoegger, B.; Levrat, G. M.; Kirchhoff, V.
2000-01-01
Vertical ozone profiles between the Equator and 10°S latitude available from the Southern Hemisphere Additional Ozonesondes (SHADOZ) program provide consistent ozone data sets from up to 10 sounding locations. SHADOZ, designed to provide independent ozone profiles in the tropics for evaluation of satellite ozone data and models, has made available over 600 soundings over the period 1998-1999. These observations provide an ideal database for the detailed description of ozone and afford differential comparison between sites. TOMS total ozone, when compared with the correlative integrated total ozone overburden from the sondes, is found to be negatively biased when the classical constant-mixing-ratio procedure is used to determine residual ozone. On the other hand, the climatological method proposed by McPeters and Labow appears to give consistent results but is positively biased. The more than two-year series of measurements was also subjected to harmonic analysis to examine data cycles. These results will be discussed as well.
The community health clinics as a learning context for student nurses.
Makupu, M B; Botes, A
2000-09-01
The purpose of the research study was to describe guidelines to improve community health clinics as a learning context conducive to learning. The study began by obtaining the perceptions of student nurses from a nursing college in Gauteng, community sisters from ten community health clinics in the Southern Metropolitan Local Council, and college tutors from a college in Gauteng. The research design and method consisted of a qualitative, exploratory, descriptive, and contextual approach, and the design was divided into two phases: phase one consisted of a field/empirical study and phase two of conceptualization. In all the samples, follow-up focus group interviews were conducted to confirm the findings. To ensure trustworthiness, Lincoln and Guba's model (1985) was implemented, and data analysis followed Tesch's model (1990, in Creswell 1994:155) based on a qualitative approach. The conceptual framework discussed, indicating a body of knowledge, was based on the study and the empirical findings from phase one, to give clear meaning and understanding to the research study.
Who gives? Multilevel effects of gender and ethnicity on workplace charitable giving.
Leslie, Lisa M; Snyder, Mark; Glomb, Theresa M
2013-01-01
Research on diversity in organizations has largely focused on the implications of gender and ethnic differences for performance, to the exclusion of other outcomes. We propose that gender and ethnic differences also have implications for workplace charitable giving, an important aspect of corporate social responsibility. Drawing from social role theory, we hypothesize and find that gender has consistent effects across levels of analysis; women donate more money to workplace charity than do men, and the percentage of women in a work unit is positively related to workplace charity, at least among men. Alternatively and consistent with social exchange theory, we hypothesize and find that ethnicity has opposing effects across levels of analysis; ethnic minorities donate less money to workplace charity than do Whites, but the percentage of minorities in a work unit is positively related to workplace charity, particularly among minorities. The findings provide a novel perspective on the consequences of gender and ethnic diversity in organizations and highlight synergies between organizational efforts to increase diversity and to build a reputation for corporate social responsibility. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Hughes, Jane; Wilson, Wayne J; MacBean, Naomi; Hill, Anne E
2016-12-01
To develop a tool for assessing audiology students taking a case history and giving feedback with simulated patients (SPs). Single-observation, single-group design. Twenty-four first-year audiology students, five simulated patients, two clinical educators, and three evaluators. The Audiology Simulated Patient Interview Rating Scale (ASPIRS) was developed, consisting of six items assessing specific clinical skills, non-verbal communication, verbal communication, interpersonal skills, interviewing skills, and professional practice skills. These items are applied once for taking a case history and again for giving feedback. The ASPIRS showed very high internal consistency (α = 0.91-0.97; mean inter-item r = 0.64-0.85) and fair-to-moderate agreement between evaluators (29.2-54.2% exact and 79.2-100% near agreement; weighted κ up to 0.60). It also showed fair-to-moderate absolute agreement amongst evaluators for single-evaluator scores (intraclass correlation coefficient [ICC] r = 0.35-0.59) and substantial consistency of agreement amongst evaluators for three-evaluator averaged scores (ICC r = 0.62-0.81). Factor analysis showed that the ASPIRS' 12 items fell into two components, one containing all feedback items and one containing all case history items. The ASPIRS shows promise as the first published tool for assessing audiology students taking a case history and giving feedback with an SP.
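The internal-consistency figures reported (α = 0.91-0.97) are Cronbach's alpha, which can be computed from an item-score matrix as follows; the score data here are illustrative, not the study's:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```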
Consistent compactification of double field theory on non-geometric flux backgrounds
NASA Astrophysics Data System (ADS)
Hassler, Falk; Lüst, Dieter
2014-05-01
In this paper, we construct non-trivial solutions to the 2D-dimensional field equations of Double Field Theory (DFT) by using a consistent Scherk-Schwarz ansatz. The ansatz identifies 2(D - d) internal directions with a twist U^M_N which is directly connected to the covariant fluxes F_ABC. It exhibits 2(D - d) linearly independent generalized Killing vectors K_I^J and gives rise to a gauged supergravity in d dimensions. We analyze the covariant fluxes and the corresponding gauged supergravity with a Minkowski vacuum. We calculate fluctuations around such vacua and show how they give rise to massive scalar fields and vector fields with a non-abelian gauge algebra. Because DFT is a background-independent theory, these fields should correspond directly to the string excitations in the corresponding background. For (D - d) = 3 we perform a complete scan of all allowed covariant fluxes and find two different kinds of backgrounds: the single and the double elliptic case. The latter is not T-dual to a geometric background and cannot be transformed into a geometric setting by a field redefinition either. While this background fulfills the strong constraint, it is still consistent with the Killing vectors depending on both the coordinates and the winding coordinates, thereby giving a non-geometric patching. This background can therefore not be described in Supergravity or Generalized Geometry.
The Intergenerational Transmission of Generosity
Wilhelm, Mark O.; Brown, Eleanor; Rooney, Patrick M.; Steinberg, Richard
2008-01-01
This paper estimates the correlation between the generosity of parents and the generosity of their adult children using regression models of adult children’s charitable giving. New charitable giving data are collected in the Panel Study of Income Dynamics and used to estimate the regression models. The regression models are estimated using a wide variety of techniques and specification tests, and the strength of the intergenerational giving correlations are compared with intergenerational correlations in income, wealth, and consumption expenditure from the same sample using the same set of controls. We find the religious giving of parents and children to be strongly correlated, as strongly correlated as are their income and wealth. The correlation in the secular giving (e.g., giving to the United Way, educational institutions, for poverty relief) of parents and children is smaller, similar in magnitude to the intergenerational correlation in consumption. Parents’ religious giving is positively associated with children’s secular giving, but in a more limited sense. Overall, the results are consistent with generosity emerging at least in part from the influence of parental charitable behavior. In contrast to intergenerational models in which parental generosity towards their children can undo government transfer policy (Ricardian equivalence), these results suggest that parental generosity towards charitable organizations might reinforce government policies, such as tax incentives aimed at encouraging voluntary transfers. PMID:19802345
NASA Technical Reports Server (NTRS)
Haviland, J. K.; Yoo, Y. S.
1976-01-01
Expressions for the calculation of subsonic and supersonic, steady and unsteady aerodynamic forces are derived using the concept of aerodynamic elements applied to the downwash velocity potential method. Aerodynamic elements can be of arbitrary out-of-plane polygonal shape, although numerical calculations are restricted to rectangular elements, and to the steady-state case in the supersonic examples. It is suggested that the use of conforming elements, in place of rectangular ones, would give better results. Agreement with results for subsonic oscillating T-tails is fair, but results do not converge as the number of collocation points is increased, which appears to be due to the form of expression used in the calculations. The methods derived are expected to facilitate automated flutter analysis on the computer. In particular, the aerodynamic element concept is consistent with finite element methods already used for structural analysis. The method is universal for the complete Mach number range, and, finally, the calculations can be arranged so that they do not have to be repeated completely for every reduced frequency.
Verification of experimental dynamic strength methods with atomistic ramp-release simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Alexander P.; Brown, Justin L.; Lim, Hojun
Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. Furthermore, these simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages: Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.
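The core idea, partitioning the low-resolution patch space and learning one linear LR-to-HR map per cell, can be sketched as follows. For brevity a k-means partition stands in for the random-forest leaves, and the patch data are synthetic:

```python
import numpy as np

def train_partition_regressors(lr_patches, hr_patches, n_cells=8, iters=10, seed=0):
    """Partition LR patch space (k-means as a stand-in for forest leaves)
    and fit one least-squares linear map LR -> HR per cell."""
    rng = np.random.default_rng(seed)
    centers = lr_patches[rng.choice(len(lr_patches), n_cells, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((lr_patches[:, None] - centers[None]) ** 2).sum(-1), 1)
        for k in range(n_cells):
            if np.any(labels == k):
                centers[k] = lr_patches[labels == k].mean(0)
    X_all = np.hstack([lr_patches, np.ones((len(lr_patches), 1))])  # bias column
    maps = []
    for k in range(n_cells):
        mask = labels == k
        if not mask.any():                      # empty cell: fall back to zero map
            maps.append(np.zeros((X_all.shape[1], hr_patches.shape[1])))
            continue
        W, *_ = np.linalg.lstsq(X_all[mask], hr_patches[mask], rcond=None)
        maps.append(W)
    return centers, maps

def predict(lr_patch, centers, maps):
    """Route a patch to its nearest cell and apply that cell's linear map."""
    k = np.argmin(((centers - lr_patch) ** 2).sum(-1))
    return np.append(lr_patch, 1.0) @ maps[k]
```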
Glossary of reference terms for alternative test methods and their validation.
Ferrario, Daniele; Brustio, Roberta; Hartung, Thomas
2014-01-01
This glossary was developed to provide technical references to support work in the field of alternatives to animal testing. It was compiled from various existing reference documents from different sources and is meant to be a point of reference on alternatives to animal testing. Given the ever-increasing number of alternative test methods and approaches developed over the last decades, a combination, revision, and harmonization of earlier published collections of terms used in the validation of such methods is required. The need to update previous glossary efforts came from the acknowledgement that new words have emerged with the development of new approaches, while others have become obsolete, and the meaning of some terms has partially changed over time. With this glossary we intend to provide guidance on issues related to the validation of new or updated testing methods, consistent with current approaches. Moreover, because of new developments and technologies, a glossary needs to be a living, constantly updated document. An Internet-based version of this compilation may be found at http://altweb.jhsph.edu/, allowing the addition of new material.
Verification of experimental dynamic strength methods with atomistic ramp-release simulations
NASA Astrophysics Data System (ADS)
Moore, Alexander P.; Brown, Justin L.; Lim, Hojun; Lane, J. Matthew D.
2018-05-01
Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. These simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.
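One standard elastic identity used when turning measured wavespeeds into moduli relates the longitudinal and bulk wavespeeds to the shear modulus; the sketch below uses example values, not the paper's data:

```python
def shear_modulus(rho0, c_long, c_bulk):
    """From c_L^2 = (K + 4G/3)/rho and c_B^2 = K/rho:
    G = (3/4) * rho * (c_L^2 - c_B^2).  SI units assumed."""
    return 0.75 * rho0 * (c_long ** 2 - c_bulk ** 2)
```

Errors in the measured unload wavespeed therefore propagate quadratically into the inferred shear modulus, which is why overpredicting the maximum unload wavespeed inflates the modulus estimate.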
Crossing statistic: reconstructing the expansion history of the universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafieloo, Arman, E-mail: arman@ewha.ac.kr
2012-08-01
We show that by combining the Crossing Statistic [1,2] and the Smoothing method [3-5] one can reconstruct the expansion history of the universe with very high precision without assuming any prior on cosmological quantities such as the equation of state of dark energy. The presented method performs very well in reconstructing the expansion history of the universe independent of the underlying model, and it works well even for non-trivial dark energy models with fast or slow changes in the equation of state of dark energy. The accuracy of the reconstructed quantities, along with the method's independence from any prior or assumption, gives the proposed method advantages over other non-parametric methods proposed before in the literature. Applying the method to the Union 2.1 supernovae combined with WiggleZ BAO data, we present the reconstructed results and test the consistency of the two data sets in a model-independent manner. Results show that the latest available supernovae and BAO data are in good agreement with each other, and the spatially flat ΛCDM model is in concordance with the current data.
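The smoothing ingredient can be illustrated with a generic Gaussian kernel smoother over redshift; this is a simplified stand-in for the iterative smoothing method cited, with an assumed kernel width:

```python
import numpy as np

def kernel_smooth(z_obs, y_obs, z_grid, width=0.3):
    """Gaussian kernel smoothing of noisy data (e.g. distance moduli) over
    redshift: a weighted average with weights exp(-(dz)^2 / 2*width^2)."""
    w = np.exp(-0.5 * ((z_grid[:, None] - z_obs[None, :]) / width) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)
```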
Verification of experimental dynamic strength methods with atomistic ramp-release simulations
Moore, Alexander P.; Brown, Justin L.; Lim, Hojun; ...
2018-05-04
Material strength and moduli can be determined from dynamic high-pressure ramp-release experiments using an indirect method of Lagrangian wave profile analysis of surface velocities. This method, termed self-consistent Lagrangian analysis (SCLA), has been difficult to calibrate and corroborate with other experimental methods. Using nonequilibrium molecular dynamics, we validate the SCLA technique by demonstrating that it accurately predicts the same bulk modulus, shear modulus, and strength as those calculated from the full stress tensor data, especially where strain rate induced relaxation effects and wave attenuation are small. We show here that introducing a hold in the loading profile at peak pressure gives improved accuracy in the shear moduli and relaxation-adjusted strength by reducing the effect of wave attenuation. When rate-dependent effects coupled with wave attenuation are large, we find that Lagrangian analysis overpredicts the maximum unload wavespeed, leading to increased error in the measured dynamic shear modulus. Furthermore, these simulations provide insight into the definition of dynamic strength, as well as a plausible explanation for experimental disagreement in reported dynamic strength values.
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
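The traditional SCS-CN equation that the distributed CN-VSA method redistributes across the landscape can be sketched as follows (a minimal implementation of the standard formula only, not the authors' distributed GIS model; the storm and curve-number values are hypothetical):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff Q (mm) from storm rainfall P (mm) via the SCS-CN equation.

    S is the potential maximum retention and Ia = ia_ratio * S is the
    initial abstraction (0.2 is the traditional default ratio).
    """
    s = 25400.0 / cn - 254.0              # retention S in mm, for CN in (0, 100]
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0                        # rainfall fully abstracted: no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative 80 mm storm on a watershed with CN = 75
q = scs_cn_runoff(80.0, 75.0)             # roughly 27 mm of direct runoff
```

A higher curve number (less pervious watershed) gives more runoff for the same storm, which is the behavior the distributed method exploits cell by cell.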
Marshall, N W
2001-06-01
This paper applies a published version of signal detection theory to x-ray image intensifier fluoroscopy data and compares the results with more conventional subjective image quality measures. An eight-bit digital framestore was used to acquire temporally contiguous frames of fluoroscopy data from which the modulation transfer function (MTF(u)) and noise power spectrum were established. These parameters were then combined to give detective quantum efficiency (DQE(u)) and used in conjunction with signal detection theory to calculate contrast-detail performance. DQE(u) was found to lie between 0.1 and 0.5 for a range of fluoroscopy systems. Two separate image quality experiments were then performed in order to assess the correspondence between the objective and subjective methods. First, image quality for a given fluoroscopy system was studied as a function of doserate using objective parameters and a standard subjective contrast-detail method. Following this, the two approaches were used to assess three different fluoroscopy units. Agreement between objective and subjective methods was good; doserate changes were modelled correctly while both methods ranked the three systems consistently.
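One standard way the measured MTF and noise power spectrum are combined into DQE can be sketched as follows (an illustrative form only; the paper's exact formulation, normalization, and units may differ, and all numerical values below are invented):

```python
def dqe(mtf_u, nps_u, large_area_signal, photon_fluence):
    """DQE(u) = d^2 * MTF(u)^2 / (q * NPS(u)), where d is the large-area
    output signal and q is the incident photon fluence (illustrative form)."""
    return (large_area_signal ** 2) * (mtf_u ** 2) / (photon_fluence * nps_u)

# Hypothetical mid-frequency operating point of a fluoroscopy system
d = dqe(mtf_u=0.6, nps_u=0.2, large_area_signal=100.0, photon_fluence=1.0e5)
```

With these placeholder numbers the result lands in the 0.1-0.5 range the paper reports for real fluoroscopy systems.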
Statistical Evaluation of Biometric Evidence in Forensic Automatic Speaker Recognition
NASA Astrophysics Data System (ADS)
Drygajlo, Andrzej
Forensic speaker recognition is the process of determining if a specific individual (suspected speaker) is the source of a questioned voice recording (trace). This paper aims at presenting forensic automatic speaker recognition (FASR) methods that provide a coherent way of quantifying and presenting recorded voice as biometric evidence. In such methods, the biometric evidence consists of the quantified degree of similarity between speaker-dependent features extracted from the trace and speaker-dependent features extracted from recorded speech of a suspect. The interpretation of recorded voice as evidence in the forensic context presents particular challenges, including within-speaker (within-source) variability and between-speakers (between-sources) variability. Consequently, FASR methods must provide a statistical evaluation which gives the court an indication of the strength of the evidence given the estimated within-source and between-sources variabilities. This paper reports on the first ENFSI evaluation campaign through a fake case, organized by the Netherlands Forensic Institute (NFI), as an example, where an automatic method using the Gaussian mixture models (GMMs) and the Bayesian interpretation (BI) framework were implemented for the forensic speaker recognition task.
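The strength-of-evidence statement in such methods reduces to a likelihood ratio. A minimal sketch, with the within-source and between-sources score distributions modeled as single Gaussians rather than the GMM-based densities used in FASR (all parameter values are invented):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_ratio(score, within_mu, within_sigma, between_mu, between_sigma):
    """LR = p(E | same source) / p(E | different sources) for a similarity score."""
    return (gaussian_pdf(score, within_mu, within_sigma)
            / gaussian_pdf(score, between_mu, between_sigma))

# A similarity score of 2.0 under within-source N(2, 1) and between-sources N(0, 1)
lr = likelihood_ratio(2.0, 2.0, 1.0, 0.0, 1.0)   # LR > 1 supports the same-source hypothesis
```

The court is then given LR (or its logarithm) rather than a hard accept/reject decision.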
Powell, Philip A; Wills, Olivia; Reynolds, Gemma; Puustinen-Hopper, Kaisa; Roberts, Jennifer
2018-01-01
In this paper we explored the effects of exposure to images of the suffering and vulnerability of others on altruistic, trust-based, and reciprocated incentivized economic decisions, accounting for differences in participants’ dispositional empathy and reported in-group trust for their recipient(s). This was done using a pictorial priming task, framed as a memory test, and a triadic economic game design. Using the largest experimental sample to date to explore this issue, our integrated analysis of two online experiments (total N = 519), found statistically consistent evidence that exposure to images of suffering and vulnerability (vs. neutral images) increased altruistic in-group giving as measured by the “triple dictator game”, and that the manipulation was significantly more effective in those who reported lower trust for their recipients. The experimental manipulation also significantly increased altruistic giving in the standard “dictator game” and trust-based giving in the “investment game”, but only in those who were lower in in-group trust and also high in affective or cognitive empathy. Complementary qualitative evidence revealed the strongest motivations associated with increased giving in the experimental condition were greater assumed reciprocation and a lower aversion to risk. However, no consistent effects of the experimental manipulation on participants’ reciprocated decisions were observed. These findings suggest that, as well as altruistic decision-making in the “triple dictator game”, collaboratively witnessing the suffering of others may heighten trust-based in-group giving in the “investment game” for some people, but the effects are heterogeneous and sensitive to context. PMID:29561883
NASA Astrophysics Data System (ADS)
Kim, Minkyu; Chang, Jaeeon; Sandler, Stanley I.
2014-02-01
Accurate values of the free energies of C60 and C70 fullerene crystals are obtained using the expanded ensemble method and the acceptance ratio method combined with the Einstein-molecule approach. Both simulation methods, when tested on Lennard-Jones crystals, give accurate free energies that differ from each other only in the fifth significant digit. The solid-solid phase transition temperature of the C60 crystal is determined from free energy profiles and found to be 260 K, which is in good agreement with experiment. For the C70 crystal, using the potential model of Sprik et al. [Phys. Rev. Lett. 69, 1660 (1992)], the low-temperature solid-solid phase transition temperature determined from the free energy profiles is found to be 160 K. Although this is somewhat lower than the experimental value, it is in agreement with conventional molecular simulations, which validates the methodological consistency of the present simulation method. From the calculations of the free energies of C60 and C70 crystals, we note the significance of the symmetry number for the crystal phase, needed to properly account for the indistinguishability of orientationally disordered states.
Perceptions of Teaching Methods for Preclinical Oral Surgery: A Comparison with Learning Styles
Omar, Esam
2017-01-01
Purpose: Dental extraction is a routine part of clinical dental practice. For this reason, understanding how students' extraction knowledge and skills develop is important. Problem Statement and Objectives: To date, there is no accredited statement about the most effective method for teaching exodontia to dental students. Students have different abilities and preferences regarding how they learn and process information. This is defined as learning style. In this study, the effectiveness of active learning in the teaching of preclinical oral surgery was examined. The personality type of the groups involved in this study was determined, and the possible effect of personality type on learning style was investigated. Method: This study was undertaken over five years, from 2011 to 2015. The sample consisted of 115 students and eight staff members. Questionnaires were submitted by 68 students and all eight staff members involved. Three measures were used in the study: the Index of Learning Styles (Felder and Soloman, 1991), the Myers-Briggs Type Indicator (MBTI), and the styles of learning typology (Grasha and Hruska-Riechmann). Results and Discussion: Findings indicated that demonstration and minimal clinical exposure give students personal validation. Frequent feedback on their work is strongly indicated to build the cognitive, psychomotor, and interpersonal skills needed from preclinical oral surgery courses. Conclusion: Small-group cooperative active learning in the form of demonstration and minimal clinical exposure that gives frequent feedback and personal validation on students' work is strongly indicated to build the skills needed for preclinical oral surgery courses. PMID:28357004
Lepora, Nathan F; Blomeley, Craig P; Hoyland, Darren; Bracci, Enrico; Overton, Paul G; Gurney, Kevin
2011-11-01
The study of active and passive neuronal dynamics usually relies on a sophisticated array of electrophysiological, staining and pharmacological techniques. We describe here a simple complementary method that recovers many findings of these more complex methods but relies only on a basic patch-clamp recording approach. Somatic short and long current pulses were applied in vitro to striatal medium spiny (MS) and fast spiking (FS) neurons from juvenile rats. The passive dynamics were quantified by fitting two-compartment models to the short current pulse data. Lumped conductances for the active dynamics were then found by compensating this fitted passive dynamics within the current-voltage relationship from the long current pulse data. These estimated passive and active properties were consistent with previous more complex estimations of the neuron properties, supporting the approach. Relationships within the MS and FS neuron types were also evident, including a graduation of MS neuron properties consistent with recent findings about D1 and D2 dopamine receptor expression. Application of the method to simulated neuron data supported the hypothesis that it gives reasonable estimates of membrane properties and gross morphology. Therefore detailed information about the biophysics can be gained from this simple approach, which is useful for both classification of neuron type and biophysical modelling. Furthermore, because these methods rely upon no manipulations to the cell other than patch clamping, they are ideally suited to in vivo electrophysiology. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Light scattering method to measure red blood cell aggregation during incubation
NASA Astrophysics Data System (ADS)
Grzegorzewski, B.; Szołna-Chodór, A.; Baryła, J.; DreŻek, D.
2018-01-01
Red blood cell (RBC) aggregation can be observed both in vivo and in vitro. This process is a cause of alterations of blood flow in the microvascular network. Enhanced RBC aggregation makes oxygen and nutrient delivery difficult. Measurements of RBC aggregation usually give a description of the process for a sample where the state of the solution and cells is well-defined and the system has reached equilibrium. Incubation of RBCs in various solutions is frequently used to study the effects of the solutions on RBC aggregation. The aggregation parameters are compared before and after incubation, while the detailed changes of the parameters during incubation remain unknown. In this paper we propose a method to measure red blood cell aggregation during incubation, based on the well-known technique in which backscattered light is used to assess the parameters of RBC aggregation. A Couette system consisting of two cylinders is used in the method, and the incubation is observed within it. In the proposed method the following sequence of rotations is adopted: two minutes of rotation is followed by a two-minute stop. In this way we obtained a time series of backscattered intensity consisting of signals corresponding to disaggregation and aggregation. It is shown that the temporal changes of the intensity manifest changes of RBC aggregation during incubation. To show the ability of the method to assess the effect of incubation time on RBC aggregation, results are shown for solutions that cause an increase of RBC aggregation as well as for a case where the aggregation is decreased.
Can the comet assay be used reliably to detect nanoparticle-induced genotoxicity?
Karlsson, Hanna L; Di Bucchianico, Sebastiano; Collins, Andrew R; Dusinska, Maria
2015-03-01
The comet assay is a sensitive method to detect DNA strand breaks as well as oxidatively damaged DNA at the level of single cells. Today the assay is commonly used in nano-genotoxicology. In this review we critically discuss possible interactions between nanoparticles (NPs) and the comet assay. Concerns for such interactions have arisen from the occasional observation of NPs in the "comet head", which implies that NPs may be present while the assay is being performed. This could give rise to false positive or false negative results, depending on the type of comet assay endpoint and NP. For most NPs, an interaction that substantially impacts the comet assay results is unlikely. For photocatalytically active NPs such as TiO2, on the other hand, exposure to light containing UV can lead to increased DNA damage. Samples should therefore not be exposed to such light. By comparing studies in which both the comet assay and the micronucleus assay have been used, a good consistency between the assays was found in general (69%); consistency was even higher when excluding studies on TiO2 NPs (81%). The strong consistency between the comet and micronucleus assays for a range of different NPs, even though the two tests measure different endpoints, implies that both can be trusted in assessing the genotoxicity of NPs, and that both could be useful in a standard battery of test methods. © 2014 Wiley Periodicals, Inc.
OSI SAF Sea Surface Temperature reprocessing of MSG/SEVIRI archive.
NASA Astrophysics Data System (ADS)
Saux Picart, Stéphane; Legendre, Gerard; Marsouin, Anne; Péré, Sonia; Roquet, Hervé
2017-04-01
The Ocean and Sea-Ice Satellite Application Facility (OSI-SAF) of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is planning to deliver a reprocessing of Sea Surface Temperature (SST) from the Spinning Enhanced Visible and Infrared Imager/Meteosat Second Generation (SEVIRI/MSG) archive (2004-2012) by the end of 2016. This reprocessing draws on the experience of the OSI SAF team in near-real-time processing of MSG/SEVIRI data. The retrieval method consists of a non-linear split-window algorithm including the algorithm correction scheme developed by Le Borgne et al. (2011). The bias correction relies on simulations of infrared brightness temperatures performed using Numerical Weather Prediction model atmospheric profiles of water vapour and temperature, and the RTTOV radiative transfer model. The cloud mask used is the Climate SAF reprocessing of the MSG/SEVIRI archive, which is consistent over the period in consideration. Atmospheric Saharan dusts have a strong impact on the retrieved SST; they are taken into consideration through the computation of the Saharan Dust Index (Merchant et al., 2006), which is then used to determine an empirical correction applied to the SST. The MSG/SEVIRI SST reprocessing dataset consists of hourly level 3 composites of sub-skin temperature projected onto a regular 0.05° grid over the region delimited by 60N-60S and 60W-60E. This presentation gives an overview of the data and methods used for the reprocessing, the products, and validation results against drifting buoy measurements extracted from the ERA-Clim dataset.
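A generic non-linear split-window retrieval has the following shape (a sketch of the general algorithm family, not the OSI-SAF formulation; the coefficients below are placeholders, since operational values come from regression against buoy matchups or radiative-transfer simulations):

```python
def split_window_sst(t11, t12, t_clim, sec_theta, coeffs):
    """SST = a0*T11 + a1*(T11 - T12)*Tclim + a2*(T11 - T12)*(sec(theta) - 1) + a3.

    T11, T12: brightness temperatures in the two infrared channels;
    Tclim: a first-guess or climatological surface temperature;
    sec_theta: secant of the satellite zenith angle (path-length term).
    """
    a0, a1, a2, a3 = coeffs
    dt = t11 - t12                # split-window difference, a water-vapour proxy
    return a0 * t11 + a1 * dt * t_clim + a2 * dt * (sec_theta - 1.0) + a3

# Placeholder inputs and coefficients, for illustration only
sst = split_window_sst(t11=20.0, t12=18.5, t_clim=19.0, sec_theta=1.25,
                       coeffs=(1.0, 0.05, 0.6, 0.3))
```

The channel-difference terms correct for atmospheric water-vapour absorption, which is why a dust-laden atmosphere (not captured by this form) needs the separate Saharan Dust Index correction.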
Neurocultural evidence that ideal affect match promotes giving
Park, BoKyung; Blevins, Elizabeth; Knutson, Brian; Tsai, Jeanne L
2017-01-01
Why do people give to strangers? We propose that people trust and give more to those whose emotional expressions match how they ideally want to feel (“ideal affect match”). European Americans and Koreans played multiple trials of the Dictator Game with recipients who varied in emotional expression (excited, calm), race (White, Asian) and sex (male, female). Consistent with their culture’s valued affect, European Americans trusted and gave more to excited than calm recipients, whereas Koreans trusted and gave more to calm than excited recipients. These findings held regardless of recipient race and sex. We then used fMRI to probe potential affective and mentalizing mechanisms. Increased activity in the nucleus accumbens (associated with reward anticipation) predicted giving, as did decreased activity in the right temporo-parietal junction (rTPJ; associated with reduced belief prediction error). Ideal affect match decreased rTPJ activity, suggesting that people may trust and give more to strangers whom they perceive to share their affective values. PMID:28379542
Zhang, Zhongyang; Hao, Ke
2015-11-01
Cancer genomes exhibit profound somatic copy number alterations (SCNAs). Studying tumor SCNAs using massively parallel sequencing provides unprecedented resolution while giving rise to new challenges in data analysis, complicated by tumor aneuploidy and heterogeneity as well as normal cell contamination. While the majority of read-depth-based methods use total sequencing depth alone for SCNA inference, the allele-specific signals are undervalued. We propose a joint segmentation and inference approach using both signals to meet some of these challenges. Our method consists of four major steps: 1) extracting the read depth supporting the reference and alternative alleles at each SNP/Indel locus and comparing the total read depth and alternative allele proportion between the tumor and matched normal sample; 2) performing joint segmentation on the two signal dimensions; 3) correcting the copy number baseline from which the SCNA state is determined; 4) calling the SCNA state for each segment based on both signal dimensions. The method is applicable to whole exome/genome sequencing (WES/WGS) as well as SNP array data in a tumor-control study. We applied the method to a dataset containing no SCNAs to test its specificity, created by pairing sequencing replicates of a single HapMap sample as normal/tumor pairs, as well as to a large-scale WGS dataset consisting of 88 liver tumors along with adjacent normal tissues. Compared with representative methods, our method demonstrated improved accuracy, scalability to large cancer studies, capability in handling both sequencing and SNP array data, and the potential to improve the estimation of tumor ploidy and purity.
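The two per-locus signal dimensions described in step 1 can be sketched as follows (a simplified illustration; the function name and read counts are hypothetical, and the real pipeline normalizes for library size and other biases before segmentation):

```python
import math

def scna_signals(tumor_ref, tumor_alt, normal_ref, normal_alt):
    """Two per-locus signals for joint segmentation: the log2 total-depth
    ratio (tumor vs. matched normal) and the difference in alternative-allele
    proportion (tumor minus normal)."""
    t_depth = tumor_ref + tumor_alt
    n_depth = normal_ref + normal_alt
    log_ratio = math.log2(t_depth / n_depth)
    alt_prop_diff = tumor_alt / t_depth - normal_alt / n_depth
    return log_ratio, alt_prop_diff

# A heterozygous SNP inside a hypothetical one-copy gain of the alternative allele
lr, dp = scna_signals(tumor_ref=30, tumor_alt=60, normal_ref=30, normal_alt=30)
```

A gain shows up in both dimensions at once (elevated depth ratio and skewed allele proportion), which is what makes joint segmentation more informative than total depth alone.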
Ferenczy, György G
2013-04-05
The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable for deriving local basis nonorthogonal orbitals that minimize the energy of the system, and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate for use in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent to those of a related approach [G. G. Ferenczy previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, the deformation of the wave-function near the boundary is observed without bond charges, and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two-layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.
Seymour, Brittany; Yang, Helen; Getman, Rebekah; Barrow, Jane; Kalenderian, Elsbeth
2016-06-01
In today's digital era, people are increasingly relying on the Internet, including social media, to access health information and inform their health decisions. This article describes an exploratory initiative to better understand and define the role of dentists in patient education in the context of e-patients and Health 2.0. This initiative consisted of four phases. In Phase I, an interdisciplinary expert advisory committee was assembled for a roundtable discussion about patients' health information-seeking behaviors online. In Phase II, a pilot case study was conducted, with methods and analysis informed by Phase I recommendations. Phase III consisted of a debriefing conference to outline future areas of research on modernizing health communication strategies. In Phase IV, the findings and working theories were presented to 75 dental students, who then took a survey regarding their perspectives with the objective of guiding potential curriculum design for predoctoral courses. The results of the survey showed that the validity of online content was often secondary to the strength of the network sharing it and that advocacy online could be more effective if it allowed for emotional connections with peers rather than preserving accuracy of the information. Students expressed high interest in learning how to harness modern health communications in their clinical care since the role of the dentist is evolving from giving information to giving personalized guidance against the backdrop of an often contradictory modern information environment. The authors recommend that the dental profession develop patient-centered health communication training for predoctoral students and professional development and continuing education for practicing professionals.
Determining the optimal number of Kanban in multi-products supply chain system
NASA Astrophysics Data System (ADS)
Widyadana, G. A.; Wee, H. M.; Chang, Jer-Yuan
2010-02-01
Kanban, a key element of the just-in-time system, is a re-order card or signboard giving an instruction or triggering the pull system to manufacture or supply a component based on actual usage of material. There are two types of Kanban: production Kanban and withdrawal Kanban. This study uses optimal and meta-heuristic methods to determine the Kanban quantity and withdrawal lot sizes in a supply chain system. Although the mixed integer programming (MIP) method gives an optimal solution, it is not time efficient. For this reason, meta-heuristic methods are suggested. In this study, a genetic algorithm (GA) and a hybrid of genetic algorithm and simulated annealing (GASA) are used. The study compares the performance of GA and GASA with that of the optimal MIP method. The given problems show that both GA and GASA result in near-optimal solutions, and they outperform the optimal method in terms of run time. In addition, the GASA heuristic gives a better performance than the GA heuristic.
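The classic kanban-sizing rule behind such models can be sketched as follows (this is the textbook formula, not the paper's MIP or GA/GASA approach; all parameter values are illustrative):

```python
import math

def kanban_count(demand_rate, lead_time, safety_factor, container_size):
    """N = ceil(D * L * (1 + alpha) / C): demand during the replenishment
    lead time, inflated by a safety factor alpha, divided by container capacity."""
    return math.ceil(demand_rate * lead_time * (1.0 + safety_factor) / container_size)

# 120 units/h demand, 2 h replenishment lead time, 10% safety margin,
# 25 units per container
n = kanban_count(120.0, 2.0, 0.1, 25.0)   # 11 cards
```

Optimization methods such as the paper's MIP and GA search over quantities like this jointly with withdrawal lot sizes instead of applying the rule per item in isolation.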
Holographic entanglement for Chern-Simons terms
NASA Astrophysics Data System (ADS)
Azeyanagi, Tatsuo; Loganayagam, R.; Ng, Gim Seng
2017-02-01
We derive the holographic entanglement entropy contribution from pure and mixed gravitational Chern-Simons (CS) terms in AdS_{2k+1}. This is done through two different methods: first, by a direct evaluation of the CS action in a holographic replica geometry, and second by a descent of Dong's derivation applied to the corresponding anomaly polynomial. In lower dimensions (k = 1, 2), the formula coincides with the Tachikawa formula for black hole entropy from gravitational CS terms. New extrinsic curvature corrections appear for k ≥ 3: we give explicit and concise expressions for the two pure gravitational CS terms in AdS_7 and present various consistency checks, including agreement with the black hole entropy formula when evaluated at the bifurcation surface.
Recursive computation of mutual potential between two polyhedra
NASA Astrophysics Data System (ADS)
Hirabayashi, Masatoshi; Scheeres, Daniel J.
2013-11-01
Recursive computation of the mutual potential, force, and torque between two polyhedra is studied. Based on formulations by Werner and Scheeres (Celest Mech Dyn Astron 91:337-349, 2005) and Fahnestock and Scheeres (Celest Mech Dyn Astron 96:317-339, 2006), who applied the Legendre polynomial expansion to gravity interactions and expressed each order term by a shape-dependent part and a shape-independent part, this paper generalizes the computation of each order term, giving recursive relations for the shape-dependent part. To consider the potential, force, and torque, we introduce three tensors. This method is applicable to any multi-body system. Finally, we implement this recursive computation to simulate the dynamics of a two-rigid-body system consisting of two equal-sized parallelepipeds.
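The Legendre polynomial expansion underlying such formulations can be illustrated with Bonnet's three-term recursion, the standard way to evaluate P_n order by order (a generic sketch of the expansion machinery, not the authors' shape-dependent tensor recursion):

```python
def legendre(n, x):
    """P_n(x) via Bonnet's recursion:
    (k + 1) P_{k+1}(x) = (2k + 1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x                    # P_0 and P_1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# P_2(0.5) = (3 * 0.25 - 1) / 2 = -0.125
val = legendre(2, 0.5)
```

Each order of the mutual-potential expansion reuses the two previous orders in exactly this fashion, which is what makes a recursive formulation efficient.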
NASA Astrophysics Data System (ADS)
Vlasov, Vladimir; Pikovsky, Arkady; Macau, Elbert E. N.
2015-12-01
We analyze star-type networks of phase oscillators using two methods. For identical oscillators we adopt the Watanabe-Strogatz approach, which gives a full analytical description of states rotating with constant frequency. For nonidentical oscillators, such states can be obtained in parametric form by a self-consistent approach. In this case stability analysis cannot be performed; however, direct numerical simulations show which solutions are stable and which are not. We consider this system as a model for a drum orchestra, where we assume that the drummers follow the signal of the leader without listening to each other and the coupling parameters are determined by the geometrical organization of the orchestra.
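A direct simulation of the drum-orchestra picture is easy to sketch: one hub ("leader") that the leaf oscillators follow one-directionally, without listening to each other. The equations, coupling constant, and initial phases below are a minimal assumed model, not the paper's exact setup:

```python
import math

def simulate_star(n_leaves=5, k=2.0, omega=1.0, dt=0.01, steps=5000):
    """Forward-Euler integration of a one-directional star:
    hub:   d(theta_h)/dt = omega
    leaf:  d(theta_i)/dt = omega + k * sin(theta_h - theta_i)
    For identical frequencies the leaves phase-lock to the hub."""
    hub = 0.0
    leaves = [0.5 * i for i in range(n_leaves)]  # spread initial phases
    for _ in range(steps):
        hub_new = hub + dt * omega
        leaves = [th + dt * (omega + k * math.sin(hub - th)) for th in leaves]
        hub = hub_new
    # phase differences to the hub, wrapped to [-pi, pi]
    return [math.atan2(math.sin(th - hub), math.cos(th - hub)) for th in leaves]

diffs = simulate_star()
```

With identical frequencies the leaf-hub phase difference obeys dφ/dt = -k sin φ, so every leaf whose initial difference is inside (-π, π) decays to perfect locking, which the simulation confirms.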
NASA Astrophysics Data System (ADS)
Nenashev, A. V.; Koshkarev, A. A.; Dvurechenskii, A. V.
2018-03-01
We suggest an approach to the analytical calculation of the strain distribution due to an inclusion in elastically anisotropic media for the case of cubic anisotropy. The idea consists in the approximate reduction of the anisotropic problem to a (simpler) isotropic problem. This gives, for typical semiconductors, an improvement in accuracy by an order of magnitude, compared to the isotropic approximation. Our method allows using, in the case of elastically anisotropic media, analytical solutions obtained for isotropic media only, such as analytical formulas for the strain due to polyhedral inclusions. The present work substantially extends the applicability of analytical results, making them more suitable for describing real systems, such as epitaxial quantum dots.
Food and fluid intake of the SENECA population residing in Romans, France.
Ferry, M; Hininger-Favier, I; Sidobre, B; Mathey, M F
2001-01-01
To provide information and data on food and fluid intake of free-living elderly people aged 81-86 years residing in the south of France, data were collected using standardised methods from a random sample born between 1913 and 1918. The French study protocol again included data collection on dietary intake using a standardised modified dietary history consisting of a food frequency list and a 3-day estimated dietary record. Total dietary intake was generally low compared with the recommended daily intake for elderly subjects. This descriptive part of the SENECA study gives the opportunity to obtain information on this growing segment of the population. These results should help to adapt the dietary guidelines for this category of the population.
1D gasdynamics of wind-blown bubbles: effects of thermal conduction
NASA Astrophysics Data System (ADS)
Zhekov, S. A.; Myasnikov, A. V.
1998-03-01
Gasdynamic properties of wind-blown bubbles are considered in the framework of a 1D spherically symmetric flow. The model self-consistently takes into account optically-thin-plasma cooling and electron thermal conduction. The numerical method used in the calculations is described in detail. A comparison with the existing self-similar solution is provided. It is shown that the self-similar solution gives a relatively good representation of the hot-bubble interior and could be used to estimate some of its spectral characteristics. However, it is also shown that thermal conduction in combination with cooling may cause additional multiple shocks to appear in the interaction region, and an analysis of the nature of these shocks is provided.
Logistic regression of family data from retrospective study designs.
Whittemore, Alice S; Halpern, Jerry
2003-11-01
We wish to study the effects of genetic and environmental factors on disease risk, using data from families ascertained because they contain multiple cases of the disease. To do so, we must account for the way participants were ascertained, and for within-family correlations in both disease occurrences and covariates. We model the joint probability distribution of the covariates of ascertained family members, given family disease occurrence and pedigree structure. We describe two such covariate models: the random effects model and the marginal model. Both models assume a logistic form for the distribution of one person's covariates that involves a vector beta of regression parameters. The components of beta in the two models have different interpretations, and they differ in magnitude when the covariates are correlated within families. We describe the ascertainment assumptions needed to consistently estimate the parameters beta(RE) of the random effects model and the parameters beta(M) of the marginal model. Under the ascertainment assumptions for the random effects model, we show that conditional logistic regression (CLR) of matched family data gives a consistent estimate for beta(RE) and a consistent estimate for its covariance matrix. Under the ascertainment assumptions for the marginal model, we show that unconditional logistic regression (ULR) gives a consistent estimate for beta(M), and we give a consistent estimator for its covariance matrix. The random effects/CLR approach is simple to use and to interpret, but it can use data only from families containing both affected and unaffected members. The marginal/ULR approach uses data from all individuals, but its variance estimates require special computations. A C program to compute these variance estimates is available at http://www.stanford.edu/dept/HRP/epidemiology.
We illustrate these pros and cons by application to data on the effects of parity on ovarian cancer risk in mother/daughter pairs, and use simulations to study the performance of the estimates. Copyright 2003 Wiley-Liss, Inc.
MODELING THE NEAR-UV BAND OF GK STARS. II. NON-LTE MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ian Short, C.; Campbell, Eamonn A.; Pickup, Heather
We present a grid of atmospheric models and synthetic spectral energy distributions (SEDs) for late-type dwarfs and giants of solar and 1/3 solar metallicity with many opacity sources computed in self-consistent non-local thermodynamic equilibrium (NLTE), and compare them to the LTE grid of Short and Hauschildt (Paper I). We describe, for the first time, how the NLTE treatment affects the thermal equilibrium of the atmospheric structure (T({tau}) relation) and the SED as a finely sampled function of T{sub eff}, log g, and [A/H] among solar metallicity and mildly metal-poor red giants. We compare the computed SEDs to the library of observed spectrophotometry described in Paper I across the entire visible band, and in the blue and red regions of the spectrum separately. We find that for the giants of both metallicities, the NLTE models yield best-fit T{sub eff} values that are 30-90 K lower than those provided by LTE models, while providing greater consistency between log g values, and, for Arcturus, T{sub eff} values, fitted separately to the blue and red spectral regions. There is marginal evidence that NLTE models give more consistent best-fit T{sub eff} values between the red and blue bands for earlier spectral classes among the solar metallicity GK giants than they do for the later classes, but no model fits the blue-band spectrum well for any class. For the two dwarf spectral classes that we are able to study, the effect of NLTE on derived parameters is less significant. We compare our derived T{sub eff} values to several other spectroscopic and photometric T{sub eff} calibrations for red giants, including one that is less model dependent based on the infrared flux method (IRFM). We find that the NLTE models provide slightly better agreement with the IRFM calibration among the warmer stars in our sample, while giving approximately the same level of agreement for the cooler stars.
Consistent assignment of the vibrations of symmetric and asymmetric ortho-disubstituted benzenes
NASA Astrophysics Data System (ADS)
Tuttle, William D.; Gardner, Adrian M.; Andrejeva, Anna; Kemp, David J.; Wakefield, Jonathan C. A.; Wright, Timothy G.
2018-02-01
The form of molecular vibrations, and changes in these, give valuable insights into geometric and electronic structure upon electronic excitation or ionization, and within families of molecules. Here, we give a description of the phenyl-ring-localized vibrational modes of the ground (S0) electronic states of a wide range of ortho-disubstituted benzene molecules including both symmetrically- and asymmetrically-substituted cases. We conclude that the use of the commonly-used Wilson or Varsányi mode labels, which are based on the vibrational motions of benzene itself, is misleading and ambiguous. In addition, we also find the use of the Mi labels for monosubstituted benzenes [A.M. Gardner, T.G. Wright. J. Chem. Phys. 135 (2011) 114305], or the recently-suggested labels for para-disubstituted benzenes [A. Andrejeva, A.M. Gardner, W.D. Tuttle, T.G. Wright, J. Molec. Spectrosc. 321, 28 (2016)] are not appropriate. Instead, we label the modes consistently based upon the Mulliken (Herzberg) method for the modes of ortho-difluorobenzene (oDFB) under Cs symmetry, since we wish the labelling scheme to cover both symmetrically- and asymmetrically-substituted molecules. By studying the vibrational wavenumbers from the same force field while varying the mass of the substituent, we are able to identify the corresponding modes across a wide range of molecules and hence provide consistent assignments. We assign the vibrations of the following sets of molecules: the symmetric o-dihalobenzenes, o-xylene and catechol (o-dihydroxybenzene); and the asymmetric o-dihalobenzenes, o-halotoluenes, o-halophenols and o-cresol. In the symmetrically-substituted species, we find a pair of in-phase and out-of-phase carbon-substituent stretches, and this motion persists in asymmetrically-substituted molecules for heavier substituents. When at least one of the substituents is light, then we find that these evolve into localized carbon-substituent stretches.
Giving Back: Exploring Service-Learning in an Online Learning Environment
ERIC Educational Resources Information Center
McWhorter, Rochell R.; Delello, Julie A.; Roberts, Paul B.
2016-01-01
Service-Learning (SL) as an instructional method is growing in popularity for giving back to the community while connecting the experience to course content. However, little has been published on using SL for online business students. This study highlights an exploratory mixed-methods, multiple case study of an online business leadership and…
Sumiya, Yosuke; Nagahata, Yutaka; Komatsuzaki, Tamiki; Taketsugu, Tetsuya; Maeda, Satoshi
2015-12-03
The significance of kinetic analysis as a tool for understanding the reactivity and selectivity of organic reactions has recently been recognized. However, conventional simulation approaches that solve rate equations numerically are not amenable to multistep reaction profiles consisting of fast and slow elementary steps. Herein, we present an efficient and robust approach for evaluating the overall rate constants of multistep reactions via the recursive contraction of the rate equations to give the overall rate constants for the products and byproducts. This new method was applied to the Claisen rearrangement of allyl vinyl ether, as well as a substituted allyl vinyl ether. Notably, the profiles of these reactions contained 23 and 84 local minima, and 66 and 278 transition states, respectively. The overall rate constant for the Claisen rearrangement of allyl vinyl ether was consistent with the experimental value. The selectivity of the Claisen rearrangement reaction has also been assessed using a substituted allyl vinyl ether. The results of this study showed that the conformational entropy in these flexible chain molecules had a substantial impact on the overall rate constants. This new method could therefore be used to estimate the overall rate constants of various other organic reactions involving flexible molecules.
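The flavor of contracting fast steps into an overall rate constant can be shown with the textbook steady-state elimination of a single intermediate; the paper's recursive scheme generalizes this idea to networks with dozens of minima and transition states, so the function below is only an illustrative special case:

```python
def contract_intermediate(k1, km1, k2):
    """Steady-state elimination of a fast intermediate I in
        A  -(k1)->  I,   I -(km1)-> A,   I -(k2)-> P,
    giving an effective overall rate constant for A -> P:
        k_eff = k1 * k2 / (km1 + k2).
    This is the classical textbook contraction, not the authors' algorithm."""
    return k1 * k2 / (km1 + k2)

k_eff = contract_intermediate(1.0, 10.0, 5.0)
```

In the limit k2 much greater than km1, every molecule that reaches I proceeds to P, and k_eff tends to k1; a recursive scheme applies such contractions repeatedly until only overall product and byproduct rate constants remain.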
A 3T Sodium and Proton Composite Array Breast Coil
Kaggie, Joshua D.; Hadley, J. Rock; Badal, James; Campbell, John R.; Park, Daniel J.; Parker, Dennis L.; Morrell, Glen; Newbould, Rexford D.; Wood, Ali F.; Bangerter, Neal K.
2013-01-01
Purpose The objective of this study was to determine whether a sodium phased array would improve sodium breast MRI at 3T. The secondary objective was to create acceptable proton images with the sodium phased array in place. Methods A novel composite array for combined proton/sodium 3T breast MRI is compared to a coil with a single proton and sodium channel. The composite array consists of a 7-channel sodium receive array, a larger sodium transmit coil, and a 4-channel proton transceive array. The new composite array design utilizes smaller sodium receive loops than typically used in sodium imaging, uses novel decoupling methods between the receive loops and transmit loops, and uses a novel multi-channel proton transceive coil. The proton transceive coil reduces coupling between proton and sodium elements by intersecting the constituent loops to reduce their mutual inductance. The coil used for comparison consists of a concentric sodium and proton loop with passive decoupling traps. Results The composite array coil demonstrates a 2–5x improvement in SNR for sodium imaging and similar SNR for proton imaging when compared to a simple single-loop dual resonant design. Conclusion The improved SNR of the composite array gives breast sodium images of unprecedented quality in reasonable scan times. PMID:24105740
NASA Technical Reports Server (NTRS)
Tam, Christopher; Krothapalli, A
1993-01-01
The research program for the first year of this project (see the original research proposal) consists of developing an explicit marching scheme for solving the parabolized stability equations (PSE). Implicit in this task are mathematical analysis of the computational algorithm, including numerical stability analysis, and the determination of the proper boundary conditions needed at the boundary of the computational domain. Before one can solve the parabolized stability equations for high-speed mixing layers, the mean flow must first be found. In the past, instability analysis of high-speed mixing layers has mostly been performed on mean flow profiles calculated by the boundary layer equations. In carrying out this project, it is believed that the boundary layer equations might not give an accurate enough nonparallel, nonlinear mean flow for parabolized stability analysis. A more accurate mean flow can, however, be found by solving the parabolized Navier-Stokes equations, whose accuracy is consistent with the PSE method. Furthermore, the method of solution is similar. Hence, the major part of this year's effort has been devoted to the development of an explicit numerical marching scheme for the solution of the parabolized Navier-Stokes equations as applied to the high-speed mixing layer problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eaton, David J., E-mail: davideaton@nhs.net; Best, Bronagh; Brew-Graves, Chris
Purpose: In vivo dosimetry provides an independent check of delivered dose and gives confidence in the introduction or consistency of radiotherapy techniques. Single-fraction intraoperative radiotherapy of the breast can be performed with the Intrabeam compact, mobile 50 kV x-ray source (Carl Zeiss Surgical, Oberkochen, Germany). Thermoluminescent dosimeters (TLDs) can be used to estimate skin doses during these treatments. Methods and Materials: Measurements of skin doses were taken using TLDs for 72 patients over 3 years of clinical treatments. Phantom studies were also undertaken to assess the uncertainties resulting from changes in beam quality and backscatter conditions in vivo. Results: The mean measured skin dose was 2.9 {+-} 1.6 Gy, with 11% of readings higher than the prescription dose of 6 Gy, but none of these patients showed increased complications. Uncertainties due to beam hardening and backscatter reduction were small compared with overall accuracy. Conclusions: TLDs are a useful and effective method to measure in vivo skin doses in intraoperative radiotherapy and are recommended for the initial validation or any modification to the delivery of this technique. They are also an effective tool to show consistent and safe delivery on a more frequent basis or to determine doses to other critical structures as required.
Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A
2016-01-01
Two accurate, sensitive, and selective stability-indicating methods are developed and validated for the simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models that are subjected to a comparative study through handling UV spectral data in the range of 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to a reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it keeps the qualitative spectral information of the classical least-squares algorithm for the analyzed components.
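The classical least-squares step that SRACLS builds on can be sketched with synthetic spectra: a mixture spectrum is modeled as a linear combination of pure-component spectra, and concentrations are recovered by least squares. The Gaussian band shapes, the 20-point wavelength grid, and the concentrations below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(215, 350, 20)

# Invented "pure component" spectra (rows): Gaussian-shaped bands standing
# in for AGM, Deg I, and Deg II absorbance profiles.
pure = np.vstack([
    np.exp(-((wavelengths - c) / 25.0) ** 2)
    for c in (240, 280, 320)
])

true_conc = np.array([0.6, 0.3, 0.1])
mixture = true_conc @ pure + rng.normal(0, 1e-4, 20)  # mixture + small noise

# Classical least squares: solve pure.T @ c ~= mixture for concentrations.
est, *_ = np.linalg.lstsq(pure.T, mixture, rcond=None)
```

SRACLS augments this basic step with spectral residual information; PLSR instead builds latent variables from a calibration design like the 16-mixture training set described above.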
NASA Astrophysics Data System (ADS)
Cho, Ilje; Jung, Taehyun; Zhao, Guang-Yao; Akiyama, Kazunori; Sawada-Satoh, Satoko; Kino, Motoki; Byun, Do-Young; Sohn, Bong Won; Shibata, Katsunori M.; Hirota, Tomoya; Niinuma, Kotaro; Yonekura, Yoshinori; Fujisawa, Kenta; Oyama, Tomoaki
2017-12-01
We present the results of a comparative study of amplitude calibrations for the East Asia VLBI Network (EAVN) at 22 and 43 GHz using two different methods, "a priori" and "template spectrum" calibration, with particular attention to lower-declination sources. Using data sets from early EAVN observations, we investigated the elevation dependence of the gain values at seven stations of the KaVA (KVN and VERA Array) and three additional telescopes in Japan (Takahagi 32 m, Yamaguchi 32 m, and Nobeyama 45 m). By comparing the independently obtained gain values from these two methods, we found that they were consistent within 10% at elevations higher than 10°. We also found that the total flux densities of two images produced from the different amplitude calibrations agreed within 10% at both 22 and 43 GHz. Furthermore, with the template spectrum method, additional radio telescopes can participate in KaVA (i.e., EAVN), giving a notable sensitivity increase. Our results thus help constrain the conditions for measuring VLBI amplitudes reliably with EAVN, and we discuss the potential of a further expansion of the telescopes comprising EAVN.
2017-01-01
Evidence-based dietary information represented as unstructured text is crucial information that needs to be accessed in order to help dietitians follow the new knowledge that arrives daily in newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They focus on, for example, extracting gene mentions, protein mentions, relationships between genes and proteins, chemical concepts, and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first involves the detection and determination of entity mentions, and the second involves the selection and extraction of the entities. We evaluate the method using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations. PMID:28644863
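The two-phase detect-then-select structure of a rule-based NER can be sketched with invented patterns. The nutrient vocabulary, the amount regex, and the entity labels below are illustrative assumptions, not drNER's actual rules:

```python
import re

# Toy dietary vocabulary and amount pattern (assumed for illustration).
NUTRIENTS = {"vitamin d", "calcium", "fiber", "protein"}
AMOUNT = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|g|IU)\b")

def extract(text):
    mentions = []
    low = text.lower()
    for term in NUTRIENTS:                     # phase 1: detect nutrient mentions
        for m in re.finditer(re.escape(term), low):
            mentions.append(("NUTRIENT", text[m.start():m.end()]))
    for m in AMOUNT.finditer(text):            # phase 1: detect amount mentions
        mentions.append(("AMOUNT", m.group(0)))
    # phase 2: select -- keep unique mentions, preserving first-seen order
    seen, selected = set(), []
    for item in mentions:
        if item not in seen:
            seen.add(item)
            selected.append(item)
    return selected

ents = extract("Adults should get 1000 mg calcium and vitamin D daily.")
```

A real system would add many more detection rules (units, food groups, recommendation verbs) and a richer selection phase that resolves overlapping and nested candidates.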
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Yang, Bo; Fu, Xiouhua
2007-01-01
The popular method of presenting wavenumber-frequency power spectrum diagrams for studying tropical large-scale waves in the literature is shown to give an incomplete presentation of these waves. The so-called "convectively-coupled Kelvin (mixed Rossby-gravity) waves" are presented as existing only in the symmetric (antisymmetric) component of the diagrams. This is obviously not consistent with the published composite/regression studies of "convectively-coupled Kelvin waves," which illustrate the asymmetric nature of these waves. The cause of this inconsistency is revealed in this note and a revised method of presenting the power spectrum diagrams is proposed. When this revised method is used, "convectively-coupled Kelvin waves" do show anti-symmetric components, and "convectively-coupled mixed Rossby-gravity waves (also known as Yanai waves)" do show a hint of symmetric components. These results bolster a published proposal that these waves be called "chimeric Kelvin waves," "chimeric mixed Rossby-gravity waves," etc. This revised method of presenting power spectrum diagrams offers a more rigorous means of comparing the General Circulation Models (GCM) output with observations by calling attention to the capability of GCMs in correctly simulating the asymmetric characteristics of the equatorial waves.
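The symmetric/antisymmetric split that such spectrum diagrams rest on is a one-line operation on a latitude-resolved field. The toy field below is an assumption chosen so each parity component is known in closed form; a real analysis would follow this split with FFTs in longitude and time:

```python
import numpy as np

# Latitude grid symmetric about the equator (an assumption of this sketch)
# and a toy field with mixed parity: an antisymmetric part sin(x)*lat
# plus a symmetric part cos(x).
lat = np.linspace(-10, 10, 21)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
field = np.sin(x)[None, :] * lat[:, None] + np.cos(x)[None, :]

sym = 0.5 * (field + field[::-1, :])   # equatorially symmetric component
anti = 0.5 * (field - field[::-1, :])  # equatorially antisymmetric component
```

Computing wavenumber-frequency power spectra of `sym` and `anti` separately reproduces the conventional diagrams; the note's point is that physical wave packets generally project onto both components.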
Beam shaping with vectorial vortex beams under low numerical aperture illumination condition
NASA Astrophysics Data System (ADS)
Dai, Jianning; Zhan, Qiwen
2008-08-01
In this paper we propose and demonstrate a novel beam shaping method using vectorial vortex beams. A vectorial vortex beam is a laser beam with a polarization singularity in the beam cross section. This type of beam can be decomposed into two orthogonally polarized components. Each polarized component can have different vortex characteristics and, consequently, a different intensity distribution when focused by a lens. Beam shaping in the far field can be achieved by adjusting the relative weighting of these two components. As one example, we study a vectorial vortex that consists of a linearly polarized Gaussian component and an orthogonally polarized vortex component. When such a vectorial vortex beam is focused by a low-NA lens, the Gaussian component gives rise to a focal intensity distribution with a solid center while the vortex component gives rise to a donut distribution with a hollow dark center. The shape of the focus can be varied continuously by adjusting the relative weight of the two components. Under appropriate conditions, flat-top focusing can be obtained. We experimentally demonstrate the creation of such beams with a liquid crystal spatial light modulator, obtaining a flat-top focus with vectorial vortex beams of topological charge +1.
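A minimal sketch of the weighting idea, assuming the two orthogonally polarized components add incoherently in intensity and take ideal LG00 (Gaussian) and LG01 (donut) radial profiles. In this toy model a unit relative weight cancels the on-axis curvature, flattening the top; the actual experimental profiles and weights are not reproduced here:

```python
import numpy as np

r = np.linspace(0.0, 2.5, 500)   # radial coordinate (assumed waist units)
w = 1.0                          # beam waist (assumed)
u = 2.0 * r**2 / w**2
gauss = np.exp(-u)               # solid-center Gaussian intensity
donut = u * np.exp(-u)           # charge-1 vortex: hollow dark center

def total_intensity(alpha):
    """Incoherent sum; alpha is the relative weight of the vortex component."""
    return gauss + alpha * donut

# With alpha = 1 the profile is (1 + u) e^{-u}, whose slope at the axis
# vanishes to second order in r -- a flattened top.
flat = total_intensity(1.0)
```

Setting alpha below 1 recovers a peaked Gaussian-like focus, and raising it above 1 pushes the maximum off-axis toward a donut, which is the continuous tuning described in the abstract.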
A bi-objective integer programming model for partly-restricted flight departure scheduling
Zhong, Han; Guan, Wei; Zhang, Wenyi; Jiang, Shixiong; Fan, Lingling
2018-01-01
Studies of the air traffic departure scheduling problem (DSP) mainly deal with an independent airport whose departure traffic is not affected by surrounding airports, which, however, is not always the case. In reality, there exist cases where several commercial airports are closely located and one of them possesses a higher priority. During peak hours, the departure activities of the lower-priority airports are usually required to give way to those of the higher-priority airport. These giving-way requirements impose a set of changes on the modeling of the departure scheduling problem for the lower-priority airports. To the best of our knowledge, studies on DSP under this condition are scarce. Accordingly, this paper develops a bi-objective integer programming model to address the flight departure scheduling of the partly-restricted (e.g., lower-priority) one among several adjacent airports. An adapted tabu search algorithm is designed to solve the problem. A case study of Tianjin Binhai International Airport in China demonstrates that the proposed method can clearly improve operational efficiency while still achieving superior equity and regularity among restricted flows. PMID:29715299
Hufnagel, S; Harbison, K; Silva, J; Mettala, E
1994-01-01
This paper describes a new method for the evolutionary determination of user requirements and system specifications called the scenario-based engineering process (SEP). Health care professional workstations are critical components of large-scale health care system architectures. We suggest that domain-specific software architectures (DSSAs) be used to specify standard interfaces and protocols for reusable software components throughout those architectures, including workstations. We encourage the use of engineering principles and abstraction mechanisms. Engineering principles are flexible guidelines, adaptable to particular situations. Abstraction mechanisms are simplifications for the management of complexity. We recommend object-oriented design principles, graphical structural specifications, and formal behavioral specifications of components. We give an ambulatory care scenario and associated models to demonstrate SEP. The scenario uses health care terminology and gives patients' and health care providers' views of the system. The intended benefit is threefold: (i) scenario view abstractions provide consistent interdisciplinary communication; (ii) hierarchical object-oriented structures provide useful abstractions for reuse, understandability, and long-term evolution; and (iii) SEP and health care DSSAs can be integrated into computer-aided software engineering (CASE) environments, which should support rapid construction and certification of individualized systems from reuse libraries.
Primordial gravitational waves in a quantum model of big bounce
NASA Astrophysics Data System (ADS)
Bergeron, Hervé; Gazeau, Jean Pierre; Małkiewicz, Przemysław
2018-05-01
We quantise and solve the dynamics of gravitational waves in a quantum Friedmann-Lemaitre-Robertson-Walker spacetime filled with a perfect fluid. The classical model is formulated canonically. The Hamiltonian constraint is de-parametrised by setting a fluid variable as the internal clock. The obtained reduced (i.e. physical) phase space is then quantised. Our quantisation procedure is implemented in accordance with two different phase space symmetries, namely, the Weyl-Heisenberg symmetry for the perturbation variables and the affine symmetry for the background variables. As an appealing outcome, the initial singularity is removed and replaced with a quantum bounce. The quantum model depends on a free parameter that is naturally induced from quantisation and determines the scale of the bounce. We study the dynamics of the quantised gravitational waves across the bounce through three different methods ("thin-horizon", analytical and numerical) which give consistent results, and we determine the primordial power spectrum for the case of a radiation-dominated universe. Next, we use the instantaneous radiation-matter transition transfer function to make approximate predictions for the late universe and constrain our model with LIGO and Planck data. We also give an estimate of the quantum uncertainties in the present-day universe.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, largely for lack of knowledge of optimization techniques. Therefore, a simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. The system consists of two stages: modelling and optimization. For modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results besides being fast.
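The optimization stage can be sketched with a bare-bones particle swarm. The surrogate cost function standing in for the ELM roughness model, the parameter bounds (cutting speed, feed rate), and the PSO coefficients below are all assumptions for illustration:

```python
import random

# Surrogate cost over (cutting speed v, feed rate f); a real system would
# evaluate the trained ELM model here. Minimum placed at v=180, f=0.2.
def cost(v, f):
    return (v - 180.0) ** 2 / 100.0 + (f - 0.2) ** 2 * 500.0

def pso(n=15, iters=60, seed=1):
    rng = random.Random(seed)
    lo, hi = (100.0, 0.05), (300.0, 0.5)       # assumed parameter bounds
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: cost(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            if cost(*pos[i]) < cost(*pbest[i]):
                pbest[i] = pos[i][:]
                if cost(*pos[i]) < cost(*gbest):
                    gbest = pos[i][:]
    return gbest

best_v, best_f = pso()
```

Swapping the surrogate for a trained model and adding more parameters (depth of cut, nose radius) changes nothing structural in the loop, which is part of the technique's appeal for shop-floor use.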
28 CFR 42.208 - Notice of noncompliance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Act (APA) if: (1) The agency gives all interested parties opportunity for— (i) The submission and... consistent with the APA, it shall presume the APA procedures were applied, and send notification under § 42...
42 CFR 438.12 - Provider discrimination prohibited.
Code of Federal Regulations, 2012 CFR
2012-10-01
... designed to maintain quality of services and control costs and are consistent with its responsibilities to..., or PAHP declines to include individual or groups of providers in its network, it must give the...
42 CFR 438.12 - Provider discrimination prohibited.
Code of Federal Regulations, 2014 CFR
2014-10-01
... designed to maintain quality of services and control costs and are consistent with its responsibilities to..., or PAHP declines to include individual or groups of providers in its network, it must give the...
Garcia, Amee M; Determan, John J; Janesko, Benjamin G
2014-05-08
Substituent effects on the π-π interactions of aromatic rings are a topic of much recent debate. Real substituents give a complicated combination of inductive, resonant, dispersion, and other effects. To help partition these effects, we present calculations on fictitious "pure" σ donor/acceptor substituents, hydrogen atoms with nuclear charges other than 1. "Pure" σ donors with nuclear charge <1 weaken π-π stacking in the sandwich benzene dimer. This result is consistent with the electrostatic model of Hunter and Sanders, and different from real substituents. Calculated inductive effects are largely additive and transferable, consistent with a local direct interaction model. A second series of fictitious substituents, neutral hydrogen atoms with an artificially broadened nuclear charge distribution, give similar trends though with reduced additivity. These results provide an alternative perspective on substituent effects in noncovalent interactions.
NASA Technical Reports Server (NTRS)
Khazanov, G. V.; Gamayunov, K. V.; Jordanova, V. K.; Krivorutsky, E. N.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Initial results from a newly developed model of interacting ring current ions and ion cyclotron waves are presented. The model is described by a system of two coupled kinetic equations: one describes the ring current ion dynamics, and the other gives the wave evolution. This system gives a self-consistent description of the ring current ions and ion cyclotron waves in a quasilinear approach. Calculating ion-wave relationships on a global scale under non-steady-state conditions during the May 2-5, 1998 storm, we present the data at three time cuts around the initial, main, and late recovery phases of the May 4, 1998 storm. The structure and dynamics of the ring current proton precipitating-flux regions and the wave-active regions are discussed in detail.
Self-consistent Formulation of EBW Excitation by Mode Conversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bers, Abraham; Decker, Joan
2005-09-26
Based upon an FLR-hydrodynamic formulation for high-frequency waves in a collisionless plasma, we formulate the self-consistent, coupled set of ordinary differential equations whose solution gives the mode conversion of O- and/or X-waves, propagating at an angle to B0, to electron Bernstein waves (EBW) at the upper-hybrid resonance (UHR) layer occurring at the edge of an ST plasma.
Comparative study of methods for recognition of an unknown person's action from a video sequence
NASA Astrophysics Data System (ADS)
Hori, Takayuki; Ohya, Jun; Kurumisawa, Jun
2009-02-01
This paper proposes a tensor-decomposition-based method that can recognize an unknown person's action from a video sequence, where the unknown person is not included in the database (tensor) used for the recognition. The tensor consists of persons, actions, and time-series image features. For the observed unknown person's action, one of the actions stored in the tensor is assumed. Using the motion signature obtained from this assumption, the unknown person's actions are synthesized, and the actions of one of the persons in the tensor are replaced by the synthesized actions. Then, the core tensor for the replaced tensor is computed. This process is repeated over the actions and persons. For each iteration, the difference between the replaced and original core tensors is computed; the assumption that gives the minimal difference is the action recognition result. For the time-series image features stored in the tensor and extracted from the observed video sequence, a feature based on the contour shape of the human body silhouette is used. To show its validity, the proposed method is experimentally compared with the nearest-neighbor rule and a principal component analysis based method. Experiments on seven kinds of actions performed by 33 persons show that the proposed method achieves better recognition accuracies for the seven actions than the other methods.
Efficiently estimating salmon escapement uncertainty using systematically sampled data
Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.
2007-01-01
Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
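As a hedged illustration of the kind of estimator being compared, one common variance estimator for a nonreplicated systematic sample, the successive-difference estimator, can be sketched as follows. The counts and sample sizes are made up, and this is not claimed to be the study's least-biased estimator:

```python
import numpy as np

def sysample_total_var(sample, N):
    """Successive-difference variance estimate of the estimated total
    from a nonreplicated systematic sample of n out of N counts."""
    y = np.asarray(sample, float)
    n = len(y)
    s2 = np.sum(np.diff(y)**2) / (2.0 * (n - 1))  # mean squared successive difference
    var_mean = (1.0 - n / N) * s2 / n             # finite population correction
    return N**2 * var_mean                        # scale up to the total

# Hypothetical tower counts taken every 4th hour of a 40-hour passage window
counts = [120, 135, 150, 160, 158, 149, 140, 131, 120, 118]
N = 40
total_hat = N * np.mean(counts)          # estimated total passage
var_hat = sysample_total_var(counts, N)  # its estimated variance
```

Because successive differences track the local level of a smooth diurnal pattern, this estimator is less inflated by trend than the naive simple-random-sampling variance, though, as the abstract notes, some bias remains.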
Mater semper incertus est: who's your mummy?
D'alton-Harrison, Rita
2014-01-01
In English law, the legal term for father has been given a broad definition, but the definition of mother remains rooted in biology, with the Roman law principle mater semper certa est (the mother is always certain) remaining the norm. However, motherhood may be acquired through giving birth to a child, by donation of gametes, or by caring for and nurturing a child, so that the identity of the mother is no longer certain, particularly in the case of surrogacy arrangements. While the law in the UK may automatically recognise the parental status of a commissioning father in a traditional surrogacy arrangement, the parental status of the commissioning mother is not automatically recognised in either a traditional or a gestational surrogacy arrangement. Thus the maxim mater est quam gestatio demonstrat (the mother is demonstrated by gestation) is also not approached consistently in the legal interpretation of parentage or motherhood in surrogacy as against other assisted reproduction methods. This raises questions about the extent to which motherhood should be affected by the method of reproduction and whether the sociological and philosophical concept of motherhood should, in the case of surrogacy, give rise to a new principle of 'mater semper incertus est' (the mother is uncertain). This article will argue that the time has come to move away from a legal definition of 'mother' that is based on biology to one that recognises the different forms of motherhood. © The Author [2014]. Published by Oxford University Press; all rights reserved. For Permissions, please email: journals.permissions@oup.com.
Bilocal expansion of the Borel amplitude and the hadronic tau decay width
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cvetic, Gorazd; Lee, Taekoon
2001-07-01
The singular part of the Borel transform of a QCD amplitude near the infrared renormalon can be expanded in terms of higher order Wilson coefficients of the operators associated with the renormalon. In this paper we observe that this expansion gives nontrivial constraints on the Borel amplitude that can be used to improve the accuracy of the ordinary perturbative expansion of the Borel amplitude. In particular, we consider the Borel transform of the Adler function and its expansion around the first infrared renormalon due to the gluon condensate. Using the next-to-leading order (NLO) Wilson coefficient of the gluon condensate operator, we obtain an exact constraint on the Borel amplitude at the first IR renormalon. We then extrapolate, using judiciously chosen conformal transformations and Padé approximants, the ordinary perturbative expansion of the Borel amplitude in such a way that this constraint is satisfied. This procedure allows us to predict the O(α_s^4) coefficient of the Adler function, which gives a result consistent with the estimate by Kataev and Starshenko using a completely different method. We then apply this improved Borel amplitude to the tau decay width and obtain the strong coupling constant α_s(M_Z^2) = 0.1193 ± 0.0007_exp ± 0.0010_EW+CKM ± 0.0009_meth ± 0.0003_evol. We then compare this result with those of other resummation methods.
Robert Frost as Teacher. A Poet's Interpretation of the Teacher's Task.
ERIC Educational Resources Information Center
Larson, Mildred
1979-01-01
Robert Frost's method of teaching is explained. He saw all education as self-education, not something a teacher can give a student. Frost believed freedom to be a necessity and his method gives the student much freedom while also placing a heavy burden of responsibility on him. (Article originally published in 1951.) (AF)
Governing equations for electro-conjugate fluid flow
NASA Astrophysics Data System (ADS)
Hosoda, K.; Takemura, K.; Fukagata, K.; Yokota, S.; Edamura, K.
2013-12-01
An electro-conjugate fluid (ECF) is a kind of dielectric liquid which generates a powerful flow when a high DC voltage is applied via tiny electrodes. This study deals with the derivation of the governing equations for electro-conjugate fluid flow based on the Korteweg-Helmholtz (KH) equation, which represents the force in a dielectric liquid subjected to a high DC voltage. The governing equations consist of Gauss's law, charge conservation with charge recombination, the KH equation, the continuity equation, and the incompressible Navier-Stokes equations. The KH equation comprises the Coulomb force, the dielectric-constant-gradient force, and the electrostriction force. The governing equations give the distributions of the electric field, the charge density, and the flow velocity. In this study, direct numerical simulation (DNS) is used in order to obtain these distributions at arbitrary times. The successive over-relaxation (SOR) method is used to solve Gauss's law, and the constrained interpolation pseudo-particle (CIP) method is used to solve charge conservation with charge recombination. A third-order Runge-Kutta method and a conservative second-order-accurate finite difference method are used to solve the Navier-Stokes equations with the KH equation. This study also deals with the measurement of ECF flow generated with a symmetrical pair of pole electrodes made of 0.3 mm diameter piano wire. The working fluid is FF-1EHA2, a member of the ECF family. The flow is observed from both electrodes, i.e., the flows collide in between the electrodes. The governing equations successfully reproduce, by numerical simulation, the mean flow velocity between the collector pole electrode and the colliding region.
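The SOR step of such a pipeline, applied to Gauss's law, can be sketched as a minimal 2-D Poisson solver. Grid size, charge distribution, and units below are illustrative assumptions only, not the study's actual setup:

```python
import numpy as np

def sor_poisson(rho, h, eps, omega=1.8, tol=1e-6, max_iter=10000):
    """Solve the discrete Gauss's law  ∇²φ = -ρ/ε  on a uniform grid,
    with φ = 0 on the boundary, by successive over-relaxation."""
    phi = np.zeros_like(rho, dtype=float)
    for _ in range(max_iter):
        max_update = 0.0
        for i in range(1, rho.shape[0] - 1):
            for j in range(1, rho.shape[1] - 1):
                gs = 0.25 * (phi[i+1, j] + phi[i-1, j] + phi[i, j+1]
                             + phi[i, j-1] + h*h*rho[i, j]/eps)
                delta = omega * (gs - phi[i, j])   # over-relaxed Gauss-Seidel step
                phi[i, j] += delta
                max_update = max(max_update, abs(delta))
        if max_update < tol:
            break
    return phi

# Point-like charge in the middle of a grounded box (illustrative units)
rho = np.zeros((21, 21))
rho[10, 10] = 1.0
phi = sor_poisson(rho, h=0.05, eps=1.0)
```

With over-relaxation factor ω between 1 and 2, convergence is markedly faster than plain Gauss-Seidel (ω = 1), which is why SOR is a common choice for the electrostatic sub-problem.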
2012-01-01
The Walt Disney Company has never lost sight of its founder's edict: “Give the public everything you can give them.” From this simple statement, everyone at Disney strives to exceed customer expectations every day. For more than 80 years this singular pursuit of excellence in delivering consistent quality service has earned the Disney organization a world-renowned reputation and ongoing business success. Uncover some of the secrets behind the Disney service culture and processes. In this session, you will examine the time-tested model for delivering world-class Guest service and discover how attention to detail creates a consistent, successful environment for both employees and customers. You can then use these ideas to transform and improve your own organization's delivery of quality service. You will learn how to: develop an organizational culture that supports consistent delivery of quality service; evaluate the Disney approach and tailor it to your business; design quality service standards and processes to raise the level of customer satisfaction; create metrics to gauge the needs, perceptions and expectations of your customers; enable employees, settings and processes to convey your quality service commitment; and implement a strategic plan for monitoring the delivery of seamless customer experiences.
Exploring Women’s Personal Experiences of Giving Birth in Gonabad City: A Qualitative Study
Askari, Fariba; Atarodi, Alireza; Torabi, Shirin; Moshki, Mahdi
2014-01-01
Background: Women’s health is an important concern for society. The aim of this qualitative study, which used a phenomenological approach, was to explore the personal experiences of women in Gonabad city who had positive experiences of giving birth, in order to establish quality care and identify the midwifery-care factors related to this physiological phenomenon. Methods: The participants were 21 primiparous women who had a normal, uncomplicated birth in the hospital of Gonabad University of Medical Sciences. Participants were selected purposefully, and in-depth interviews were continued until data saturation was reached. The data were collected through open, semi-structured, interactional in-depth interviews with all the participants. All interviews were taped, transcribed, and then analyzed through a qualitative content analysis method to identify concepts and themes. Findings: Several categories emerged. A quiet and safe environment was the most urgent need of most women giving birth. Unnecessary routine interventions that are performed on all women regardless of their needs, and that should be avoided, included absolute rest, establishing an intravenous line, frequent vaginal examinations, fasting, and early amniotomy. All the women wanted to take an active part in their birth, because they believed it could affect the outcome. Conclusion: We hope that giving birth will be a pleasant and enjoyable experience for all mothers. PMID:25168980
Contamination in the MACHO data set and the puzzle of Large Magellanic Cloud microlensing
NASA Astrophysics Data System (ADS)
Griest, Kim; Thomas, Christian L.
2005-05-01
In a recent series of three papers, Belokurov, Evans & Le Du and Evans & Belokurov reanalysed the MACHO collaboration data and gave alternative sets of microlensing events and an alternative optical depth to microlensing towards the Large Magellanic Cloud (LMC). Although these authors examined less than 0.2 per cent of the data, they reported that by using a neural net program they had reliably selected a better (and smaller) set of microlensing candidates. Estimating the optical depth from this smaller set, they claimed that the MACHO collaboration overestimated the optical depth by a significant factor and that the MACHO microlensing experiment is consistent with lensing by known stars in the Milky Way and LMC. As we show below, the analysis by these authors contains several errors, and as a result their conclusions are incorrect. Their efficiency analysis is in error, and since they did not search through the entire MACHO data set, they do not know how many microlensing events their neural net would find in the data nor what optical depth their method would give. Examination of their selected events suggests that their method misses low signal-to-noise ratio events and thus would have lower efficiency than the MACHO selection criteria. In addition, their method is likely to give many more false positives (non-lensing events identified as lensing). Both effects would increase their estimated optical depth. Finally, we note that the EROS discovery that LMC event 23 is a variable star reduces the MACHO collaboration estimates of optical depth and the Macho halo fraction by around 8 per cent, and does open the question of additional contamination.
Xie, Yuanlong; Tang, Xiaoqi; Song, Bao; Zhou, Xiangdong; Guo, Yixuan
2018-04-01
In this paper, data-driven adaptive fractional order proportional integral (AFOPI) control is presented for a permanent magnet synchronous motor (PMSM) servo system perturbed by measurement noise and data dropouts. The proposed method directly exploits the closed-loop process data for the AFOPI controller design under unknown noise distribution and data missing probability. Firstly, the method casts AFOPI controller tuning as a parameter identification problem using modified ℓp-norm virtual reference feedback tuning (VRFT). Then, iteratively reweighted least squares is integrated into the ℓp-norm VRFT to give a consistent compensation solution for the AFOPI controller. The measurement noise and data dropouts are estimated and eliminated by periodic feedback compensation, so that the AFOPI controller is updated online to accommodate time-varying operating conditions. Moreover, convergence and stability are guaranteed by mathematical analysis. Finally, the effectiveness of the proposed method is demonstrated in both simulations and experiments on a practical PMSM servo system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
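The iteratively reweighted least squares ingredient can be illustrated generically. The sketch below solves a plain ℓp-norm linear regression by IRLS; the data, p value, and iteration count are assumptions, and this is not the paper's full VRFT procedure:

```python
import numpy as np

def irls_lp(X, y, p=1.1, iters=50, delta=1e-8):
    """Fit  min_b ||y - X b||_p  by iteratively reweighted least squares."""
    b = np.linalg.lstsq(X, y, rcond=None)[0]           # start from the l2 solution
    for _ in range(iters):
        r = y - X @ b
        w = np.maximum(np.abs(r), delta) ** (p - 2.0)  # lp residual weights
        b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return b

# Hypothetical robust line fit: two gross spikes among 50 points
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + 0.01 * rng.standard_normal(50)
y[[3, 27]] += 5.0                                      # simulated outliers
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = irls_lp(X, y, p=1.1)
```

With p close to 1, large residuals receive small weights, so the fit is far less distorted by the spikes than an ordinary least-squares fit would be; this robustness to non-Gaussian disturbances is the motivation for ℓp-norm tuning in the abstract.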
REDUCING AMBIGUITY IN THE FUNCTIONAL ASSESSMENT OF PROBLEM BEHAVIOR
Rooker, Griffin W.; DeLeon, Iser G.; Borrero, Carrie S. W.; Frank-Crawford, Michelle A.; Roscoe, Eileen M.
2015-01-01
Severe problem behavior (e.g., self-injury and aggression) remains among the most serious challenges for the habilitation of persons with intellectual disabilities and is a significant obstacle to community integration. The current standard of behavior analytic treatment for problem behavior in this population consists of a functional assessment and treatment model. Within that model, the first step is to assess the behavior–environment relations that give rise to and maintain problem behavior, a functional behavioral assessment. Conventional methods of assessing behavioral function include indirect, descriptive, and experimental assessments of problem behavior. Clinical investigators have produced a rich literature demonstrating the relative effectiveness for each method, but in clinical practice, each can produce ambiguous or difficult-to-interpret outcomes that may impede treatment development. This paper outlines potential sources of variability in assessment outcomes and then reviews the evidence on strategies for avoiding ambiguous outcomes and/or clarifying initially ambiguous results. The end result for each assessment method is a set of best practice guidelines, given the available evidence, for conducting the initial assessment. PMID:26236145
NASA Astrophysics Data System (ADS)
Zhu, Minjie; Scott, Michael H.
2017-07-01
Accurate and efficient response sensitivities for fluid-structure interaction (FSI) simulations are important for assessing the uncertain response of coastal and off-shore structures to hydrodynamic loading. To compute gradients efficiently via the direct differentiation method (DDM) for the fully incompressible fluid formulation, approximations of the sensitivity equations are necessary, leading to inaccuracies of the computed gradients when the geometry of the fluid mesh changes rapidly between successive time steps or the fluid viscosity is nonzero. To maintain accuracy of the sensitivity computations, a quasi-incompressible fluid is assumed for the response analysis of FSI using the particle finite element method and DDM is applied to this formulation, resulting in linearized equations for the response sensitivity that are consistent with those used to compute the response. Both the response and the response sensitivity can be solved using the same unified fractional step method. FSI simulations show that although the response using the quasi-incompressible and incompressible fluid formulations is similar, only the quasi-incompressible approach gives accurate response sensitivity for viscous, turbulent flows regardless of time step size.
Development of dynamic Bayesian models for web application test management
NASA Astrophysics Data System (ADS)
Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.
2018-03-01
The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
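A minimal concrete instance of such a dynamic Bayesian model is forward filtering over a two-state, two-slice network (equivalently, a hidden Markov model), where test outcomes update a belief about a latent defect state. All probabilities below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Two-slice transition and observation models (illustrative numbers only)
T = np.array([[0.9, 0.1],     # P(state_t | state_{t-1}); states: 0=stable, 1=buggy
              [0.3, 0.7]])
E = np.array([[0.95, 0.05],   # P(test outcome | state); outcomes: 0=pass, 1=fail
              [0.40, 0.60]])
prior = np.array([0.8, 0.2])

def filter_belief(obs_seq):
    """Forward filtering: belief over the hidden state after each test run."""
    b = prior.copy()
    beliefs = []
    for o in obs_seq:               # o = 0 (pass) or 1 (fail)
        b = E[:, o] * (T.T @ b)     # predict one slice ahead, then condition
        b /= b.sum()
        beliefs.append(b.copy())
    return beliefs

beliefs = filter_belief([0, 1, 1, 1])   # one passing run, then repeated failures
```

Repeated failing runs drive the belief toward the "buggy" state, which is the kind of situational, multi-slice inference the abstract attributes to dynamic Bayesian test models.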
Heyndrickx, Wouter; Salvador, Pedro; Bultinck, Patrick; Solà, Miquel; Matito, Eduard
2011-02-01
Several definitions of an atom in a molecule (AIM) in three-dimensional (3D) space, including both fuzzy and disjoint domains, are used to calculate electron sharing indices (ESI) and related electronic aromaticity measures, namely, I(ring) and multicenter indices (MCI), for a wide set of cyclic planar aromatic and nonaromatic molecules of different ring size. The results obtained using the recent iterative Hirshfeld scheme are compared with those derived from the classical Hirshfeld method and from Bader's quantum theory of atoms in molecules. For bonded atoms, all methods yield ESI values in very good agreement, especially for C-C interactions. In the case of nonbonded interactions, there are relevant deviations, particularly between fuzzy and QTAIM schemes. These discrepancies directly translate into significant differences in the values and the trends of the aromaticity indices. In particular, the chemically expected trends are more consistently found when using disjoint domains. Careful examination of the underlying effects reveals the different reasons why the aromaticity indices investigated give the expected results for binary divisions of 3D space. Copyright © 2010 Wiley Periodicals, Inc.
Yang, S A
2002-10-01
This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating an auxiliary interior surface that satisfies certain boundary condition into the body surface. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.
Screening and clustering of sparse regressions with finite non-Gaussian mixtures.
Zhang, Jian
2017-06-01
This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of the existing procedures in the content of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice. © 2016, The International Biometric Society.
Estimation of High-Dimensional Graphical Models Using Regularized Score Matching
Lin, Lina; Drton, Mathias; Shojaie, Ali
2017-01-01
Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
NASA Astrophysics Data System (ADS)
Nagaraja, Padmarajaiah; Avinash, Krishnegowda; Shivakumar, Anantharaman; Dinesh, Rangappa; Shrestha, Ashwinee Kumar
2010-11-01
Here we describe a new spectrophotometric method for measuring total bilirubin in serum. The method is based on the cleavage of bilirubin to give formaldehyde, which then reacts with diazotized 3-methyl-2-benzothiazolinone hydrazone hydrochloride to give a blue-colored solution with maximum absorbance at 630 nm. The sensitivity of the developed method was compared with the Jendrassik-Grof assay procedure, and its applicability was tested with human serum samples. Good correlation was attained between the two methods, giving a slope of 0.994, an intercept of 0.015, and R² = 0.997. Beer's law was obeyed in the range 0.068-17.2 μM with good linearity (absorbance y = 0.044 C_bil + 0.003). The relative standard deviation was 0.006872; within-day precision ranged from 0.3 to 1.2%, and day-to-day precision from 1 to 6%. Recovery of the method varied from 97 to 102%. The proposed method has higher sensitivity with less interference. The obtained product was extracted and spectrally characterized for structural confirmation by FT-IR and ¹H NMR.
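A calibration line of this form can be reproduced from standards by ordinary least squares. The standard concentrations below are hypothetical, generated from the reported line for illustration:

```python
import numpy as np

# Hypothetical bilirubin standards: concentration (µM) vs absorbance at 630 nm
conc = np.array([0.5, 2.0, 5.0, 8.0, 12.0, 17.0])
absb = 0.044 * conc + 0.003                    # ideal Beer's-law response

slope, intercept = np.polyfit(conc, absb, 1)   # linear calibration fit
r2 = np.corrcoef(conc, absb)[0, 1] ** 2

def conc_from_abs(a):
    """Invert the calibration line to report an unknown sample's concentration."""
    return (a - intercept) / slope
```

In practice each standard would be measured in replicate and the fitted slope and intercept, with their uncertainties, would define the working range of the assay.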
Weyl consistency conditions in non-relativistic quantum field theory
Pal, Sridip; Grinstein, Benjamín
2016-12-05
Weyl consistency conditions have been used in unitary relativistic quantum field theory to impose constraints on the renormalization group flow of certain quantities. We classify the Weyl anomalies and their renormalization scheme ambiguities for generic non-relativistic theories in 2 + 1 dimensions with anisotropic scaling exponent z = 2; the extension to other values of z is discussed as well. We give the consistency conditions among these anomalies. As an application we find several candidates for a C-theorem, and we comment on possible candidates for a C-theorem in higher dimensions.
Common Badging and Access Control System (CBACS)
NASA Technical Reports Server (NTRS)
Baldridge, Tim
2005-01-01
The goals of the project are to: achieve high business value through a common badging and access control system that integrates with smart cards; provide physical (versus logical) deployment of smart cards initially; provide a common, consistent, and reliable environment into which to release the smart card; give the opportunity to develop agency-wide consistent processes, practices, and policies; enable enterprise data capture and management; and promote data validation prior to smart card issuance.
Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems
NASA Astrophysics Data System (ADS)
Razzak, M. A.; Alam, M. Z.; Sharif, M. N.
2018-03-01
In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and solution procedure are easy and straightforward. The classical multiple-scales (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and are better than other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is a surprising 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas that obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems but also gives better results for strongly nonlinear systems with both small and strong damping effects.
Installation and management of the SPS and LEP control system computers
NASA Astrophysics Data System (ADS)
Bland, Alastair
1994-12-01
Control of the CERN SPS and LEP accelerators and of service equipment on the two CERN main sites is performed via workstations, file servers, Process Control Assemblies (PCAs) and Device Stub Controllers (DSCs). This paper describes the methods and tools that have been developed to manage the file servers, PCAs and DSCs since the LEP startup in 1989. There are five operational DECstation 5000s used as file servers and boot servers for the PCAs and DSCs. The PCAs consist of 90 SCO Xenix 386 PCs, 40 LynxOS 486 PCs and more than 40 older NORD 100s. The DSCs consist of 90 OS-9 68030 VME crates and 10 LynxOS 68030 VME crates. In addition there are over 100 development systems. The controls group is responsible for installing the computers, starting all the user processes and ensuring that the computers and the processes run correctly. The operators in the SPS/LEP control room and the Services control room have a Motif-based X Window program which gives them, in real time, the state of all the computers and allows them to solve problems or reboot them.
NASA Astrophysics Data System (ADS)
Qiao, Bin; He, X. T.; Zhu, Shao-ping; Zheng, C. Y.
2005-08-01
The acceleration of plasma electrons in intense laser-plasma interaction is investigated analytically and numerically, with the combined effect of the laser fields and the self-consistent spontaneous fields (including the quasistatic electric field Esl, the azimuthal quasistatic magnetic field Bsθ and the axial one Bsz) considered in full for the first time. An analytical relativistic electron fluid model using the test-particle method has been developed to give an explicit analysis of the effect of each quasistatic field. The ponderomotive accelerating and scattering effects on electrons are partly offset by Esl; furthermore, Bsθ pinches and Bsz collimates electrons along the laser axis. The dependences of the energy gain and scattering angle of an electron on its initial radial position, the plasma density, and the laser intensity are studied. The qualities of the relativistic electron beam (REB), such as energy spread, beam divergence, and emission (scattering) angle, generated by both circularly polarized (CP) and linearly polarized (LP) lasers are examined. The results show that the CP laser has a clear advantage over the LP laser, as it generates an REB with better collimation and stability.
Implementation of Consolidated HIS: Improving Quality and Efficiency of Healthcare
Choi, Jinwook; Seo, Jeong-Wook; Chung, Chun Kee; Kim, Kyung-Hwan; Kim, Ju Han; Kim, Jong Hyo; Chie, Eui Kyu; Cho, Hyun-Jai; Goo, Jin Mo; Lee, Hyuk-Joon; Wee, Won Ryang; Nam, Sang Mo; Lim, Mi-Sun; Kim, Young-Ah; Yang, Seung Hoon; Jo, Eun Mi; Hwang, Min-A; Kim, Wan Suk; Lee, Eun Hye; Choi, Su Hi
2010-01-01
Objectives: Adoption of hospital information systems offers distinctive advantages in healthcare delivery. Implementation of a consolidated hospital information system at Seoul National University Hospital led to significant improvements in the quality of healthcare and the efficiency of hospital management. Methods: The hospital information system at Seoul National University Hospital consists of component applications: clinical information systems, clinical research support systems, administrative information systems, management information systems, education support systems, and referral systems, which operate together to deliver healthcare services at the highest level of performance. Results: The clinical information systems, which consist of applications such as electronic medical records and picture archiving and communication systems, primarily support clinical activities. The clinical research support system provides valuable resources supporting various aspects of clinical work, ranging from the management of clinical laboratory tests to the establishment of care-giving procedures. Conclusions: Seoul National University Hospital strives to move its hospital information system to a whole new level that enables customized healthcare services and fulfills individual requirements. The current information strategy is being formulated as an initial step of development, promoting the establishment of a next-generation hospital information system. PMID:21818449
Fast Time Response Electromagnetic Particle Injection System for Disruption Mitigation
NASA Astrophysics Data System (ADS)
Raman, Roger; Lay, W.-S.; Jarboe, T. R.; Menard, J. E.; Ono, M.
2017-10-01
Predicting and controlling disruptions is an urgent issue for ITER. In the proposed method, a radiative payload consisting of microspheres of Be, BN, B, or other acceptable low-Z materials would be injected inside the q = 2 surface for thermal and runaway electron mitigation. The radiative payload would be accelerated to the required velocities (0.2 to >1 km/s) in an Electromagnetic Particle Injector (EPI). An important advantage of the EPI system is that it could be positioned very close to the reactor vessel. This has the added benefit that the external field near a high-field tokamak dramatically improves the injector performance while simultaneously reducing the system response time. An NSTX-U/DIII-D scale system has been tested off-line to verify the critical parameters: the projected system response time and the attainable velocities. Both are consistent with the model calculations, giving confidence that an ITER-scale system could be built to ensure the safety of the ITER device. This work is supported by U.S. DOE Contracts DE-AC02-09CH11466, DE-FG02-99ER54519 AM08, and DE-SC0006757.
General mechanism of two-state protein folding kinetics.
Rollins, Geoffrey C; Dill, Ken A
2014-08-13
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape rather than a simple funnel, that folding is two-state (single-exponential) when secondary structures are intrinsically unstable, and that each structure along the folding path is a transition state for the previous one. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while remaining consistent with the near independence of the folding equilibrium constant on size. The model gives estimates of the folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s.
Familial cancer syndromes and clusters.
Birch, J M
1994-07-01
The study of rare families in which a variety of cancers occur, usually at an early age and with patterns consistent with a common hereditary mechanism, has contributed much to our understanding of the process of carcinogenesis. So far, genes identified as having a role in cancer predisposition in these families have also been important in the histogenesis of sporadic cancers. In the two most clearly defined cancer family syndromes, the Li-Fraumeni syndrome and Lynch syndrome II, the genes involved predispose to diverse but specific constellations of cancers. Genes associated with site-specific familial cancer clusters may also give rise to increased susceptibility to other cancers, and site-specific clusters may represent one end of a spectrum. A consistent feature of familial cancer syndromes is the variable expression within and between families. A challenge for the future will be to determine other factors which may interact with the principal genes involved, giving rise to this variability.
Social support and ambulatory blood pressure: an examination of both receiving and giving.
Piferi, Rachel L; Lawler, Kathleen A
2006-11-01
The relationship between the social network and physical health has been studied extensively and it has consistently been shown that individuals live longer, have fewer physical symptoms of illness, and have lower blood pressure when they are a member of a social network than when they are isolated. Much of the research has focused on the benefits of receiving social support from the network and the effects of giving to others within the network have been neglected. The goal of the present research was to systematically investigate the relationship between giving and ambulatory blood pressure. Systolic blood pressure, diastolic blood pressure, mean arterial pressure, and heart rate were recorded every 30 min during the day and every 60 min at night during a 24-h period. Linear mixed models analyses revealed that lower systolic and diastolic blood pressure and mean arterial pressure were related to giving social support. Furthermore, correlational analyses revealed that participants with a higher tendency to give social support reported greater received social support, greater self-efficacy, greater self-esteem, less depression, and less stress than participants with a lower tendency to give social support to others. Structural equation modeling was also used to test a proposed model that giving and receiving social support represent separate pathways predicting blood pressure and health. From this study, it appears that giving social support may represent a unique construct from receiving social support and may exert a unique effect on health.
Salapovic, Helena; Geier, Johannes; Reznicek, Gottfried
2013-01-01
Sesquiterpene lactones (SLs), mainly those with an activated exocyclic methylene group, are important allergens in Asteraceae (Compositae) plants. As a screening tool, the Compositae mix, consisting of five Asteraceae plant extracts with allergenic potential (feverfew, tansy, arnica, yarrow, and German chamomile), is part of several national patch test baseline series. However, the SL content of the Compositae mix may vary with the source material. Therefore, a simple spectrophotometric method for the quantitative measurement of SLs with the α-methylene-γ-butyrolactone moiety was developed, giving the percentage of allergenic compounds in plant extracts. The method was validated, and five Asteraceae extracts used in routine patch test screening were evaluated: feverfew (Tanacetum parthenium L.), tansy (Tanacetum vulgare L.), arnica (Arnica montana L.), yarrow (Achillea millefolium L.), and German chamomile (Chamomilla recutita L. Rauschert). A good correlation was found between the results obtained using the proposed spectrophotometric method and the corresponding clinical results. Thus, the introduced method is a valuable tool for evaluating the allergenic potential, and for the simple and efficient quality control, of plant extracts with allergenic potential. PMID:24106675
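The abstract does not give the assay's reagent, wavelength, or absorptivity, so the following is only a generic sketch of how a spectrophotometric percentage determination works via the Beer-Lambert law; every number (ε, path length, molar mass, sample masses) is a hypothetical placeholder, not a value from the validated method.

```python
# Generic spectrophotometric quantification via the Beer-Lambert law.
# All numeric values below are hypothetical illustrations, not values
# from the validated assay described in the abstract.

def sl_percentage(absorbance, epsilon, path_cm, mol_mass, vol_l, extract_mg):
    """Percent of alpha-methylene-gamma-butyrolactone-type SLs in an extract.

    absorbance : measured A at the assay wavelength
    epsilon    : molar absorptivity (L mol^-1 cm^-1), from calibration
    path_cm    : cuvette path length (cm)
    mol_mass   : molar mass of the reference SL (g/mol)
    vol_l      : sample volume (L)
    extract_mg : mass of extract dissolved (mg)
    """
    conc_mol_l = absorbance / (epsilon * path_cm)   # Beer-Lambert: A = e*l*c
    sl_mg = conc_mol_l * vol_l * mol_mass * 1000.0  # dissolved SL mass in mg
    return 100.0 * sl_mg / extract_mg

# Hypothetical reading: A = 0.45, epsilon = 9000, 1 cm cell,
# parthenolide-like molar mass, 10 mL sample from 25 mg of extract.
print(round(sl_percentage(0.45, 9000.0, 1.0, 248.4, 0.010, 25.0), 2))
```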
Measuring NMHC and NMOG emissions from motor vehicles via FTIR spectroscopy
NASA Astrophysics Data System (ADS)
Gierczak, Christine A.; Kralik, Lora L.; Mauti, Adolfo; Harwell, Amy L.; Maricq, M. Matti
2017-02-01
The determination of non-methane organic gases (NMOG) emissions according to United States Environmental Protection Agency (EPA) regulations is currently a multi-step process requiring separate measurement of various emissions components by a number of independent on-line and off-line techniques. The Fourier transform infrared spectroscopy (FTIR) method described in this paper records all required components using a single instrument. It gives data consistent with the regulatory method, greatly simplifies the process, and provides second by second time resolution. Non-methane hydrocarbons (NMHCs) are measured by identifying a group of hydrocarbons, including oxygenated species, that serve as a surrogate for this class, the members of which are dynamically included if they are present in the exhaust above predetermined threshold levels. This yields an FTIR equivalent measure of NMHC that correlates within 5% to the regulatory flame ionization detection (FID) method. NMOG is then determined per regulatory calculation solely from FTIR recorded emissions of NMHC, ethanol, acetaldehyde, and formaldehyde, yielding emission rates that also correlate within 5% with the reference method. Examples are presented to show how the resulting time resolved data benefit aftertreatment development for light duty vehicles.
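The two-stage bookkeeping described above (an NMHC surrogate built from species above a threshold, then NMOG formed from NMHC plus the oxygenates) can be caricatured as follows. This is a plain sum for illustration only: the actual EPA calculation applies species-specific corrections that are omitted here, and the threshold and emission rates are made up.

```python
# Simplified sketch of the surrogate-sum idea: include hydrocarbon species
# present above a threshold, then add the oxygenates. The real regulatory
# calculation applies per-species corrections that are intentionally
# omitted; all numbers are hypothetical.

THRESHOLD = 0.05  # mg/mi; hypothetical inclusion threshold

def nmhc_surrogate(species_rates):
    """Sum the hydrocarbon surrogate species that exceed the threshold."""
    return sum(r for r in species_rates.values() if r >= THRESHOLD)

def nmog(species_rates, ethanol, acetaldehyde, formaldehyde):
    return nmhc_surrogate(species_rates) + ethanol + acetaldehyde + formaldehyde

rates = {"ethylene": 1.20, "toluene": 0.80, "propene": 0.03}  # mg/mi, made up
print(nmog(rates, ethanol=0.40, acetaldehyde=0.10, formaldehyde=0.05))
```

Note how propene falls below the threshold and drops out of the surrogate, mirroring the dynamic inclusion described in the abstract.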
NASA Astrophysics Data System (ADS)
Gillies, D. M.; Knudsen, D. J.; Donovan, E.; Jackel, B. J.; Gillies, R.; Spanswick, E.
2017-12-01
We compare field-aligned currents (FACs) measured by the Swarm constellation of satellites with the locations of red-line (630 nm) auroral arcs observed by all-sky imagers (ASIs) to derive a characteristic emission height for the optical emissions. In our 10 events we find that an altitude of 200 km applied to the ASI maps gives optimal agreement between the two observations. We also compare the new FAC method against the traditional triangulation method using pairs of ASIs, and against electron density profiles obtained from the Resolute Bay Incoherent Scatter Radar - Canada (RISR-C), both of which are consistent with a characteristic emission height of 200 km. We also present the spatial error associated with georeferencing REdline Geospace Observatory (REGO) and THEMIS ASIs and how it applies to altitude projections of the mapped images. Using this error, we validate the estimated altitude of the red-line aurora with two methods: triangulation between ASIs, and field-aligned current profiles derived from magnetometers on board the Swarm satellites.
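The triangulation idea can be sketched with simple plane geometry: two imagers on a common baseline see the same arc at different elevation angles, which fixes its height. Real analyses work with georeferenced all-sky pixels on a curved Earth; this flat-Earth toy only illustrates the principle, and the numbers are invented.

```python
# Toy two-station triangulation of an emission height (flat-Earth sketch,
# illustrative only; not the paper's georeferenced ASI procedure).
import math

def arc_height(baseline_km, elev1_deg, elev2_deg):
    """Height of a feature seen at elev1 from station 1 and at elev2 from
    station 2, where station 2 lies baseline_km closer along the sight line."""
    t1 = math.tan(math.radians(elev1_deg))
    t2 = math.tan(math.radians(elev2_deg))
    x = baseline_km * t2 / (t2 - t1)   # ground range from station 1
    return x * t1

# Consistency check: a feature 300 km from station 1 at 200 km altitude,
# with a 100 km baseline, should triangulate back to 200 km.
e1 = math.degrees(math.atan2(200, 300))
e2 = math.degrees(math.atan2(200, 200))
print(round(arc_height(100.0, e1, e2), 1))  # -> 200.0
```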
NASA Astrophysics Data System (ADS)
Feng, Chi; UCNb Collaboration
2011-10-01
It is theorized that contributions to the Fierz interference term from scalar interactions beyond the Standard Model could be detectable in the spectrum of neutron beta decay. The UCNb experiment, run at the Los Alamos Neutron Science Center, aims to measure the neutron beta-decay energy spectrum accurately enough to detect a nonzero interference term. The instrument consists of a cubic ``integrating sphere'' calorimeter with up to 4 photomultiplier tubes attached. The inside of the calorimeter is coated with white paint and a thin UV-scintillating layer made of deuterated polystyrene to contain the ultracold neutrons. A Monte Carlo simulation using the Geant4 toolkit has been developed to provide an accurate method of energy reconstruction. Offline calibration with the Kellogg Radiation Laboratory 140 keV electron gun and conversion electron sources will be used to validate the Monte Carlo simulation, giving confidence in the energy reconstruction methods and a better understanding of systematics in the experimental data.
Time dependent inflow-outflow boundary conditions for 2D acoustic systems
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Myers, Michael K.
1989-01-01
An analysis of the number and form of the required inflow-outflow boundary conditions for the full two-dimensional time-dependent nonlinear acoustic system in subsonic mean flow is performed. The explicit predictor-corrector method of MacCormack (1969) is used. The methodology is tested on both uniform and sheared mean flows with plane and nonplanar sources. Results show that the acoustic system requires three physical boundary conditions on the inflow boundary and one on the outflow boundary. The most natural choice for the inflow boundary conditions is judged to be a specification of the vorticity, the normal acoustic impedance, and a pressure gradient-density gradient relationship normal to the boundary. Specification of the acoustic pressure at the outflow boundary, together with these inflow boundary conditions, is found to give consistent, reliable results. A set of boundary conditions developed earlier, which was intended to be nonreflecting, is tested using the current method and is shown to yield unstable results for nonplanar acoustic waves.
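The forward-predictor/backward-corrector structure of the MacCormack scheme can be shown on a minimal stand-in problem: 1D linear advection with periodic boundaries (the paper treats the full 2D acoustic system; this toy only illustrates the scheme itself).

```python
# Minimal MacCormack predictor-corrector for u_t + c u_x = 0 with
# periodic boundaries -- a toy illustration of the explicit scheme
# referenced above, not the paper's 2D acoustic solver.
import numpy as np

def maccormack_step(u, c, dt, dx):
    lam = c * dt / dx
    # predictor: forward difference
    up = u - lam * (np.roll(u, -1) - u)
    # corrector: backward difference applied to the predicted field
    return 0.5 * (u + up - lam * (up - np.roll(up, 1)))

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # smooth Gaussian pulse
c, dx = 1.0, x[1] - x[0]
dt = 0.8 * dx / c                     # CFL number 0.8 (stable for CFL <= 1)
for _ in range(100):                  # advect the pulse a distance of 0.4
    u = maccormack_step(u, c, dt, dx)
print(round(x[int(np.argmax(u))], 2))
```

After 100 steps the pulse peak should sit near x = 0.7, with only mild second-order dispersion.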
Knowledge acquisition and learning process description in context of e-learning
NASA Astrophysics Data System (ADS)
Kiselev, B. G.; Yakutenko, V. A.; Yuriev, M. A.
2017-01-01
This paper investigates the problem of designing e-learning and MOOC systems. It describes instructional-design-based approaches to e-learning system design: IMS Learning Design, MISA and TELOS. To address this problem we present the Knowledge Field of Educational Environment with competence boundary conditions, an instructional engineering method for self-learning system design. It is based on the simplified TELOS approach and enables users to create their individual learning path by choosing prerequisite and target competencies. The paper provides the ontology model for the described instructional engineering method, real-life use cases, and the classification of the presented model. The ontology model consists of 13 classes and 15 properties; some are inherited from the Knowledge Field of Educational Environment, and some are new and describe competence boundary conditions and knowledge validation objects. The ontology model uses logical constraints and is described using the OWL 2 standard. To give TELOS users a better understanding of our approach, we list the mapping between TELOS and KFEEC.
Synthesis of substantially monodispersed colloids
NASA Technical Reports Server (NTRS)
Stoeva, Savka (Inventor); Klabunde, Kenneth J. (Inventor); Sorensen, Christopher (Inventor)
2003-01-01
A method of forming ligated nanoparticles of the formula Y(Z)_x, where Y is a nanoparticle selected from the group consisting of elemental metals having atomic numbers 21-34, 39-52, 57-83 and 89-102, all inclusive, the halides, oxides and sulfides of such metals, and the alkali metal and alkaline earth metal halides, and Z represents ligand moieties such as the alkyl thiols. In the method, a first colloidal dispersion is formed, made up of nanoparticles solvated in a molar excess of a first solvent (preferably a ketone such as acetone), a second solvent different from the first (preferably an organic aryl solvent such as toluene), and a quantity of ligand moieties; the first solvent is then removed under vacuum and the ligand moieties ligate to the nanoparticles to give a second colloidal dispersion of the ligated nanoparticles solvated in the second solvent. If substantially monodispersed nanoparticles are desired, the second dispersion is subjected to a digestive ripening process. Upon drying, the ligated nanoparticles may form a three-dimensional superlattice structure.
O'Connor, B.L.; Hondzo, Miki; Harvey, J.W.
2009-01-01
Traditionally, dissolved oxygen (DO) fluxes have been calculated using the thin-film theory with DO microstructure data in systems characterized by fine sediments and low velocities. However, recent experimental evidence of fluctuating DO concentrations near the sediment-water interface suggests that turbulence and coherent motions control the mass transfer, and the surface renewal theory gives a more mechanistic model for quantifying fluxes. Both models involve quantifying the mass transfer coefficient (k) and the relevant concentration difference (ΔC). This study compared several empirical models for quantifying k based on both thin-film and surface renewal theories, and presents a new method for quantifying ΔC (the dynamic approach) that is consistent with the observed DO concentration fluctuations near the interface. Data were used from a series of flume experiments that include both physical and kinetic uptake limitations of the flux. Results indicated that methods for quantifying k and ΔC using the surface renewal theory better estimated the DO flux across a range of fluid-flow conditions. © 2009 ASCE.
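The two flux models being compared can be sketched numerically. Both compute flux = k·ΔC; they differ in how k is parameterized: thin-film theory uses k = D/δ (diffusive sublayer thickness δ), while a Danckwerts-type surface renewal model uses k = sqrt(D·s) for a renewal rate s. The renewal rate, film thickness, and ΔC below are illustrative placeholders, not values from the flume experiments.

```python
# Hedged sketch of the two mass-transfer models discussed above.
# Thin film:       k = D / delta     (delta = diffusive sublayer thickness)
# Surface renewal: k = sqrt(D * s)   (Danckwerts form; s = renewal rate)
# Flux = k * dC in both cases. Numbers are illustrative only.
import math

D = 2.1e-9  # molecular diffusivity of DO in water, m^2/s (approx., ~20 C)

def flux_thin_film(delta_m, dC):
    return (D / delta_m) * dC

def flux_surface_renewal(s_per_s, dC):
    return math.sqrt(D * s_per_s) * dC

dC = 2.0e-3                                    # kg/m^3 (2 mg/L) difference
print("%.2e" % flux_thin_film(1.0e-3, dC))     # 1 mm diffusive sublayer
print("%.2e" % flux_surface_renewal(0.05, dC)) # renewal rate 0.05 1/s
```

For these placeholder values the surface renewal estimate is several times the thin-film one, showing why the choice of model matters under turbulent conditions.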
Synthesis and luminescence properties of KSrPO₄:Eu²⁺ phosphor for radiation dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palan, C. B., E-mail: chetanpalan27@yahoo.in; Bajaj, N. S.; Omanwar, S. K.
The KSrPO₄:Eu phosphor was synthesized via the solid state method. Structural and morphological characterization was carried out by XRD (X-ray diffraction) and SEM (scanning electron microscopy). Additionally, the photoluminescence (PL), thermoluminescence (TL) and optically stimulated luminescence (OSL) properties of the KSrPO₄:Eu powder were studied. The PL spectra show blue emission under near-UV excitation. The KSrPO₄:Eu phosphor not only shows OSL sensitivity (0.47 times) but also gives faster decay of the OSL signal than the Al₂O₃:C (BARC) phosphor. The TL glow curve consists of two shoulder peaks; the kinetic parameters, such as activation energy and frequency factor, were determined using the peak shape method, and the photoionization cross-section of the prepared phosphor was also calculated. Radiation dosimetry properties such as the minimum detectable dose (MDD), dose response and reusability are reported.
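The peak shape method mentioned above is, in its common textbook form (Chen's method), a recipe that extracts the activation energy from three temperatures on the glow peak: the peak maximum and the two half-maximum points. The sketch below implements the generic τ-variant of that recipe; the paper may use a different variant, and the example temperatures are invented.

```python
# Hedged sketch of Chen's peak shape method for the activation energy of
# a TL glow peak (generic textbook form, not necessarily the exact
# variant used in the paper). T1 and T2 are the low- and high-temperature
# half-maximum points and Tm the peak temperature, all in kelvin.
K_B = 8.617e-5  # Boltzmann constant, eV/K

def chen_activation_energy(t1, tm, t2):
    tau, delta, omega = tm - t1, t2 - tm, t2 - t1
    mu_g = delta / omega                     # geometry (symmetry) factor
    c_tau = 1.51 + 3.0 * (mu_g - 0.42)       # Chen's empirical coefficients
    b_tau = 1.58 + 4.2 * (mu_g - 0.42)
    return c_tau * K_B * tm**2 / tau - b_tau * 2.0 * K_B * tm

# Illustrative, invented peak (mu_g near 0.42, i.e. first-order-like):
print(round(chen_activation_energy(430.0, 460.0, 481.0), 3))
```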
Resolving the neutron lifetime puzzle
NASA Astrophysics Data System (ADS)
Mumm, Pieter
2018-05-01
Free electrons and protons are stable, but outside atomic nuclei, free neutrons decay into a proton, electron, and antineutrino through the weak interaction, with a lifetime of ∼880 s (see the figure). The most precise measurements have stated uncertainties below 1 s (0.1%), but different techniques, although internally consistent, disagree by 4 standard deviations given the quoted uncertainties. Resolving this “neutron lifetime puzzle” has spawned much experimental effort as well as exotic theoretical mechanisms, thus far without a clear explanation. On page 627 of this issue, Pattie et al. (1) present the most precise measurement of the neutron lifetime to date. A new method of measuring trapped neutrons in situ allows a more detailed exploration of one of the more pernicious systematic effects in neutron traps, neutron phase-space evolution (the changing orbits of neutrons in the trap), than do previous methods. The precision achieved, combined with a very different set of systematic uncertainties, gives hope that experiments such as this one can help resolve the current situation with the neutron lifetime.
Fractal characteristic in the wearing of cutting tool
NASA Astrophysics Data System (ADS)
Mei, Anhua; Wang, Jinghui
1995-11-01
This paper studies cutting tool wear with fractal geometry. The wear image of the flank was collected by a machine vision system consisting of a CCD camera and a personal computer. After processing by smoothing, binarization, and edge extraction, a clear boundary enclosing the worn area was obtained. The fractal dimension of the worn surface is calculated by the `slit island' and `profile' methods. The experiments and calculations lead to the conclusion that the worn surface is enclosed by an irregular boundary curve with a definite fractal dimension and self-similarity characteristics. Furthermore, the relation between the cutting velocity and the fractal dimension of the worn region is presented. This paper presents a series of methods for processing and analyzing the fractal information in flank wear, which can be applied to study the projective relation between the fractal structure and the wear state and to establish a fractal model of cutting tool wear.
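A common way to estimate a boundary's fractal dimension, in the same spirit as the slit-island and profile analyses above (though the paper's exact procedures differ), is box counting: cover the curve with grids of shrinking cell size and fit the log-log slope of cell count versus inverse size. A straight line serves as a sanity check, since its dimension should come out near 1.

```python
# Hedged box-counting sketch of fractal dimension estimation
# (illustrative; not the paper's slit-island or profile procedure).
import numpy as np

def box_count_dimension(points, sizes):
    counts = []
    for s in sizes:
        # set of grid cells of size s occupied by at least one point
        boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(boxes))
    # slope of log N(s) versus log(1/s) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

t = np.linspace(0.0, 1.0, 20000)
line = np.column_stack([t, 0.5 * t])   # smooth curve: D should be ~1
sizes = [0.1, 0.05, 0.025, 0.0125]
print(round(box_count_dimension(line, sizes), 1))
```

A genuinely fractal wear boundary would instead give a non-integer slope between 1 and 2.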
NASA Technical Reports Server (NTRS)
Han, Chia Yung; Wan, Liqun; Wee, William G.
1990-01-01
A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is a system built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps with a set of methods used at each step. In KIPSE1, a solution is represented as a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided at each node, so that the user can interactively select the method and data sets to test and subsequently examine the results. Users are also allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution found.
NASA Astrophysics Data System (ADS)
Buchalla, Rainer; Begley, Timothy H.
2006-01-01
Low-molecular-weight (low-MW) constituents of polyethylene terephthalate (PET), irradiated with 60Co gamma rays at 25 and 50 kGy, were analyzed by HPLC-MS with atmospheric-pressure chemical ionization (APCI). Consistent with earlier results, the concentrations of the major compounds present in the non-irradiated PET do not change perceptibly. However, we find a small but significant increase in terephthalic acid ethyl ester, from less than 1 mg/kg in the non-irradiated control to ca. 2 mg/kg after 50 kGy, which has not been described before. The finding is important because it gives an impression of the sensitivity of the analytical method. Additionally, it shows that even very radiation-resistant polymers can form measurable amounts of low-MW radiolysis products. The potential and limitations of LC-MS for the analysis of radiolysis products and unidentified migrants are briefly discussed in the context of the question: how can we validate our analytical methods for unknown analytes?
Wortmann, Franz J; Wortmann, Gabriele; Haake, Hans-Martin; Eisfeld, Wolf
2014-01-01
Through measurements of three different hair samples (virgin and treated) by the torsional pendulum method (22°C, 22% RH), a systematic decrease of the torsional storage modulus G' with increasing fiber diameter, i.e., polar moment of inertia, is observed. G' is therefore not a material constant for hair. This change of G' implies a systematic component of data variance, which significantly contributes to the limitations of the torsional method for cosmetic claim support. Fitting the data on the basis of a core/shell model for the cortex and cuticle makes it possible to separate this systematic component of variance and to greatly enhance the discriminative power of the test. The fitting procedure also provides values for the torsional storage moduli of the morphological components, confirming that the cuticle modulus is substantially higher than that of the cortex. The results give consistent insight into the changes imparted to the morphological components by the cosmetic treatments.
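The basic torsion-pendulum reduction behind such measurements can be sketched as follows: the oscillation period and the pendulum's moment of inertia give the fiber's torsion constant κ = 4π²I/T², and dividing κL by the polar second moment of area J = πd⁴/32 gives the apparent modulus. The fiber dimensions, inertia, and period below are hypothetical, and this simple reduction ignores the core/shell refinement that is the point of the paper.

```python
# Hedged sketch of the classical torsion-pendulum reduction
# (homogeneous-cylinder assumption; the paper's core/shell model refines
# this). All numbers are hypothetical illustrations.
import math

def torsional_modulus(period_s, inertia_kg_m2, length_m, diameter_m):
    kappa = 4.0 * math.pi**2 * inertia_kg_m2 / period_s**2  # torsion constant
    J = math.pi * diameter_m**4 / 32.0                      # polar 2nd moment
    return kappa * length_m / J                             # G = kappa * L / J

# Hypothetical hair fiber: 70 um diameter, 10 mm free length, a small
# inertia disc, and a ~2 s oscillation period.
G = torsional_modulus(2.0, 3.6e-8, 0.010, 70e-6)
print("%.2f GPa" % (G / 1e9))
```

Note the d⁴ dependence in J: small diameter errors dominate the uncertainty, which is one reason diameter-dependent G' values are so consequential for claim support.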
Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming
2016-07-08
The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency-based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is that it allows databases of reduced complexity to be used for rapid homology extension. The server also makes it possible to use transmembrane protein (TMP) reference databases, allowing even faster homology extension for this important category of proteins. Aside from an MSA, the server also outputs topological predictions for TMPs using the HMMTOP algorithm. Previous benchmarking has shown that this approach outperforms the most accurate alignment methods, such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
1998-06-29
The change in layer spacing (13.5 Å) is consistent with loss of some interstitial water during intercalation of the disulfide polymer of DMcT. Elemental analysis gives a composition for the intercalation material of [(polyDMcT)0.25·V2O5·4H2O].
Forward Field Computation with OpenMEEG
Gramfort, Alexandre; Papadopoulo, Théodore; Olivi, Emmanuel; Clerc, Maureen
2011-01-01
To recover the sources giving rise to electro- and magnetoencephalography in individual measurements, realistic physiological modeling is required, and accurate numerical solutions must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime for head models with piecewise constant conductivity. The core of OpenMEEG is the symmetric Boundary Element Method, which is based on an extended Green representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: electroencephalography (EEG), magnetoencephalography (MEG), electrical impedance tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip has been operational since release 2.0. PMID:21437231
Fingerprinting Communication and Computation on HPC Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peisert, Sean
2010-06-02
How do we identify what is actually running on high-performance computing systems? Names of binaries, dynamic libraries loaded, or other elements in a submission to a batch queue can give clues, but binary names can be changed, and libraries provide limited insight into and resolution on the code being run. In this paper, we present a method for "fingerprinting" code running on HPC machines using elements of communication and computation. We then discuss how that fingerprint can be used to determine whether the code is consistent with certain other types of codes, with what a user usually runs, or with what the user requested an allocation to do. In some cases, our techniques enable us to fingerprint HPC codes using runtime MPI data with a high degree of accuracy.
Integrated and differential accuracy in resummed cross sections
Bertolini, Daniele; Solon, Mikhail P.; Walsh, Jonathan R.
2017-03-30
Standard QCD resummation techniques provide precise predictions for the spectrum and the cumulant of a given observable. The integrated spectrum and the cumulant differ by higher-order terms which, however, can be numerically significant. In this paper we propose a method, which we call the σ-improved scheme, to resolve this issue. It consists of two steps: (i) include higher-order terms in the spectrum to improve the agreement with the cumulant central value, and (ii) employ profile scales that encode correlations between different points to give robust uncertainty estimates for the integrated spectrum. We provide a generic algorithm for determining such profile scales, and show its application to the thrust distribution in e+e- collisions at NLL'+NLO and NNLL'+NNLO.
Sagalowicz, Laurent; Acquistapace, Simone; Watzke, Heribert J; Michel, Martin
2007-11-20
We developed a method that enables differentiation between liquid crystalline-phase particles corresponding to different space groups. It consists of controlled tilting of the specimen to observe different orientations of the same particle using cryogenic transmission electron microscopy. This leads to the visualization of lattice planes (or reflections) that are present for a given structure and absent for the other one(s), and that give information on liquid crystalline structures and their space groups. In particular, we show that we can unambiguously distinguish among particles having the inverted micellar cubic (space group Fd-3m, no. 227), the inverted bicontinuous gyroid (space group Ia-3d, no. 230), the inverted bicontinuous diamond (space group Pn-3m, no. 224), and the inverted bicontinuous primitive cubic structure (space group Im-3m, no. 229).
Numerical simulation of electron scattering by nanotube junctions
NASA Astrophysics Data System (ADS)
Brüning, J.; Grikurov, V. E.
2008-03-01
We demonstrate the possibility of computing the intensity of electronic transport through various junctions of three-dimensional metallic nanotubes. In particular, we observe that a magnetic field can be used to control the switching of electrons in Y-type junctions. Keeping in mind the asymptotic modeling of reliable nanostructures by quantum graphs, we conjecture that the scattering matrix of the graph should be the same as the scattering matrix of its nanosize prototype. The numerical computation of the latter gives a method for determining the "gluing" conditions at a graph. Exploring this conjecture, we show that the Kirchhoff conditions (which are commonly used on graphs) cannot be applied to model reliable junctions. This work is a natural extension of the paper [1], but it is written in a self-contained manner.
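For context on the Kirchhoff condition the paper argues against: it is a standard result of quantum graph theory that for a star vertex joining n half-lines, Kirchhoff matching yields the energy-independent scattering matrix S = (2/n)J − I, where J is the all-ones matrix. For n = 3 (a Y junction) this fixes the reflection amplitude at −1/3 and each transmission amplitude at 2/3, regardless of energy, which is the kind of rigidity that a realistic junction need not obey.

```python
# Kirchhoff ("free") vertex scattering matrix on a star graph with n
# leads: S = (2/n) J - I, a standard quantum-graph result shown here for
# illustration (the paper's point is that real junctions deviate from it).
import numpy as np

def kirchhoff_vertex_S(n):
    return (2.0 / n) * np.ones((n, n)) - np.eye(n)

S = kirchhoff_vertex_S(3)             # Y-type junction
print(S[0, 0], S[0, 1])               # reflection -1/3, transmission 2/3
print(np.allclose(S @ S.T, np.eye(3)))  # S is unitary (real orthogonal)
```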
Czech results at criticality dosimetry intercomparison 2002.
Frantisek, Spurný; Jaroslav, Trousil
2004-01-01
Two criticality dosimetry systems were tested by Czech participants during the intercomparison held in Valduc, France, in June 2002. The first consisted of thermoluminescent detectors (TLDs) (Al-P glasses) and Si-diodes as passive neutron dosemeters. Second, it was studied to what extent the individual dosemeters used in the Czech routine personal dosimetry service can give a reliable estimate of criticality accident exposure. It was found that the first system furnishes quite reliable estimates of accidental doses. For the routine individual dosimetry system, no important problems were encountered in the case of photon dosemeters (TLDs, film badge). For etched track detectors in contact with the 232Th or 235U-Al alloy, the track-density saturation for the spark counting method limits the upper dose to approximately 1 Gy for neutrons with energy >1 MeV.
Longitudinal analysis of bioaccumulative contaminants in freshwater fishes
Sun, Jielun; Kim, Y.; Schmitt, C.J.
2003-01-01
The National Contaminant Biomonitoring Program (NCBP) was initiated in 1967 as a component of the National Pesticide Monitoring program. It consists of periodic collection of freshwater fish and other samples and the analysis of the concentrations of persistent environmental contaminants in these samples. For the analysis, the common approach has been to apply the mixed two-way ANOVA model to combined data. A main disadvantage of this method is that it cannot give a detailed temporal trend of the concentrations since the data are grouped. In this paper, we present an alternative approach that performs a longitudinal analysis of the information using random effects models. In the new approach, no grouping is needed and the data are treated as samples from continuous stochastic processes, which seems more appropriate than ANOVA for the problem.
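The contrast between grouped ANOVA and a continuous-time random-effects analysis can be sketched with synthetic data. Everything below (station count, trend, variance components) is hypothetical, not NCBP values; within-station demeaning is used as a minimal stand-in for a full random-effects fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic log-concentrations: 20 stations, 15 collection times each
stations = np.repeat(np.arange(20), 15)
t = np.tile(np.linspace(1969, 1999, 15), 20)
alpha = rng.normal(0.0, 0.5, 20)[stations]            # station random intercepts
y = 2.0 - 0.03 * (t - 1969) + alpha + rng.normal(0.0, 0.1, t.size)

# within-station demeaning removes the random intercepts, leaving a
# clean estimate of the continuous time trend (impossible with grouped data)
sm = np.array([y[stations == s].mean() for s in range(20)])[stations]
tm = np.array([t[stations == s].mean() for s in range(20)])[stations]
yd, td = y - sm, t - tm
slope = (td * yd).sum() / (td * td).sum()
print(round(slope, 3))   # close to the true trend of -0.03 per year
```

The grouped ANOVA approach would instead return one mean per collection period, hiding the continuous trend that this estimate recovers directly.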
Forensic analysis of explosions: Inverse calculation of the charge mass.
van der Voort, M M; van Wees, R M M; Brouwer, S D; van der Jagt-Deutekom, M J; Verreault, J
2015-07-01
Forensic analysis of explosions consists of determining the point of origin, the explosive substance involved, and the charge mass. Within the EU FP7 project Hyperion, TNO developed the Inverse Explosion Analysis (TNO-IEA) tool to estimate the charge mass and point of origin based on observed damage around an explosion. In this paper, inverse models are presented based on two frequently occurring and reliable sources of information: window breakage and building damage. The models have been verified by applying them to the Enschede fireworks disaster and the Khobar Towers attack. Furthermore, a statistical method has been developed to combine the various types of data in order to determine an overall charge-mass distribution. In relatively open environments, as in the Enschede fireworks disaster, the models generate realistic charge masses that are consistent with values found in the forensic literature. The spread predicted by the IEA tool is, however, larger than that presented in the literature for these specific cases. This is realistic given the large inherent uncertainties in a forensic analysis. The IEA models give a reasonable first-order estimate of the charge mass in a densely built urban environment, such as the Khobar Towers attack. Because blast shielding effects are not taken into account in the IEA tool, this is usually an underprediction. To obtain more accurate predictions, the application of Computational Fluid Dynamics (CFD) simulations is advised. The TNO IEA tool gives unique possibilities to inversely calculate the TNT-equivalent charge mass based on a large variety of explosion effects and observations. The IEA tool enables forensic analysts, including those who are not experts on explosion effects, to perform an analysis with greatly reduced effort. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
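As a toy illustration of the inverse idea (not the TNO-IEA models themselves), Hopkinson-Cranz cube-root scaling lets one back out a TNT-equivalent mass from the radius at which window breakage stops. The scaled distance of 40 m/kg^(1/3) below is an assumed, order-of-magnitude breakage threshold, not a value from the paper.

```python
def charge_mass_from_window_breakage(r_break_m, z_break=40.0):
    """Invert Hopkinson-Cranz scaling Z = R / W**(1/3).

    r_break_m: largest distance (m) at which windows broke
    z_break  : assumed scaled distance [m/kg^(1/3)] for window breakage
    Returns a TNT-equivalent charge mass in kg."""
    return (r_break_m / z_break) ** 3

print(charge_mass_from_window_breakage(600.0))   # ~3.4e3 kg TNT-equivalent
```

A real analysis would propagate the large uncertainty in the breakage threshold, which is exactly why the paper reports a charge-mass distribution rather than a point estimate.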
NASA Astrophysics Data System (ADS)
Merabia, Samy; Termentzidis, Konstantinos
2012-09-01
In this article, we compare the results of nonequilibrium (NEMD) and equilibrium (EMD) molecular dynamics methods to compute the thermal conductance at the interface between solids. We propose to probe the thermal conductance using equilibrium simulations measuring the decay of the thermally induced energy fluctuations of each solid. We also show that NEMD and EMD generally give inconsistent results for the thermal conductance: Green-Kubo simulations probe the Landauer conductance between two solids, which assumes phonons on both sides of the interface to be at equilibrium. On the other hand, we show that NEMD gives access to the out-of-equilibrium interfacial conductance consistent with the interfacial flux describing phonon transport in each solid. The difference may be large and typically reaches a factor of 5 for interfaces between usual semiconductors. We analyze finite-size effects for the two determinations of the interfacial thermal conductance, and show that the equilibrium simulations suffer from severe size effects as compared to NEMD. We also compare the predictions of the two methods, EMD and NEMD, regarding the interfacial conductance of a series of mass-mismatched Lennard-Jones solids. We show that the Kapitza conductance obtained with EMD can be well described using the classical diffuse mismatch model (DMM). On the other hand, NEMD simulation results are consistent with an out-of-equilibrium generalization of the acoustic mismatch model (AMM). These considerations are important in rationalizing previous results obtained using molecular dynamics, and help in pinpointing the physical scattering mechanisms taking place at atomically perfect interfaces between solids, which is a prerequisite to understanding interfacial heat transfer across real interfaces.
Yen, Wen-Jiuan; Teng, Ching-Hwa; Huang, Xuan-Yi; Ma, Wei-Fen; Lee, Sheuan; Tseng, Hsiu-Chih
2010-01-01
The aim of this study is to generate a theory of the meaning of care-giving for parents of mentally ill children in Taiwan. Studies indicate that the meaning of care-giving plays an important role in the psychological adjustment of care-givers to care-giving. With a positive meaning of care-giving, care-givers can accept their roles and adapt to them more readily. The research employs the qualitative method of grounded theory; the inquiry is based on symbolic interactionism. Twenty parental care-givers of children with schizophrenia were recruited at a private hospital in central Taiwan. Semi-structured interviews were conducted. A comparative method was used to analyse the text and field notes. Responsibility (zeren) emerges as the core category or concept. Responsibility expresses broadly the behavioural principles that are culturally prescribed and centred on familial ethics and values. Related concepts and principles that influence caregiver actions and affections include a return of karma, challenges from local gods and fate. By maintaining their culturally prescribed interpretations of care-giving, parents hope to give care indefinitely without complaints. The findings clearly suggest that the meaning of care-giving is determined through a process of internal debate that is shaped by culturally specific concepts. The paper attempts to explain some of these culturally specific determinants and explanations of care-giving behaviour. The theory contributes knowledge about the meaning of care-giving for parents of mentally ill children in Taiwan. It should be a useful reference for mental health professionals who provide counselling services to ethnically Taiwanese care-givers.
A Method to Determine All Non-Isomorphic Groups of Order 16
ERIC Educational Resources Information Center
Valcan, Dumitru
2012-01-01
Many students and teachers ask themselves: given a natural number n, how many non-isomorphic groups of order n exist? In general, the answer is not yet known, but for certain values of n this question has been answered. The present work gives a method to determine all non-isomorphic groups of order 16 and gives descriptions of all…
Conformal higher spin theory and twistor space actions
NASA Astrophysics Data System (ADS)
Hähnel, Philipp; McLoughlin, Tristan
2017-12-01
We consider the twistor description of conformal higher spin theories and give twistor space actions for the self-dual sector of theories with spin greater than two that produce the correct flat space-time spectrum. We identify a ghost-free subsector, analogous to the embedding of Einstein gravity with cosmological constant in Weyl gravity, which generates the unique spin-s three-point anti-MHV amplitude consistent with Poincaré invariance and helicity constraints. By including interactions between the infinite tower of higher-spin fields we give a geometric interpretation to the twistor equations of motion as the integrability condition for a holomorphic structure on an infinite jet bundle. Finally, we conjecture anti-self-dual interaction terms which give an implicit definition of a twistor action for the full conformal higher spin theory.
Fluid check valve has fail-safe feature
NASA Technical Reports Server (NTRS)
Gaul, L. C.
1965-01-01
Check valve ensures unidirectional fluid flow and, in case of failure, vents the downstream fluid to the atmosphere and gives a positive indication of malfunction. This dual valve consists of a master check valve and a fail-safe valve.
Liquid-hydrogen/nuclear-radiation resistant seals
NASA Technical Reports Server (NTRS)
Van Auken, R.
1971-01-01
Seal employs aromatic heterocyclic polymer, polyquinoxaline, and features resin starved laminate consisting of alternate layers of woven glass fabric and polymer film. Design gives gasket a mechanical spring characteristic, eliminating cold flow and resulting in elastic recovery when gasket is unloaded.
77 FR 28467 - Identifying and Reducing Regulatory Burdens
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-14
... online wherever practicable. Sec. 3. Setting Priorities. In implementing and improving their... regulatory priorities, to promote public participation in retrospective review, to modernize our regulatory..., agencies shall give priority, consistent with law, to those initiatives that will produce significant...
NASA Astrophysics Data System (ADS)
Mouadili, A.; El Boudouti, E. H.; Soltani, A.; Talbi, A.; Djafari-Rouhani, B.; Akjouj, A.; Haddadi, K.
2014-12-01
We give an analytical and experimental demonstration of a classical analogue of electromagnetically induced absorption (EIA) in a simple photonic device consisting of two stubs of lengths d1 and d2 grafted at the same site along a waveguide. By detuning the lengths of the two stubs (i.e. δ = d2 - d1) we show that: (i) the amplitudes of the electromagnetic waves in the two stubs can be written following the two-resonator model, where each stub plays the role of a radiative resonator with low Q factor. The destructive interference between the waves in the two stubs may give rise to a sharp resonance peak with high Q factor in the transmission as well as in the absorption. (ii) The transmission coefficient around the resonance induced by the stubs can be written in a Fano-like form. In particular, we give an explicit expression for the position, width and Fano parameter of the resonances as a function of δ. (iii) Taking into account the loss in the waveguides, we show that at the transmission resonance the transmission (reflection) increases (decreases) as a function of δ, whereas the absorption goes through a maximum of around 0.5 at a threshold value δth that depends on the attenuation in the system, and then falls to zero. (iv) We give a comparison between the phase of the determinant of the scattering matrix, the so-called Friedel phase, and the phase of the transmission amplitude. (v) The effect of the boundary conditions at the end of the resonators on the EIA resonance is also discussed. The analytical results are obtained by means of the Green's function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime. These results should have important consequences for designing integrated devices such as narrow-frequency optical or microwave filters and high-speed switches.
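The sharp detuning-induced resonance can be sketched for the lossless case with a standard shunt-admittance model of two open-ended stubs grafted at one site. This is a simplification of the paper's Green's-function treatment; the normalization and boundary conditions are assumptions.

```python
import numpy as np

def transmission(f, d1, d2, c=1.0):
    """Lossless transmission past two open-ended stubs grafted at one point.
    Each stub contributes a normalized shunt admittance j*tan(k*d)."""
    k = 2 * np.pi * f / c
    Y = 1j * (np.tan(k * d1) + np.tan(k * d2))
    return abs(2.0 / (2.0 + Y)) ** 2

# identical stubs (delta = 0): total reflection at the stub resonance k*d = pi/2
print(transmission(0.25, 1.0, 1.0) < 1e-6)        # True
# detuned stubs: the two stub admittances cancel, giving a sharp
# transparency peak at the same frequency
print(transmission(0.25, 0.95, 1.05) > 0.999)     # True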
[Recognition of occupational cancers: review of existing methods and perspectives].
Vandentorren, Stéphanie; Salmi, L Rachid; Brochard, Patrick
2005-09-01
Occupational risk factors represent a significant part of cancer causes and are involved in all types of cancer. Nonetheless, the frequency of these cancers is largely under-estimated. Parallel to the epidemiological (collective) approach, the concept of occupational cancer is often linked, at the individual level, to the compensation of occupational diseases. To give rise to financial compensation, the occupational origin of the exposure has to be established for a given cancer. Whatever the method used to explore an occupational cause, the approach is one of imputation. The aim of this work is to synthesize and describe the main principles of recognition of occupational cancers, to discuss the limits of available methods and to consider the research needed to improve these methods. In France, the recognition of a cancer's occupational origin relies on tables of occupational diseases that are based on a presumption of causality. These tables consist of medical, technical and administrative conditions that are necessary and sufficient for the recognition of an occupational disease and its financial compensation. Whenever the presumption of causality does not apply, imputation is based on case analyses run by experts within regional committees for the recognition of occupational diseases, which lack reproducibility. They do not allow statistical quantification and do not always take into account the weight of associated factors. Nonetheless, the reliability and validity of the expertise could be reinforced by the use of formal consensus techniques. This process could ideally lead to the generation of decision-making algorithms that could guide the user towards the decision of imputing or not the cancer to an occupational exposure, and would also be suited to the construction of new tables. The imputation process would be better represented by statistical methods based on the use of Bayes' theorem.
The application of these methods to occupational cancers is promising but remains limited by the lack of epidemiological data. Acquiring these data and disseminating these methods should become research and development priorities in the cancer field.
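A minimal Bayesian-style calculation often used in this setting is the probability of causation for an exposed case, derived from the relative risk RR. This is an illustration of the statistical direction the abstract points to, not the committees' actual procedure.

```python
def probability_of_causation(rr):
    """P(cancer attributable to the exposure | exposed case) = (RR - 1) / RR,
    assuming the population relative risk applies to the individual
    (a strong assumption that real expert assessment must qualify)."""
    return (rr - 1.0) / rr

print(probability_of_causation(2.0))   # -> 0.5 for a doubling of risk
```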
Review of Methods for Buildings Energy Performance Modelling
NASA Astrophysics Data System (ADS)
Krstić, Hrvoje; Teni, Mihaela
2017-10-01
Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings. The paper also gives a comprehensive review of the advantages and disadvantages of available methods, as well as the input parameters used for modelling building energy performance. The European EPBD Directive mandates an energy certification procedure, which provides insight into building energy performance via existing energy-certificate databases. Some of the methods for building energy performance modelling mentioned in this paper were developed from data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper; the majority of buildings in the database have already undergone some form of partial retrofitting (replacement of windows or installation of thermal insulation) but still have poor energy performance. The case study presented in this paper utilizes an energy-certificate database of residential units in Croatia (over 400 buildings) to determine, using statistical dependence tests, the relationship between building energy performance and the variables in the database. Building energy performance in the database is expressed as an energy-efficiency class (from A+ to G) based on the specific annual energy need for heating under reference climatic data [kWh/(m2a)]. The independent variables in the database are the surface areas and volume of the conditioned part of the building, the building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. The results give insight into the capabilities of the methods used for building energy performance modelling, together with an analysis of the dependence between building energy performance, as the dependent variable, and the independent variables from the database. The presented results could be used for the development of a new predictive model of building energy performance.
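The A+ to G rating maps the specific annual heating need onto a class. The band limits below are illustrative values patterned on such schemes and should be checked against the actual Croatian regulation before use.

```python
def energy_class(q_hnd):
    """Energy-efficiency class from specific annual heating need [kWh/(m2*a)].
    Band limits are illustrative assumptions, not the official values."""
    bands = [(15, 'A+'), (25, 'A'), (50, 'B'), (100, 'C'),
             (150, 'D'), (200, 'E'), (250, 'F')]
    for limit, label in bands:
        if q_hnd <= limit:
            return label
    return 'G'

print(energy_class(120))   # -> 'D'
```

A model of the kind the paper envisages would predict q_hnd from the independent variables (shape factor, age, volume, ...) and then apply such a mapping.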
Acoustic emissions (AE) monitoring of large-scale composite bridge components
NASA Astrophysics Data System (ADS)
Velazquez, E.; Klein, D. J.; Robinson, M. J.; Kosmatka, J. B.
2008-03-01
Acoustic emission (AE) monitoring has been successfully used with composite structures both to locate damage and to give a measure of damage accumulation. The current experimental study uses AE to monitor large-scale composite modular bridge components. The components consist of a carbon/epoxy beam structure as well as a composite-to-metallic bonded/bolted joint. The bonded joints consist of double-lap aluminum splice plates bonded and bolted to carbon/epoxy laminates representing the tension rail of a beam. The AE system is used to monitor the bridge component during failure loading to assess the failure progression, using time of arrival to give insight into the origins of the failures. Also, a feature of the AE data called Cumulative Acoustic Emission (CAE) counts is used to give an estimate of the severity and rate of damage accumulation. For the bolted/bonded joints, the AE data are used to interpret the source and location of the damage that induced failure in the joint. These results are used to investigate the use of bolts in conjunction with the bonded joint. A description of each of the components (beam and joint) is given with AE results. A summary of lessons learned for AE testing of large composite structures, as well as insight into failure progression and location, is presented.
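For a pair of sensors on a beam, the time-of-arrival idea reduces to a one-line linear localization. The wave speed below is an assumed value for a carbon/epoxy laminate, not one measured in the study.

```python
def linear_toa_location(dt, v, L):
    """1-D AE source location between two sensors a distance L (m) apart.

    dt: arrival-time difference t2 - t1 (s); positive if sensor 1 hears first
    v : assumed extensional wave speed in the laminate (m/s)
    Returns the source distance from sensor 1 (m)."""
    return 0.5 * (L - v * dt)

print(linear_toa_location(0.0, 5000.0, 1.0))   # 0.5: equal arrivals -> midpoint
```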
NASA Astrophysics Data System (ADS)
Collins, John; Rogers, Ted
2015-04-01
There is considerable controversy about the size and importance of nonperturbative contributions to the evolution of transverse-momentum-dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that, when taken to lower Q, is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that nonperturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and nonperturbative. We make a motivated proposal for the parametrization of the nonperturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical nonperturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
A new classification scheme of plastic wastes based upon recycling labels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Özkan, Kemal, E-mail: kozkan@ogu.edu.tr; Ergin, Semih, E-mail: sergin@ogu.edu.tr; Işık, Şahin, E-mail: sahini@ogu.edu.tr
Highlights: • PET, HDPE or PP types of plastics are considered. • An automated classification of plastic bottles based on feature extraction and classification methods is performed. • The decision mechanism consists of PCA, Kernel PCA, FLDA, SVD and Laplacian Eigenmaps methods. • SVM is selected to achieve the classification task and a majority voting technique is used. - Abstract: Since recycling of materials is widely assumed to be environmentally and economically beneficial, reliable sorting and processing of waste packaging materials such as plastics is very important for recycling with high efficiency. An automated system that can quickly categorize these materials is certainly needed for obtaining maximum classification while maintaining high throughput. In this paper, first of all, photographs of the plastic bottles were taken and several preprocessing steps were carried out. The first preprocessing step is to extract the plastic area of a bottle from the background. Then, morphological image operations are implemented: edge detection, noise removal, hole removal, image enhancement, and image segmentation. These morphological operations can generally be defined in terms of combinations of erosion and dilation. The effect of bottle color as well as label is eliminated using these operations. Secondly, the pixel-wise intensity values of the plastic bottle images are used together with the most popular subspace and statistical feature extraction methods to construct the feature vectors in this study. Only three types of plastic are considered, because of their higher prevalence relative to other plastic types worldwide.
The decision mechanism consists of five different feature extraction methods, namely Principal Component Analysis (PCA), Kernel PCA (KPCA), Fisher's Linear Discriminant Analysis (FLDA), Singular Value Decomposition (SVD) and Laplacian Eigenmaps (LEMAP), and uses a simple experimental setup with a camera and homogeneous backlighting. Because it gives a global solution to the classification problem, a Support Vector Machine (SVM) is selected to achieve the classification task, and a majority voting technique is used as the decision mechanism. This technique equally weights each classification result and assigns the given plastic object to the class that most classification results agree on. The proposed classification scheme provides a high accuracy rate, and it is able to run in real-time applications. It can automatically classify plastic bottle types with approximately 90% recognition accuracy. Besides this, the proposed methodology yields approximately 96% classification rate for the separation of PET and non-PET plastic types. It also gives 92% accuracy for the categorization of non-PET plastic types into HDPE or PP.
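The equally weighted majority vote over the five per-feature-set SVM outputs can be sketched in a few lines. This is a generic implementation of the voting rule described above, not the authors' code.

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label most of the per-feature-set classifiers agree on.
    Each entry is one classifier's prediction, e.g. from the PCA+SVM,
    KPCA+SVM, FLDA+SVM, SVD+SVM and LEMAP+SVM pipelines."""
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(['PET', 'PET', 'HDPE', 'PP', 'PET']))   # -> PET
```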
Grundlingh, A A; Grossman, E S; Shrivastava, S; Witcomb, M J
2013-10-01
This study compared digital and visual tooth colour assessment methods in a sample of 99 teeth consisting of incisors, canines and pre-molars. The teeth were equally divided between Control, Ozicure Oxygen Activator bleach and Opalescence Quick bleach and subjected to three treatments. Colour readings were recorded at nine intervals by two assessment methods, VITA Easyshade and VITAPAN 3D MASTER TOOTH GUIDE, giving a total of 1782 colour readings. Descriptive and statistical analysis was undertaken using a GLM test for Analysis of Variance for a Fractional Design set at a significance of P < 0.05. Atomic force microscopy was used to examine treated enamel surfaces and establish surface roughness. Visual tooth colour assessment showed significance for the independent variables of treatment, number of treatments, tooth type and the combination of tooth type and treatment. Digital colour assessment indicated treatment and tooth type to be of significance in tooth colour change. Poor agreement was found between visual and digital colour assessment methods for Control and Ozicure Oxygen Activator treatments. Surface roughness values increased two-fold for Opalescence Quick specimens over the two other treatments, implying that increased light scattering improved digital colour reading. Both digital and visual colour matching methods should be used in tooth bleaching studies to complement each other and to compensate for deficiencies.
Individual-specific antibody identification methods
Francoeur, Ann -Michele
1989-11-14
An identification method, applicable to the identification of animals or inanimate objects, is described. The method takes advantage of a hitherto unknown set of individual-specific (IS) antibodies, part of the unique antibody repertoire present in animals, by reacting an effective amount of IS antibodies with a particular panel, or n-dimensional array (where n is typically one or two), consisting of an effective amount of many different antigens (typically greater than one thousand), to give antibody-antigen complexes. The profile or pattern formed by the antigen-antibody complexes, termed an antibody fingerprint, when revealed by an effective amount of an appropriate detector molecule, is uniquely representative of a particular individual. The method can similarly be used to distinguish genetically or otherwise similar individuals, or their body parts containing IS antibodies. Identification of inanimate objects, particularly security documents, is similarly effected by associating with the documents an effective amount of a particular individual's IS antibodies, or conversely a particular panel of antigens, and forming antibody-antigen complexes with a particular panel of antigens, or a particular individual's IS antibodies, respectively. One embodiment of the instant identification method, termed the blocked fingerprint assay, has applications in the areas of allergy testing, autoimmune diagnostics and therapeutics, and the detection of environmental antigens such as pathogens, chemicals, and toxins.
NASA Astrophysics Data System (ADS)
Chao, Winston C.; Yang, Bo; Fu, Xiouhua
2009-11-01
The popular method of presenting wavenumber-frequency power spectrum diagrams for studying tropical large-scale waves in the literature is shown to give an incomplete presentation of these waves. The so-called “convectively coupled Kelvin (mixed Rossby-gravity) waves” are presented as existing only in the symmetric (anti-symmetric) component of the diagrams. This is obviously not consistent with the published composite/regression studies of “convectively coupled Kelvin waves,” which illustrate the asymmetric nature of these waves. The cause of this inconsistency is revealed in this note and a revised method of presenting the power spectrum diagrams is proposed. When this revised method is used, “convectively coupled Kelvin waves” do show anti-symmetric components, and “convectively coupled mixed Rossby-gravity waves (also known as Yanai waves)” do show a hint of symmetric components. These results bolster a published proposal that these waves should be called “chimeric Kelvin waves,” “chimeric mixed Rossby-gravity waves,” etc. This revised method of presenting power spectrum diagrams offers an additional means of comparing the GCM output with observations by calling attention to the capability of GCMs to correctly simulate the asymmetric characteristics of equatorial waves.
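The symmetric/antisymmetric split underlying such spectrum diagrams is a simple reflection about the equator. The sketch below, with a hypothetical 3-latitude field, shows that the two components always reconstruct the full field, which is why examining only one component gives an incomplete picture of a wave.

```python
import numpy as np

def sym_antisym(field, lat_axis=0):
    """Split a field on an equator-centred latitude grid into its
    equatorially symmetric and antisymmetric components."""
    flipped = np.flip(field, axis=lat_axis)
    return 0.5 * (field + flipped), 0.5 * (field - flipped)

f = np.array([[1., 2.], [3., 4.], [5., 6.]])   # 3 latitudes x 2 longitudes
s, a = sym_antisym(f)
print(np.allclose(s + a, f))   # True: the components sum back to the field
```

Power spectra are then usually computed separately for s and a; the note above argues that a wave with power in both components should not be labelled by only one of them.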
Macarthur, Roy; Feinberg, Max; Bertheau, Yves
2010-01-01
A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
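The 0.45% and 1.8% decision limits quoted above follow directly from the 50-200% uncertainty range around the 0.9% labeling threshold; a minimal arithmetic check:

```python
def decision_bounds(threshold=0.9, lo=0.5, hi=2.0):
    """Bounds for demonstrating (non)compliance when a measured value x
    may plausibly lie anywhere in [lo*x, hi*x] (the 50-200% range)."""
    comply_below = threshold / hi      # worst case hi*x must stay below threshold
    noncomply_above = threshold / lo   # best case lo*x must stay above threshold
    return comply_below, noncomply_above

print(decision_bounds())   # -> (0.45, 1.8), matching the reported limits
```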
Interaction between colloidal particles on an oil-water interface in dilute and dense phases.
Parolini, Lucia; Law, Adam D; Maestro, Armando; Buzza, D Martin A; Cicuta, Pietro
2015-05-20
The interaction between micron-sized charged colloidal particles at polar/non-polar liquid interfaces remains surprisingly poorly understood for a relatively simple physical chemistry system. By measuring the pair correlation function g(r) for different densities of polystyrene particles at the decane-water interface, and using a powerful predictor-corrector inversion scheme, effective pair-interaction potentials can be obtained up to fairly high densities; these reproduce the experimental g(r) in forward simulations and so are self-consistent. While at low densities these potentials agree with published dipole-dipole repulsion, measured by various methods, an apparent density dependence and long-range attraction are obtained when the density is higher. This regime is thus explored in an alternative fashion, by measuring the local mobility of colloids confined by their neighbors. This method of extracting interaction potentials gives results that are consistent with dipolar repulsion throughout the concentration range, with the same magnitude as in the dilute limit. We are unable to rule out the density dependence based on the experimental accuracy of our data, but we show that incomplete equilibration of the experimental system, which would be possible despite long waiting times due to the very strong repulsions, is a possible cause of artefacts in the inverted potentials. We conclude that, to within the precision of these measurements, the dilute pair potential remains valid at high density in this system.
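In the dilute limit the inversion reduces to the Boltzmann relation u(r) = -kT ln g(r); the full predictor-corrector scheme is what the paper needs at higher densities, so this one-liner is only the low-density baseline.

```python
import math

def pair_potential_dilute(g_r, kT=1.0):
    """Dilute-limit inversion u(r) = -kT * ln g(r); valid only when
    many-body correlations are negligible (low surface density)."""
    return -kT * math.log(g_r)

print(round(pair_potential_dilute(0.5), 3))   # 0.693 kT: depleted g(r) -> repulsion
```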
NASA Astrophysics Data System (ADS)
Beddows, D. C. S.; Harrison, Roy M.
2018-06-01
A case study is provided of the development and application of methods to identify and quantify specific sources of emissions from within a large, complex industrial site. Methods include directional analysis of concentrations, chemical source tracers and correlations with gaseous emissions. Extensive measurements of PM10, PM2.5, trace gases, particulate elements and single-particle mass spectra were made at sites around the Port Talbot steelworks in 2012. By using wind direction data in conjunction with real-time or hourly-average pollutant concentration measurements, it has been possible to locate areas within the steelworks associated with enhanced pollutant emissions. Directional analysis highlights the Slag Handling area of the works as the most substantial source of elevated PM10 concentrations during the measurement period. Chemical analyses of air sampled from the relevant wind directions are consistent with the anticipated composition of slags, as are single-particle mass spectra. Elevated concentrations of PM10 are related to inverse distance from the Slag Handling area, and concentrations increase with increased wind speed, consistent with a wind-driven resuspension source. There also appears to be a lesser source associated with Sinter Plant emissions affecting PM10 concentrations at the Fire Station monitoring site. The results are compared with an ME-2 study using some of the same data, and shown to give a clearer view of the location and characteristics of emission sources, including fugitive dusts.
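At its simplest, the directional analysis conditions concentrations on wind-direction sectors (a pollution rose). The sketch below uses an illustrative sector width and made-up data, not the Port Talbot measurements.

```python
import numpy as np

def directional_means(conc, wd_deg, width=30):
    """Mean concentration per wind-direction sector of the given width (deg)."""
    edges = np.arange(0, 360 + width, width)
    idx = np.digitize(np.asarray(wd_deg) % 360, edges) - 1
    return np.array([conc[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(len(edges) - 1)])

conc = np.array([50., 60., 10., 12.])      # hypothetical PM10 values
wd = np.array([100., 95., 200., 210.])     # hourly wind directions (deg)
print(directional_means(conc, wd, width=90))   # elevated mean in the 90-180 sector
```

A persistent peak in one sector, as seen here toward 90-180 degrees, is the kind of signature used to point back at a source area within the works.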
NASA Astrophysics Data System (ADS)
Jiuxun, Sun; Qiang, Wu; Lingcang, Cai; Fuqian, Jing
2006-01-01
A high-accuracy equation of state (EOS) is proposed that strictly satisfies the Fermi-gas limit at high pressure. The EOS (SJX EOS) is a modification of the effective Rydberg (ER2) EOS. Instead of Holzapfel's method of directly modifying the ER2 EOS, one modifying term is added to the ER2 EOS so that it not only satisfies the high-pressure limit but also avoids the disadvantages of the Holzapfel and 'adapted polynomial expansion of the order 3' (AP3) EOSs. The two-parameter ER2 and Holzapfel EOSs and the three-parameter SJX, AP3, and Kumari-Dass (KD) EOSs are applied to 50 materials to fit all available experimental compression data. The five EOSs are also applied to 37 of the 50 materials to fit experimental compression data in low-pressure ranges. The results show that over all pressure ranges the AP3 EOS gives the best fits; the SJX, ER2, Holzapfel and KD EOSs sequentially give inferior results. It is also shown that the values of B0, B0′ and B0″ differ between EOSs and, within one EOS, between high- and low-pressure ranges. The SJX EOS gives the best consistency between the values obtained by fitting all available experimental data and those obtained by fitting the low-pressure data alone; the AP3 EOS gives the worst. The differences between the values of B0, B0′ and B0″ obtained for the ER2, Holzapfel and KD EOSs and those obtained for the SJX EOS are large for high-pressure ranges but decrease for low-pressure ranges. At present, the newest experimental compression data, over the widest compression range, are available for solid n-H2. The values of B0, B0′ and B0″ fitted by using the SJX EOS are in close agreement with these experimental data; the ER2 EOS gives inferior values, and the other EOSs give considerably worse results. 
For the predicted compression curves and the cohesive energy, the SJX EOS gives the best results and the AP3 EOS the worst; for many solids the AP3 EOS even fails to give physically correct results for the cohesive energy. The analysis shows that for such solids the variation of pressure and energy with compression ratio calculated using the AP3 EOS oscillates unphysically. Although the AP3 EOS has the best ability to fit the pressures, it has the worst predictive ability and fails to be a universal EOS. The SJX EOS is recommended and can be taken as a candidate universal EOS for predicting compression curves of solids over a wide pressure range using only the values of B0, B0′ and B0″ obtained from low-pressure data.
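None of the EOS forms compared above is reproduced here, but the shared fitting workflow, least-squares adjustment of bulk-modulus parameters to pressure-volume data, can be illustrated with the standard second-order Birch-Murnaghan form (B0′ fixed at 4), for which the fit reduces to a single linear least-squares step. The function names and the synthetic data are illustrative.

```python
import numpy as np

def bm2_pressure(v_ratio, b0):
    """Second-order Birch-Murnaghan EOS: pressure as a function of V0/V."""
    x = np.asarray(v_ratio, dtype=float)
    return 1.5 * b0 * (x**(7.0 / 3.0) - x**(5.0 / 3.0))

def fit_b0(v_ratio, pressures):
    """Least-squares estimate of B0 from (V0/V, P) data; B0' is fixed at 4.

    Since P is linear in B0 for this form, the fit is a one-line projection.
    """
    x = np.asarray(v_ratio, dtype=float)
    f = 1.5 * (x**(7.0 / 3.0) - x**(5.0 / 3.0))
    p = np.asarray(pressures, dtype=float)
    return float(np.dot(f, p) / np.dot(f, f))
```

Three-parameter forms such as SJX or AP3 additionally fit B0′ (and effectively B0″), which requires nonlinear least squares rather than this closed-form projection.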
Uncertainty-driven nuclear data evaluation including thermal (n,α) applied to 59Ni
NASA Astrophysics Data System (ADS)
Helgesson, P.; Sjöstrand, H.; Rochman, D.
2017-11-01
This paper presents a novel approach to the evaluation of nuclear data (ND), combining experimental data for thermal cross sections with resonance parameters and nuclear reaction modeling. The method involves sampling of various uncertain parameters, in particular uncertain components in experimental setups, and provides extensive covariance information, including consistent cross-channel correlations over the whole energy spectrum. The method is developed for, and applied to, 59Ni, but may be used as a whole, or in part, for other nuclides. 59Ni is particularly interesting since a substantial amount of 59Ni is produced in thermal nuclear reactors by neutron capture in 58Ni and since it has a non-threshold (n,α) cross section. Therefore, 59Ni makes a very important contribution to the helium production in stainless steel in a thermal reactor. However, current evaluated ND libraries contain old information for 59Ni, without any uncertainty information. The work includes a study of thermal cross section experiments and a novel combination of this experimental information, giving the full multivariate distribution of the thermal cross sections. In particular, the thermal (n,α) cross section is found to be 12.7 ± 0.7 b. This is consistent with, but yet different from, current established values. Further, the distribution of thermal cross sections is combined with reported resonance parameters, and with TENDL-2015 data, to provide full random ENDF files; all of this is done in a novel way, keeping uncertainties and correlations in mind. The random files are also condensed into one single ENDF file with covariance information, which is now part of a beta version of JEFF 3.3. Finally, the random ENDF files have been processed and used in an MCNP model to study the helium production in stainless steel. 
The increase in the (n,α) rate due to 59Ni compared to fresh stainless steel is found to be a factor of 5.2 at a certain time in the reactor vessel, with a relative uncertainty due to the 59Ni data of 5.4%.
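The full evaluation samples correlated uncertainty components in the experimental setups; the most basic ingredient, combining independent thermal cross-section measurements into a single value with an uncertainty, can be sketched as an inverse-variance weighted mean. The numbers in the usage note are illustrative, not the 59Ni data.

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard uncertainty.

    Assumes the measurements are independent; the paper's treatment
    additionally handles correlated uncertainty components by sampling.
    """
    v = np.asarray(values, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    w = 1.0 / s**2
    mean = np.sum(w * v) / np.sum(w)
    return float(mean), float(np.sqrt(1.0 / np.sum(w)))
```

For example, two equally precise measurements of 12 b and 13 b (each ±1 b) combine to 12.5 ± 0.71 b; a much more precise measurement dominates the combination, as it should.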
Barter System for Researchers: A Proposal. Opinion Paper
ERIC Educational Resources Information Center
Tebbutt, Arthur V.
1970-01-01
The barter system for researchers consists of two parts: a list linking various research areas with researchers willing to entertain collegial inquiries, and a clerk to note the number of hours that participants give and receive in the information exchange. (MF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehtomäki, Jouko; Makkonen, Ilja; Harju, Ari
We present a computational scheme for orbital-free density functional theory (OFDFT) that simultaneously provides access to all-electron values and preserves the OFDFT linear scaling as a function of system size. Using the projector augmented-wave method (PAW) in combination with real-space methods, we overcome some obstacles faced by other available implementation schemes. The advantages of using the PAW method are twofold. First, PAW reproduces all-electron values while offering freedom in adjusting the convergence parameters, and the atomic setups allow the numerical accuracy to be tuned per element. Second, PAW can provide a solution to some of the convergence problems exhibited in other OFDFT implementations based on Kohn-Sham (KS) codes. Using PAW and real-space methods, our orbital-free results agree with the reference all-electron values with a mean absolute error of 10 meV, and the number of iterations required by the self-consistent cycle is comparable to the KS method. The comparison of all-electron and pseudopotential bulk moduli and lattice constants reveals an enormous difference, demonstrating that in order to assess the performance of OFDFT functionals it is necessary to use implementations that obtain all-electron values. The proposed combination of methods is the most promising route currently available. We finally show that a parametrized kinetic energy functional can give lattice constants and bulk moduli comparable in accuracy to those obtained by the KS PBE method, exemplified with the case of diamond.
Ordinary differential equations.
Lebl, Jiří
2013-01-01
In this chapter we provide an overview of the basic theory of ordinary differential equations (ODE). We give the basics of analytical methods for their solutions and also review numerical methods. The chapter should serve as a primer for the basic application of ODEs and systems of ODEs in practice. As an example, we work out the equations arising in Michaelis-Menten kinetics and give a short introduction to using Matlab for their numerical solution.
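The Michaelis-Menten example mentioned above lends itself to a short numerical illustration. The sketch below uses plain Python rather than the Matlab the chapter works with, and integrates substrate depletion dS/dt = -Vmax·S/(Km + S) with the classical fourth-order Runge-Kutta method; the parameter values are arbitrary.

```python
def michaelis_menten_rate(s, vmax=1.0, km=0.5):
    """dS/dt for substrate depletion under Michaelis-Menten kinetics."""
    return -vmax * s / (km + s)

def rk4(f, y0, t0, t1, n_steps):
    """Classical fourth-order Runge-Kutta integration of dy/dt = f(y)."""
    h = (t1 - t0) / n_steps
    y = y0
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y
```

Because the equation is separable, the implicit analytical solution Km·ln(S0/S) + (S0 - S) = Vmax·t provides an exact check on the numerical result.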
Determination of slope failure using 2-D resistivity method
NASA Astrophysics Data System (ADS)
Muztaza, Nordiana Mohd; Saad, Rosli; Ismail, Nur Azwin; Bery, Andy Anderson
2017-07-01
Landslides and slope failures may have negative economic effects, including the cost of repairing structures, loss of property value and medical costs in the event of injury. To avoid landslides, slope failure and disturbance of the ecosystem, good and detailed planning must be done when developing hilly areas. Slope failure classification and the various factors contributing to instability are described using 2-D resistivity surveys conducted in Selangor, Malaysia. The study on landslide and slope failure was conducted at Site A and Site B, Selangor, using the 2-D resistivity method. The implications of the anticipated ground conditions as well as field observations of the actual conditions are discussed. Nine 2-D resistivity survey lines were conducted at Site A, and six 2-D resistivity survey lines with 5 m minimum electrode spacing using a pole-dipole array were performed at Site B. The data were processed using Res2Dinv and Surfer10 software to evaluate the subsurface characteristics. The 2-D resistivity results from both locations show that the study areas consist of two main zones. The first zone is alluvium or highly weathered material with resistivities of 100-1000 Ωm at 20-70 m depth; this zone contains saturated areas (1-100 Ωm) and boulders with resistivity values of 1200-3000 Ωm. The second zone, with resistivity values of > 3000 Ωm, was interpreted as granitic bedrock. The study area is characterized by saturated zones, a highly weathered zone, a high sand content and boulders, all of which can trigger slope failure. Based on these findings, it can be concluded that the 2-D resistivity method is a useful tool for the determination of slope failure.
Charge distribution and transport properties in reduced ceria phases: A review
NASA Astrophysics Data System (ADS)
Shoko, E.; Smith, M. F.; McKenzie, Ross H.
2011-12-01
The question of the charge distribution in reduced ceria phases (CeO2-x) is important for understanding the microscopic physics of oxygen storage capacity and the electronic and ionic conductivities in these materials. All of these are key properties in the application of these materials in catalysis and electrochemical devices. Several approaches have been applied to study this problem, including ab initio methods. Recently [1], we applied the bond valence model (BVM) to discuss the charge distribution in several different crystallographic phases of reduced ceria. Here, we compare the BVM results to those from atomistic simulations to determine whether the predictions of the two approaches are consistent. Our analysis shows that the two methods give a consistent picture of the charge distribution around oxygen vacancies in bulk reduced ceria phases. We then review the transport theory applicable to reduced ceria phases, providing useful relationships that enable comparison of experimental results obtained by different techniques. In particular, we compare transport parameters obtained from the observed optical absorption spectrum, α(ω), and from the dc electrical conductivity with those predicted by small polaron theory and the Harrison method. The small polaron energy is comparable to that estimated from α(ω). However, we found a discrepancy between the value of the electron hopping matrix element, t, estimated from the Marcus-Hush formula and that obtained by the Harrison method. Part of this discrepancy could be attributed to the system lying in the crossover region between adiabatic and nonadiabatic behavior, whereas our calculations assumed the system to be nonadiabatic. Finally, by considering the relationship between the charge distribution and the electronic conductivity, we suggest the possibility of low-temperature metallic conductivity for intermediate phases, i.e., x ≈ 0.3. This has not yet been experimentally observed.
The Principle of Energetic Consistency: Application to the Shallow-Water Equations
NASA Technical Reports Server (NTRS)
Cohn, Stephen E.
2009-01-01
If the complete state of the earth's atmosphere (e.g., pressure, temperature, winds and humidity, everywhere throughout the atmosphere) were known at any particular initial time, then solving the equations that govern the dynamical behavior of the atmosphere would give the complete state at all subsequent times. Part of the difficulty of weather prediction is that the governing equations can only be solved approximately, which is what weather prediction models do. But weather forecasts would still be far from perfect even if the equations could be solved exactly, because the atmospheric state is not and cannot be known completely at any initial forecast time. Rather, the initial state for a weather forecast can only be estimated from incomplete observations taken near the initial time, through a process known as data assimilation. Weather prediction models carry out their computations on a grid of points covering the earth's atmosphere. The formulation of these models is guided by a mathematical convergence theory which guarantees that, given the exact initial state, the model solution approaches the exact solution of the governing equations as the computational grid is made more fine. For the data assimilation process, however, there does not yet exist a convergence theory. This book chapter represents an effort to begin establishing a convergence theory for data assimilation methods. The main result, which is called the principle of energetic consistency, provides a necessary condition that a convergent method must satisfy. Current methods violate this principle, as shown in earlier work of the author, and therefore are not convergent. The principle is illustrated by showing how to apply it as a simple test of convergence for proposed methods.
Kitamura, Aya; Kawai, Yasuhiko
2015-01-01
Laminated alginate impression for edentulous patients is simple and time-efficient compared to the border molding technique. The purpose of this study was to examine the clinical applicability of the laminated alginate impression by measuring the effects of different water/powder (W/P) ratios, mixing methods, and bonding methods in the secondary impression of the alginate impression. Three W/P ratios, the manufacturer-designated mixing water amount (standard), 1.5-fold (1.5×) and 1.75-fold (1.75×) water amounts, were mixed by manual and automatic mixing methods. Initial and complete setting time, permanent and elastic deformation, and consistency of the secondary impression were investigated (n=10). Additionally, tensile bond strength between the primary and secondary impressions was measured for the following surface treatments: air blow only (A), surface baking (B), and alginate impression material bonding agent (ALGI-BOND: AB) (n=12). Initial setting times were significantly shortened with automatic mixing for all W/P ratios (p<0.05). Permanent deformation decreased and elastic deformation increased at high W/P, regardless of the mixing method. Elastic deformation was significantly reduced at 1.5× and 1.75× with automatic mixing (p<0.05). All of these properties were within JIS standards. For all W/P ratios, AB showed significantly higher bonding strength than A and B (p<0.01). The increased mixing water amounts, 1.5× and 1.75×, kept setting times within JIS standards, suggesting applicability in the clinical setting. The use of an automatic mixing device decreased elastic strain and shortened the curing time. For the secondary impression, applying adhesive to the primary impression gives secure adhesion. Copyright © 2014 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
Dimensional transitions in thermodynamic properties of ideal Maxwell-Boltzmann gases
NASA Astrophysics Data System (ADS)
Aydin, Alhun; Sisman, Altug
2015-04-01
An ideal Maxwell-Boltzmann gas confined in various rectangular nanodomains is considered under quantum size effects. Thermodynamic quantities are calculated from their relations with the partition function, which consists of triple infinite summations over momentum states in each direction. To obtain analytical expressions, summations are converted to integrals for macrosystems by a continuum approximation, which fails at the nanoscale. To avoid both the numerical calculation of summations and the failure of their integral approximations at the nanoscale, a method which gives an analytical expression for a single particle partition function (SPPF) is proposed. It is shown that a dimensional transition in momentum space occurs at a certain magnitude of confinement. Therefore, to represent the SPPF by lower-dimensional analytical expressions becomes possible, rather than numerical calculation of summations. Considering rectangular domains with different aspect ratios, a comparison of the results of derived expressions with those of summation forms of the SPPF is made. It is shown that analytical expressions for the SPPF give very precise results with maximum relative errors of around 1%, 2% and 3% at exactly the transition point for single, double and triple transitions, respectively. Based on dimensional transitions, expressions for free energy, entropy, internal energy, chemical potential, heat capacity and pressure are given analytically valid for any scale.
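The dimensional-transition idea can be made concrete with a textbook one-dimensional example: the single-direction partition sum over states n of exp(-α n²), where α grows as the box shrinks. The sketch below compares direct summation with the continuum (integral) approximation; these are standard forms, not the paper's SPPF expressions.

```python
import math

def sppf_sum(alpha, n_max=10000):
    """Single-direction partition function: direct summation over states."""
    return sum(math.exp(-alpha * n * n) for n in range(1, n_max + 1))

def sppf_integral(alpha):
    """Continuum (integral) approximation to the same sum."""
    return 0.5 * math.sqrt(math.pi / alpha)
```

At macroscale confinement (small α) the two agree to well under a percent, while under strong confinement the integral approximation overestimates the sum severely, which is exactly the regime where an analytical low-dimensional SPPF expression is needed.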
Direct process estimation from tomographic data using artificial neural systems
NASA Astrophysics Data System (ADS)
Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.
2001-07-01
The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, due to its low cost, non-intrusion, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used with their corresponding component fractions to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned ECT simulated and plant measurements. Results included in the paper have a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well-known direct ECT method.
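The dimensionality-reduction step named above, PCA applied to the raw capacitance measurements before network training, can be sketched via the singular value decomposition. The data here are synthetic and the MLFFNN training itself is omitted.

```python
import numpy as np

def pca_reduce(measurements, n_components):
    """Project measurements onto their leading principal components.

    measurements: (n_samples, n_features) matrix, e.g. raw ECT readings.
    """
    x = measurements - measurements.mean(axis=0)
    # SVD of the centred data: rows of vt are the principal directions.
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T
```

For the 66 inter-electrode capacitances of a 12-electrode sensor, the leading components carry most of the variance, so the network sees fewer, decorrelated inputs.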
A Study on the Reliability of Sasang Constitutional Body Trunk Measurement
Jang, Eunsu; Kim, Jong Yeol; Lee, Haejung; Kim, Honggie; Baek, Younghwa; Lee, Siwoo
2012-01-01
Objective. Body trunk measurement plays an important diagnostic role not only in conventional medicine but also in Sasang constitutional medicine (SCM). The Sasang constitutional body trunk measurement (SCBTM) consists of the 5 widths and the 8 circumferences at the standard locations currently employed in the SCM society. This study examines to what extent comprehensive training can improve the reliability of the SCBTM. Methods. We recruited 10 male subjects and 5 male observers with no experience of anthropometric measurement. We conducted measurements twice, before and after comprehensive training. Relative technical errors of measurement (%TEMs) were computed to assess intra- and inter-observer reliabilities. Results. Post-training intra-observer %TEMs of the SCBTM were 0.27% to 1.85%, reduced from 0.27% to 6.26% pre-training. Post-training inter-observer %TEMs were 0.56% to 1.66%, reduced from 1.00% to 9.60% pre-training. Post-training total %TEMs, which represent the overall reliability, were 0.68% to 2.18%, reduced from a maximum of 10.18%. Conclusion. Comprehensive training makes the SCBTM more reliable, hence giving a sufficiently confident diagnostic tool. Comprehensive training in advance is strongly recommended before taking the SCBTM. PMID:21822442
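The %TEM statistic used above has a standard definition for paired repeat measurements: TEM = sqrt(Σ dᵢ²/2n) over the per-subject differences dᵢ, expressed as a percentage of the grand mean. A sketch with made-up values:

```python
import math

def percent_tem(first, second):
    """Relative technical error of measurement for one observer's
    two repeated measurements (first[i], second[i]) on each subject."""
    n = len(first)
    ss = sum((a - b) ** 2 for a, b in zip(first, second))
    tem = math.sqrt(ss / (2 * n))
    grand_mean = (sum(first) + sum(second)) / (2 * n)
    return 100.0 * tem / grand_mean
```

Inter-observer and total TEMs follow the same pattern generalized to k observers; only the two-measurement intra-observer case is shown here.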
NASA Astrophysics Data System (ADS)
Peters, Andrew J.; Lawson, Richard A.; Nation, Benjamin D.; Ludovice, Peter J.; Henderson, Clifford L.
2016-01-01
State-of-the-art block copolymer (BCP)-directed self-assembly (DSA) methods still yield defect densities orders of magnitude higher than is tolerable in semiconductor fabrication, despite free-energy calculations suggesting that equilibrium defect densities are far lower than required for economic fabrication. This disparity suggests that the main problem may lie in the kinetics of defect removal. This work uses a coarse-grained model to study the rates, pathways, and dependencies of healing a common defect, to give insight into the fundamental processes that control defect healing and guidance on optimal process conditions for BCP-DSA. It is found that bulk simulations yield an exponential drop in defect heal rate above χN ≈ 30. Thin films show no change in rate associated with the energy barrier below χN ≈ 50, significantly higher than the χN values found previously in self-consistent field theory studies that neglect fluctuations. Above χN ≈ 50, the simulations show an energy barrier that increases with a scaling of 1/2 to 1/3 that of the bulk systems. This is because thin films always begin healing at the free interface or the BCP-underlayer interface, where the increased A-B contact area associated with the transition state is minimized, whereas infinitely thick films cannot begin healing at an interface.
Schotes, Christoph; Mezzetti, Antonio
2011-01-01
We report here dicationic ruthenium PNNP complexes that promote the enantioselective Diels-Alder reaction of alpha-methylene beta-ketoesters with various dienes. Complex [Ru(OEt2)2(PNNP)](PF6)2, formed in situ from [RuCl2(PNNP)] and (Et3O)PF6 (2 equiv.), catalyzes the Diels-Alder reaction of such unsaturated beta-ketoesters to give novel alkoxycarbonyltetrahydro-1-indanone derivatives (nine examples) with up to 93% ee. The crystal structure of the substrate-catalyst adduct shows that the lower face of the substrate is shielded by a phenyl ring of the PNNP ligand, which accounts for the high enantioselectivity. The attack of the diene from the open re enantioface of the unsaturated beta-ketoester is consistent with the absolute configuration of the product. A useful application of this method is the reaction with Dane's diene to give estrone derivatives with up to 99% ee and an ester-exo:endo ratio of up to 145:1 (after recrystallization). Besides the enantioselective formation of all-carbon quaternary centers, this methodology is notable because unsaturated beta-ketoesters have been rarely used in Diels-Alder reactions. Furthermore, enantiomerically pure estrone derivatives are interesting in view of their potential applications, including the treatment of breast cancer.
[Methods of protein gradient determination for diagnostic use in the clinical laboratory].
Schulz, D; Rothenhöfer, C
1982-02-25
Computed quotients of the concentrations of individual proteins in serum and in cerebrospinal fluid, recorded semilogarithmically against the hydrodynamic radius of the molecules, may be connected to give lines whose positions are believed to estimate the actual function of the blood-brain barrier (Felgenhauer et al. 1974; 1976). Examinations of different samples of blood and cerebrospinal fluid demonstrate that the described method and the simple measurement of total protein in cerebrospinal fluid are of the same power for estimating the function of the blood-brain barrier. However, the method introduced by Felgenhauer et al. makes it possible to demonstrate immunoglobulins that have not reached the cerebrospinal fluid from the blood stream, and so to diagnose a local humoral immune response within the central nervous system. In this way the method of Felgenhauer et al. offers advantages for the diagnosis of neurological diseases.
Characterization of molecularly imprinted polymers using a new polar solvent titration method.
Song, Di; Zhang, Yagang; Geer, Michael F; Shimizu, Ken D
2014-07-01
A new method of characterizing molecularly imprinted polymers (MIPs) was developed and tested, which provides a more accurate means of identifying and measuring the molecular imprinting effect. In the new polar solvent titration method, a series of imprinted and non-imprinted polymers were prepared in solutions containing increasing concentrations of a polar solvent. The polar solvent additives systematically disrupted the templation and monomer aggregation processes in the prepolymerization solutions, and the extent of disruption was captured by the polymerization process. The changes in binding capacity within each series of polymers were measured, providing a quantitative assessment of the templation and monomer aggregation processes in the imprinted and non-imprinted polymers. The new method was tested using three different diphenyl phosphate imprinted polymers made using three different urea functional monomers, each with differing efficiencies of templation and monomer aggregation. The new MIP characterization method was found to have several advantages. First, the polar solvent titration method is less susceptible to false positives in identifying the imprinting effect. Second, the method is able to differentiate and quantify changes in binding capacity, as measured at a fixed guest and polymer concentration, arising from templation or monomer aggregation processes in the prepolymerization solution. Third, the method is easy to carry out, taking advantage of the ease of preparing MIPs. To independently verify the new characterization method, the MIPs were also characterized using traditional binding isotherm analyses; the two methods appeared to give consistent conclusions. Copyright © 2014 John Wiley & Sons, Ltd.
Foundation Design against Frost Action in Europe.
1988-03-01
the inside of the foundation wall and consisted of 50 mm of 'Rockwool' (a mineral wool). This insulation guides heat from the house down toward the... Separation of the floor slab from the inner part of the foundation wall by introducing mineral wool (Fig. 49c) gives a further slight reduction in... construction consisting, for example, of wood and mineral wool and including air spaces. Rt is the thermal resistance of the composite floor from surface to
Geothermometry of Kilauea Iki lava lake, Hawaii
Helz, R.T.; Thornber, C.R.
1987-01-01
Data on the variation of temperature with time and in space are essential to a complete understanding of the crystallization history of basaltic magma in Kilauea Iki lava lake. Methods used to determine temperatures in the lake have included direct, downhole thermocouple measurements and Fe-Ti oxide geothermometry. In addition, the temperature variations of MgO and CaO contents of glasses, as determined in melting experiments on appropriate Kilauean samples, have been calibrated for use as purely empirical geothermometers and are directly applicable to interstitial glasses in olivine-bearing core from Kilauea Iki. The uncertainty in inferred quenching temperatures is ±8-10° C. Comparison of the three methods shows that (1) oxide and glass geothermometry give results that are consistent with each other and consistent with the petrography and relative position of samples, (2) downhole thermocouple measurements are low in all but the earliest, shallowest holes because the deeper holes never completely recover to predrilling temperatures, (3) glass geothermometry provides the greatest detail on temperature profiles in the partially molten zone, much of which is otherwise inaccessible, and (4) all three methods are necessary to construct a complete temperature profile for any given drill hole. Application of glass-based geothermometry to partially molten drill core recovered in 1975-1981 reveals in great detail the variation of temperature, in both time and space, within the partially molten zone of Kilauea Iki lava lake. The geothermometers developed here are also potentially applicable to glassy samples from other Kilauea lava lakes and to rapidly quenched lava samples from eruptions of Kilauea and Mauna Loa. © 1987 Springer-Verlag.
Comparing deflection measurements of a magnetically steerable catheter using optical imaging and MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lillaney, Prasheel, E-mail: Prasheel.Lillaney@ucsf.edu; Caton, Curtis; Martin, Alastair J.
2014-02-15
Purpose: Magnetic resonance imaging (MRI) is an emerging modality for interventional radiology, giving clinicians another tool for minimally invasive image-guided interventional procedures. Difficulties associated with endovascular catheter navigation using MRI guidance led to the development of a magnetically steerable catheter. The focus of this study was to mechanically characterize deflections of two different prototypes of the magnetically steerable catheter in vitro to better understand their efficacy. Methods: A mathematical model for deflection of the magnetically steerable catheter is formulated based on the principle that at equilibrium the mechanical and magnetic torques are equal to each other. Furthermore, two different image-based methods for empirically measuring the catheter deflection angle are presented. The first, referred to as the absolute tip method, measures the angle of the line that is tangential to the catheter tip. The second, referred to as the base to tip method, is an approximation that is used when it is not possible to measure the angle of the tangent line. Optical images of the catheter deflection are analyzed using the absolute tip method to quantitatively validate the predicted deflections from the mathematical model. Optical images of the catheter deflection are also analyzed using the base to tip method to quantitatively determine the differences between the absolute tip and base to tip methods. Finally, the optical images are compared to MR images using the base to tip method to determine the accuracy of measuring the catheter deflection using MR. Results: The optical catheter deflection angles measured for both catheter prototypes using the absolute tip method fit the mathematical model very well (R² = 0.91 and 0.86 for the two prototypes, respectively). It was found that the angles measured using the base to tip method were consistently smaller than those measured using the absolute tip method. 
The deflection angles measured using optical data did not demonstrate a significant difference from the angles measured using MR image data when compared using the base to tip method. Conclusions: This study validates the theoretical description of the magnetically steerable catheter, while also giving insight into different methods and modalities for measuring the deflection angles of the prototype catheters. These results can be used to mechanically model future iterations of the design. Quantifying the difference between the different methods for measuring catheter deflection will be important when making deflection measurements in future studies. Finally, MR images can be used to reliably measure deflection angles, since there is no significant difference between the MR and optical measurements.
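The equilibrium condition in the Methods, magnetic torque equal to mechanical restoring torque, can be sketched numerically. The linear-elastic restoring term and all parameter values below are illustrative assumptions, not the paper's model details:

```python
from math import sin, radians, degrees

def deflection_angle(mB, EI_over_L, field_angle_rad):
    """Find theta with mB*sin(field_angle - theta) = (EI/L)*theta by bisection.

    mB: magnetic moment times field strength (N*m); EI_over_L: effective
    bending stiffness per radian (N*m/rad). Both values are hypothetical.
    """
    lo, hi = 0.0, field_angle_rad
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # residual: magnetic torque minus elastic restoring torque
        r = mB * sin(field_angle_rad - mid) - EI_over_L * mid
        if r > 0:
            lo = mid  # magnetic torque still dominates: deflect further
        else:
            hi = mid
    return mid

# field perpendicular to the undeflected catheter axis
theta = deflection_angle(mB=2e-4, EI_over_L=1e-4, field_angle_rad=radians(90))
```

The bisection converges because the residual is monotone between zero deflection (magnetic torque dominates) and full alignment with the field (elastic torque dominates).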
NASA Technical Reports Server (NTRS)
Prost, L.; Pauillac, A.
1978-01-01
Experience has shown that different methods of analysis of SiC products give different results. Methods identified as AFNOR, FEPA, and manufacturer P, currently used to detect SiC, free C, free Si, free Fe, and SiO2 are reviewed. The AFNOR method gives lower SiC content, attributed to destruction of SiC by grinding. Two products sent to independent labs for analysis by the AFNOR and FEPA methods showed somewhat different results, especially for SiC, SiO2, and Al2O3 content, whereas an X-ray analysis showed a SiC content approximately 10 points lower than by chemical methods.
Geometric Hitting Set for Segments of Few Orientations
Fekete, Sandor P.; Huang, Kan; Mitchell, Joseph S. B.; ...
2016-01-13
Here we study several natural instances of the geometric hitting set problem for input consisting of sets of line segments (and rays, lines) having a small number of distinct slopes. These problems model path monitoring (e.g., on road networks) using the fewest sensors (the "hitting points"). We give approximation algorithms for cases including (i) lines of 3 slopes in the plane, (ii) vertical lines and horizontal segments, (iii) pairs of horizontal/vertical segments. Lastly, we give hardness and hardness-of-approximation results for these problems. We prove that the hitting set problem for vertical lines and horizontal rays is polynomially solvable.
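For the sub-case of vertical lines hitting horizontal segments, projecting onto one axis reduces the problem to classical interval stabbing, which a greedy sweep solves optimally. A sketch (this illustrates the reduction, not the paper's algorithms for the harder variants):

```python
def min_stabbing_points(intervals):
    """Greedy interval stabbing: fewest x-positions (vertical lines) that
    hit every horizontal segment, each projected to a [left, right] interval.
    Sorting by right endpoint and stabbing there is optimal."""
    points = []
    last = None
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or left > last:
            last = right          # place a stabber at the right endpoint
            points.append(last)
    return points

segments = [(0, 2), (1, 3), (2.5, 4), (5, 6)]
stabbers = min_stabbing_points(segments)  # → [2, 4, 6]
```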
NASA Astrophysics Data System (ADS)
Deleuze, M. D.; Bernardi, P. B.; Caïs, Ph. C.; Perez, R. P.; Rees, J. M. R.; Pares, L. P.; Dubois, B. D.; Parot, Y. P.; Quertier, B. Q.; Maurice, S. M.; Maccabe, K. M.; Wiens, R. W.; Rull, F. R.
2016-10-01
This paper will describe and give a development status of SuperCam's mast unit. SuperCam will be carried on the Mars 2020 rover and consists of an enhanced version of the ChemCam LIBS instrument that is still operating on the surface of Mars aboard Curiosity.
INTERIM RADON-RESISTANT CONSTRUCTION GUIDELINES FOR USE IN FLORIDA, 1989
The report gives results of a project to investigate, analyze, and develop radon-resistant construction guidelines that are consistent with other building codes and that could be applied to Florida. Literature search resulted in information on radon remediation techniques, new c...
ALLOY FOR USE IN NUCLEAR FISSION
Spedding, F.A.; Wilhelm, H.A.
1958-03-11
This patent relates to an alloy composition capable of functioning as a solid homogeneous reactor fuel. The alloy consists of a beryllium moderator, together with at least 0.7% of U{sup 235}, and up to 50% thorium to give increased workability to the alloy.
Shear Lag in Box Beams Methods of Analysis and Experimental Investigations
NASA Technical Reports Server (NTRS)
Kuhn, Paul; Chiarito, Patrick T
1942-01-01
The bending stresses in the covers of box beams or wide-flange beams differ appreciably from the stresses predicted by the ordinary bending theory on account of shear deformation of the flanges. The problem of predicting these differences has become known as the shear-lag problem. The first part of this paper deals with methods of shear-lag analysis suitable for practical use. The second part of the paper describes strain-gage tests made by the NACA to verify the theory. Three tests published by other investigators are also analyzed by the proposed method. The third part of the paper gives numerical examples illustrating the methods of analysis. An appendix gives comparisons with other methods, particularly with the method of Ebner and Koller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mínguez, Pablo, E-mail: pablo.minguezgabina@osakidetza.net; Flux, Glenn; Genollá, José
2015-07-15
Purpose: The aim was to investigate whole-body and red marrow absorbed doses in treatments of neuroblastoma (NB) and adult neuroendocrine tumors (NETs) with {sup 131}I-metaiodobenzylguanidine and to propose a simple method for determining the activity to administer when dosimetric data for the individual patient are not available. Methods: Nine NB patients and six NET patients were included, giving 19 treatments in total, as four patients were treated twice. Whole-body absorbed doses were determined from dose-rate measurements and planar gamma-camera imaging. For six NB and five NET treatments, red marrow absorbed doses were also determined using the blood-based method. Results: Dosimetric data from repeated administrations in the same patient were consistent. In groups of NB and NET patients, similar whole-body residence times were obtained, implying that whole-body absorbed dose per unit of administered activity could be reasonably well described as a power function of the patient mass. For NB, this functional form was found to be consistent with dosimetric data from previously published studies. The whole-body to red marrow absorbed dose ratio was similar among patients, with values between 1.4 ± 0.6 and 1.7 ± 0.7 (1 standard deviation) in NB treatments and between 1.5 ± 0.6 and 1.7 ± 0.7 (1 standard deviation) in NET treatments. Conclusions: The consistency of dosimetric results between administrations for the same patient supports prescription of the activity based on dosimetry performed in pretreatment studies, or during the first administration in a fractionated schedule. The expressions obtained for whole-body absorbed dose per unit of administered activity as a function of patient mass for NB and NET treatments are believed to be a useful tool for estimating the activity to administer at the stage when the individual patient's biokinetics has not yet been measured.
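The proposed prescription scheme, whole-body absorbed dose per unit activity modeled as a power function of patient mass, can be sketched as below. The coefficients a and b are hypothetical placeholders, not the fitted values from the study:

```python
def dose_per_unit_activity(mass_kg, a, b):
    """Whole-body absorbed dose per unit administered activity (Gy/GBq),
    modeled as a power function of patient mass; a, b are fit parameters."""
    return a * mass_kg ** (-b)

def activity_to_administer(target_dose_gy, mass_kg, a, b):
    """Invert the power law: activity giving the target whole-body dose."""
    return target_dose_gy / dose_per_unit_activity(mass_kg, a, b)

# Hypothetical coefficients, for illustration only
a, b = 0.85, 0.77
activity = activity_to_administer(target_dose_gy=2.0, mass_kg=20.0, a=a, b=b)
```

Because the model is linear in dose, doubling the target dose doubles the prescribed activity, and heavier patients require more activity for the same whole-body dose.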
System and method for trapping and measuring a charged particle in a liquid
Reed, Mark A; Krstic, Predrag S; Guan, Weihua; Zhao, Xiongce
2013-07-23
A system and method for trapping a charged particle is disclosed. A time-varying periodic multipole electric potential is generated in a trapping volume. A charged particle under the influence of the multipole electric field is confined to the trapping volume. A three electrode configuration giving rise to a 3D Paul trap and a four planar electrode configuration giving rise to a 2D Paul trap are disclosed.
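The confinement produced by a time-varying quadrupole potential in an ideal Paul trap is governed, in dimensionless form, by the Mathieu equation x'' = -(a + 2q cos 2τ)x. A sketch of the stable/unstable behavior (parameters illustrative; this is not the patent's electrode geometry):

```python
import math

def max_amplitude(a, q, steps=40000, dt=0.001):
    """Integrate x'' = -(a + 2*q*cos(2*tau))*x (dimensionless Mathieu form)
    with semi-implicit Euler and return the largest |x| reached."""
    x, v, tau = 1.0, 0.0, 0.0
    peak = abs(x)
    for _ in range(steps):
        v += -(a + 2.0 * q * math.cos(2.0 * tau)) * x * dt
        x += v * dt
        tau += dt
        peak = max(peak, abs(x))
    return peak

stable = max_amplitude(a=0.0, q=0.3)    # inside the first stability region
unstable = max_amplitude(a=0.0, q=1.5)  # outside it: amplitude grows
```

For a = 0 the first stability region ends near q ≈ 0.908, so the first trajectory stays bounded while the second diverges.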
System and method for trapping and measuring a charged particle in a liquid
Reed, Mark A; Krstic, Predrag S; Guan, Weihua; Zhao, Xiongce
2012-10-23
A system and method for trapping a charged particle is disclosed. A time-varying periodic multipole electric potential is generated in a trapping volume. A charged particle under the influence of the multipole electric field is confined to the trapping volume. A three electrode configuration giving rise to a 3D Paul trap and a four planar electrode configuration giving rise to a 2D Paul trap are disclosed.
An investigation of the vortex method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pryor, Jr., Duaine Wright
The vortex method is a numerical scheme for solving the vorticity transport equation. Chorin introduced modern vortex methods. The vortex method is a Lagrangian, grid-free method which has less intrinsic diffusion than many grid schemes. It is adaptive in the sense that elements are needed only where the vorticity is non-zero. Our description of vortex methods begins with the point vortex method of Rosenhead for two-dimensional inviscid flow, and builds upon it to eventually cover the case of three-dimensional slightly viscous flow with boundaries. This section gives an introduction to the fundamentals of the vortex method. This is done in order to give a basic impression of the previous work and its line of development, as well as to develop some notation and concepts which will be used later. The purpose here is not to give a full review of vortex methods or of the contributions made by all the researchers in the field. The excellent review papers in Sethian and Gustafson (chapter 1 by Sethian, chapter 2 by Hald, chapter 3 by Sethian, and chapter 8 by Chorin) provide a solid introduction to vortex methods, including convergence theory, applications in two dimensions, and connections to statistical mechanics and polymers. Much of the information in this review is taken from those chapters, from Chorin and Marsden, and from Batchelor; the chapters are also useful for their extensive bibliographies.
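The point vortex method of Rosenhead advances each vortex with the velocity induced by all the others via the 2D Biot-Savart law. A minimal sketch, assuming forward Euler time stepping (the names and parameters are illustrative):

```python
import math

def velocities(pos, gamma):
    """Biot-Savart velocity induced at each point vortex (2D, inviscid)."""
    vel = []
    for i, (xi, yi) in enumerate(pos):
        ux = uy = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue  # a point vortex induces no velocity on itself
            dx, dy = xi - xj, yi - yj
            r2 = dx * dx + dy * dy
            ux += -gamma[j] * dy / (2.0 * math.pi * r2)
            uy += gamma[j] * dx / (2.0 * math.pi * r2)
        vel.append((ux, uy))
    return vel

def euler_step(pos, gamma, dt):
    return [(x + vx * dt, y + vy * dt)
            for (x, y), (vx, vy) in zip(pos, velocities(pos, gamma))]

# Two equal vortices co-rotate about their centroid, which stays fixed.
pos, gamma = [(-0.5, 0.0), (0.5, 0.0)], [1.0, 1.0]
for _ in range(1000):
    pos = euler_step(pos, gamma, 1e-3)
cx = (pos[0][0] + pos[1][0]) / 2.0
cy = (pos[0][1] + pos[1][1]) / 2.0
sep = math.hypot(pos[0][0] - pos[1][0], pos[0][1] - pos[1][1])
```

The conserved centroid and near-constant separation are standard sanity checks for a point vortex integrator.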
NASA Technical Reports Server (NTRS)
Sehgal, Neelima; Trac, Hy; Acquaviva, Viviana; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe; Battistelli, Elia S.; Bond, J. Richard;
2010-01-01
We present constraints on cosmological parameters based on a sample of Sunyaev-Zel'dovich-selected galaxy clusters detected in a millimeter-wave survey by the Atacama Cosmology Telescope. The cluster sample used in this analysis consists of 9 optically-confirmed high-mass clusters comprising the high-significance end of the total cluster sample identified in 455 square degrees of sky surveyed during 2008 at 148 GHz. We focus on the most massive systems to reduce the degeneracy between unknown cluster astrophysics and cosmology derived from SZ surveys. We describe the scaling relation between cluster mass and SZ signal with a 4-parameter fit. Marginalizing over the values of the parameters in this fit with conservative priors gives (sigma)8 = 0.851 +/- 0.115 and w = -1.14 +/- 0.35 for a spatially-flat wCDM cosmological model with WMAP 7-year priors on cosmological parameters. This gives a modest improvement in statistical uncertainty over WMAP 7-year constraints alone. Fixing the scaling relation between cluster mass and SZ signal to a fiducial relation obtained from numerical simulations and calibrated by X-ray observations, we find (sigma)8 = 0.821 +/- 0.044 and w = -1.05 +/- 0.20. These results are consistent with constraints from WMAP 7 plus baryon acoustic oscillations plus type Ia supernovae, which give (sigma)8 = 0.802 +/- 0.038 and w = -0.98 +/- 0.053. A stacking analysis of the clusters in this sample compared to clusters simulated assuming the fiducial model also shows good agreement. These results suggest that, given the sample of clusters used here, both the astrophysics of massive clusters and the cosmological parameters derived from them are broadly consistent with current models.
Turbulent diffusion with memories and intrinsic shear
NASA Technical Reports Server (NTRS)
Tchen, C. M.
1974-01-01
The first part of the present theory is devoted to the derivation of a Fokker-Planck equation. The eddies smaller than the hydrodynamic scale of the diffusion cloud form a diffusivity, while the inhomogeneous, bigger eddies give rise to a nonuniform migratory drift. This introduces an eddy-induced shear which reflects on the large-scale diffusion. The eddy-induced shear does not require the presence of a permanent wind shear and is intrinsic to the diffusion. Secondly, a transport theory of diffusivity is developed by the method of repeated-cascade and is based upon a relaxation of a chain of memories with decreasing information. The full range of diffusion consists of inertia, composite, and shear subranges, for which variance and eddy diffusivities are predicted. The coefficients are evaluated. Comparison with experiments in the upper atmosphere and oceans is made.
NASA Astrophysics Data System (ADS)
Jatmiko, P. C.; Madinah, N. A.; Agustono; Nurhajati, T.
2018-04-01
Earthworms (Lumbricus rubellus) have a high protein content. Adding earthworms to a feed formulation can not only increase the appetite of eel but also increase the nutritional content of the feed. The purpose of this research was to determine the potential of the earthworm L. rubellus in feed formulations to increase growth and retention. The research used a Completely Randomized Design (CRD) consisting of five treatments and four replications. The treatments were different levels of L. rubellus addition in the feed formulation: 0%, 25%, 50%, 75%, and 100%. The results showed significant differences in the growth and retention of eel over the 21-day maintenance period; the best result was obtained with the 100% L. rubellus addition.
Effect of Interaction on the Majorana Zero Modes in the Kitaev Chain at Half Filling
NASA Astrophysics Data System (ADS)
Li, Zhidan; Han, Qiang
2018-04-01
The one-dimensional interacting Kitaev chain at half filling is studied. The symmetry of the Hamiltonian is examined by dual transformations, and various physical quantities as functions of the fermion-fermion interaction $U$ are calculated systematically using the density matrix renormalization group method. A special value of interaction $U_p$ is revealed in the topological region of the phase diagram. We show that at $U_p$ the ground states are strictly two-fold degenerate even though the chain length is finite, and the zero-energy peak due to the Majorana zero modes is maximally enhanced and exactly localized at the end sites. $U_p$ may be attractive or repulsive depending on other system parameters. We also give a qualitative understanding of the effect of interaction under the self-consistent mean field framework.
Quasi-one dimensional (Q1D) nanostructures: Synthesis, integration and device application
NASA Astrophysics Data System (ADS)
Chien, Chung-Jen
Quasi-one-dimensional (Q1D) nanostructures such as nanotubes and nanowires have been widely regarded as potential building blocks for nanoscale electronic, optoelectronic, and sensing devices. In this work, the content can be divided into three categories: nano-material synthesis and characterization, alignment and integration, and physical properties and applications. The dissertation consists of seven chapters, as follows. Chapter 1 gives an introduction to low-dimensional nano-materials. Chapter 2 explains the mechanisms by which Q1D nanostructures grow. Chapter 3 describes the methods by which we horizontally and vertically align Q1D nanostructures. Chapters 4 and 5 cover electrical and optical device characterization, respectively. Chapter 6 demonstrates the integration of Q1D nanostructures and their device applications. The last chapter discusses future work and the conclusions of the thesis.
PHYSICAL MODEL FOR RECOGNITION TUNNELING
Krstić, Predrag; Ashcroft, Brian; Lindsay, Stuart
2015-01-01
Recognition tunneling (RT) identifies target molecules trapped between tunneling electrodes functionalized with recognition molecules that serve as specific chemical linkages between the metal electrodes and the trapped target molecule. Possible applications include single molecule DNA and protein sequencing. This paper addresses several fundamental aspects of RT by multiscale theory, applying both all-atom and coarse-grained DNA models: (1) We show that the magnitude of the observed currents are consistent with the results of non-equilibrium Green's function calculations carried out on a solvated all-atom model. (2) Brownian fluctuations in hydrogen bond-lengths lead to current spikes that are similar to what is observed experimentally. (3) The frequency characteristics of these fluctuations can be used to identify the trapped molecules with a machine-learning algorithm, giving a theoretical underpinning to this new method of identifying single molecule signals. PMID:25650375
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, A. C.; Guttormsen, M.; Schwengner, R.
The nuclear level density and the γ-ray strength function have been extracted for 89Y, using the Oslo Method on 89Y(p,p'γ)89Y coincidence data. The γ-ray strength function displays a low-energy enhancement consistent with previous observations in this mass region (93-98Mo). Shell-model calculations give support that the observed enhancement is due to strong, low-energy M1 transitions at high excitation energies. The data were further used as input for calculations of the 88Sr(p,γ)89Y and 88Y(n,γ)89Y cross sections with the TALYS reaction code. Lastly, comparison with cross-section data, where available, as well as with values from the BRUSLIB library, shows a satisfying agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azadi, Sam, E-mail: s.azadi@ucl.ac.uk; Cohen, R. E.; Department of Earth- and Environmental Sciences, Ludwig Maximilians Universität, Munich 80333
We studied the low-pressure (0–10 GPa) phase diagram of crystalline benzene using quantum Monte Carlo and density functional theory (DFT) methods. We performed diffusion quantum Monte Carlo (DMC) calculations to obtain accurate static phase diagrams as benchmarks for modern van der Waals density functionals. Using density functional perturbation theory, we computed the phonon contributions to the free energies. Our DFT enthalpy-pressure phase diagrams indicate that the Pbca and P2{sub 1}/c structures are the most stable phases within the studied pressure range. The DMC Gibbs free-energy calculations predict that the room temperature Pbca to P2{sub 1}/c phase transition occurs at 2.1(1) GPa. This prediction is consistent with available experimental results at room temperature. Our DMC calculations give 50.6 ± 0.5 kJ/mol for the crystalline benzene lattice energy.
NASA Astrophysics Data System (ADS)
D'Astous, Y.; Blanchard, M.
1982-05-01
In the past years, the Journal has published a number of articles [1-5] devoted to the introduction of Fourier transform spectroscopy in the undergraduate labs. In most papers, the proposed experimental setup consists of a Michelson interferometer, a light source, a light detector, and a chart recorder. The student uses this setup to record an interferogram which is then Fourier transformed to obtain the spectrogram of the light source. Although attempts have been made to ease the task of performing the required Fourier transform [6], the use of computers and Cooley-Tukey's fast Fourier transform (FFT) algorithm [7] is by far the simplest method to use. However, to be able to use FFT, one has to get a number of samples of the interferogram, a tedious job which should be kept to a minimum. (AIP)
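The FFT step described above is straightforward to sketch: sample the interferogram in optical path difference, transform, and read the line position off the wavenumber axis. The sampling parameters and the source line below are illustrative, not from the article:

```python
import numpy as np

# Synthetic interferogram of a monochromatic source: I(x) = 1 + cos(2*pi*nu*x)
n, dx = 1024, 1e-4           # samples and path-difference step (cm), illustrative
nu_true = 500.0              # wavenumber of a hypothetical line, cm^-1
x = np.arange(n) * dx
interferogram = 1.0 + np.cos(2 * np.pi * nu_true * x)

# Remove the DC level, transform, and locate the spectral peak
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
wavenumbers = np.fft.rfftfreq(n, d=dx)   # axis in cm^-1
nu_recovered = wavenumbers[np.argmax(spectrum)]
```

The recovered wavenumber is accurate to the spectral resolution 1/(n·dx), here about 10 cm^-1, which is why dense sampling of the interferogram matters.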
Learning-assisted theorem proving with millions of lemmas
Kaliszyk, Cezary; Urban, Josef
2015-01-01
Large formal mathematical libraries consist of millions of atomic inference steps that give rise to a corresponding number of proved statements (lemmas). Analogously to the informal mathematical practice, only a tiny fraction of such statements is named and re-used in later proofs by formal mathematicians. In this work, we suggest and implement criteria defining the estimated usefulness of the HOL Light lemmas for proving further theorems. We use these criteria to mine the large inference graph of the lemmas in the HOL Light and Flyspeck libraries, adding up to millions of the best lemmas to the pool of statements that can be re-used in later proofs. We show that in combination with learning-based relevance filtering, such methods significantly strengthen automated theorem proving of new conjectures over large formal mathematical libraries such as Flyspeck. PMID:26525678
Grytsenko, Konstantin; Lozovski, Valeri; Strilchuk, Galyna; Schrader, Sigurd
2012-11-07
Nanocomposite films consisting of gold inclusions in a polytetrafluoroethylene (PTFE) matrix were obtained by thermal vacuum deposition. Annealing the films at different temperatures was used to vary the film morphology. The dependence of the optical properties of the films on their morphology was studied. It was established that the absorption and profile of a nanocomposite film obtained by thermal vacuum deposition can be changed by annealing, since different annealing temperatures lead to different average particle sizes. A method to calculate the optical properties of nanocomposite thin films with inclusions of different sizes was proposed. Thus, comparison of experimental optical spectra with spectra obtained from the simulation enables estimation of the average sizes of inclusions. The calculations give the possibility of understanding morphological changes in the structures.
Larsen, A. C.; Guttormsen, M.; Schwengner, R.; ...
2016-04-21
The nuclear level density and the γ-ray strength function have been extracted for 89Y, using the Oslo Method on 89Y(p,p'γ)89Y coincidence data. The γ-ray strength function displays a low-energy enhancement consistent with previous observations in this mass region (93-98Mo). Shell-model calculations give support that the observed enhancement is due to strong, low-energy M1 transitions at high excitation energies. The data were further used as input for calculations of the 88Sr(p,γ)89Y and 88Y(n,γ)89Y cross sections with the TALYS reaction code. Lastly, comparison with cross-section data, where available, as well as with values from the BRUSLIB library, shows a satisfying agreement.
Aedes aegypti pupal/demographic surveys in southern Mexico: consistency and practicality.
Arredondo-Jiménez, J I; Valdez-Delgado, K M
2006-04-01
In interventions aimed at the control of the immature stages of Aedes aegypti (L.), the principal vector of the dengue viruses, attempts are often made to treat or manage all larval habitats in households. When there are resource-constraints, however, a concentration of effort on the types of container that produce the most pupae may be required. Identification of these 'key' container types requires surveys of the immature stages and particularly - since these give the best estimates of the numbers of adults produced - of the numbers of pupae in local containers. Although there has been no clearly defined or standardized protocol for the sampling of Ae. aegypti pupae for many years, a methodology for 'pupal/demographic' surveys, which may allow the risk of dengue outbreaks in a given setting to be estimated, has been recently described. The consistency and practicality of using such surveys has now been investigated in three cities in the Mexican state of Chiapas, Mexico. Using a combination of 'quadrat'- and transect-sampling methods, 600 houses in each city were each sampled twice. Containers within each study household were searched for pupae and larvae. Although 107,297 containers, belonging to 26 categories, were observed, only 16,032 were found to contain water and 96% and 92% of these 'wet' containers contained no pupae and no third- or fourth-instar larvae, respectively. Although the random 'quadrat' sampling gave similar results to sampling along transects, there were statistically significant differences in the numbers of pupae according to container type and locality. The most important containers for pupal production were found to be large cement wash basins, which were present in almost every household investigated and from which 84% (10,257/12,271) of all pupae were collected. A focus on this class of container could serve as the basis of a targeted intervention strategy. 
When traditional Stegomyia indices were calculated they appeared to be correlated with the assessments of pupal abundance. The methodology for pupal/demographic surveys appears to be practical and to give consistent results, although it remains to be seen if monitoring of pupal productivity can adequately reflect the impact of vector-control interventions.
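Identifying 'key' container types from pupal/demographic survey data amounts to tallying pupae per container category and ranking the categories; a sketch with hypothetical survey records (the counts below are invented, not the Chiapas data):

```python
from collections import Counter

# (container_type, pupae_count) records from a hypothetical survey
records = [("cement wash basin", 120), ("bucket", 8), ("tire", 15),
           ("cement wash basin", 95), ("flower pot", 3), ("bucket", 4)]

pupae_by_type = Counter()
for ctype, count in records:
    pupae_by_type[ctype] += count

total = sum(pupae_by_type.values())
key_type, key_count = pupae_by_type.most_common(1)[0]
share = key_count / total   # fraction of all pupae from the key container type
```

A targeted intervention would then focus on the container types whose cumulative share covers most pupal production.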
Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme
NASA Astrophysics Data System (ADS)
Liu, Xianglin; Wang, Yang; Eisenbach, Markus; Stocks, G. Malcolm
2018-03-01
The Green function plays an essential role in the Korringa-Kohn-Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn-Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.
Audit method suited for DSS in clinical environment.
Vicente, Javier
2015-01-01
This chapter presents a novel online method to audit predictive models using a Bayesian perspective. The auditing model has been specifically designed for Decision Support Systems (DSSs) suited for clinical or research environments. Taking as starting point the working diagnosis supplied by the clinician, this method compares and evaluates the predictive skills of those models able to answer to that diagnosis. The approach consists of calculating the posterior odds of a model through the composition of a prior odds, a static odds, and a dynamic odds. To do so, this method estimates the posterior odds from the cases that the compared models had in common during the design stage and from the cases already viewed by the DSS after deployment at the clinical site. In addition, if an ontology of the classes is available, this method can audit models answering related questions, which offers a reinforcement to the decisions the user already took and gives orientation on further diagnostic steps. The main technical novelty of this approach lies in the design of an audit model adapted to suit the decision workflow of a clinical environment. The audit model allows deciding which classifier best suits each particular case under evaluation and allows the detection of possible misbehaviours due to population differences or data shifts at the clinical site. We show the efficacy of our method for the problem of brain tumor diagnosis with Magnetic Resonance Spectroscopy (MRS).
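The odds composition described above can be sketched directly; the three odds values below are hypothetical inputs, not the chapter's trained quantities:

```python
def posterior_probability(prior_odds, static_odds, dynamic_odds):
    """Compose prior, static, and dynamic odds multiplicatively,
    then convert the posterior odds to a probability."""
    posterior_odds = prior_odds * static_odds * dynamic_odds
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical audit of one model for a given working diagnosis
p = posterior_probability(prior_odds=1.0, static_odds=3.0, dynamic_odds=0.5)
# posterior odds = 1.5, so p = 0.6
```

Comparing this probability across the candidate models answering the same diagnosis then selects the classifier that best suits the case.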
Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xianglin; Wang, Yang; Eisenbach, Markus
The Green function plays an essential role in the Korringa–Kohn–Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn–Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.
Threshold selection for classification of MR brain images by clustering method
NASA Astrophysics Data System (ADS)
Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita
2015-12-01
Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation of the selection of thresholds. Our method does not use the well-known method for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (or the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (or the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values were obtained: T = 80 for PD images and T = 30 for T2w images. Each threshold clearly separates the clusters belonging to the studied groups, healthy subjects and multiple sclerosis patients.
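The per-threshold white pixel count that feeds the clustering step can be sketched as follows (the tiny image and the thresholds T = 30 and T = 80 are illustrative):

```python
def white_pixel_count(image, threshold):
    """Binarize a grey-level image (list of rows) at the given threshold
    and count the foreground (white) pixels."""
    return sum(1 for row in image for px in row if px >= threshold)

# A hypothetical 3x3 grey-level image
image = [[10, 40, 90],
         [25, 31, 200],
         [5, 29, 30]]

# One feature value per candidate threshold; these counts are what the
# dendrogram-based clustering would operate on.
features = {t: white_pixel_count(image, t) for t in (30, 80)}
```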
Fully-relativistic full-potential multiple scattering theory: A pathology-free scheme
Liu, Xianglin; Wang, Yang; Eisenbach, Markus; ...
2017-10-28
The Green function plays an essential role in the Korringa–Kohn–Rostoker (KKR) multiple scattering method. In practice, it is constructed from the regular and irregular solutions of the local Kohn–Sham equation, and robust methods exist for spherical potentials. However, when applied to a non-spherical potential, numerical errors from the irregular solutions give rise to pathological behaviors of the charge density at small radius. Here we present a full-potential implementation of the fully-relativistic KKR method to perform ab initio self-consistent calculations by directly solving the Dirac differential equations using the generalized variable phase (sine and cosine matrices) formalism of Liu et al. (2016). The pathology around the origin is completely eliminated by carrying out the energy integration of the single-site Green function along the real axis. By using an efficient pole-searching technique to identify the zeros of the well-behaved Jost matrices, we demonstrate that this scheme is numerically stable and computationally efficient, with speed comparable to the conventional contour energy integration method, while free of the pathology problem of the charge density. As an application, this method is utilized to investigate the crystal structures of polonium and their bulk properties, which is challenging for a conventional real-energy scheme. The noble metals are also calculated, both as a test of our method and to study the relativistic effects.
Searching for exoplanets using artificial intelligence
NASA Astrophysics Data System (ADS)
Pearson, Kyle A.; Palafox, Leon; Griffith, Caitlin A.
2018-02-01
In the last decade, over a million stars were monitored to detect transiting planets. Manual interpretation of potential exoplanet candidates is labor intensive and subject to human error, the results of which are difficult to quantify. Here we present a new method of detecting exoplanet candidates in large planetary search projects which, unlike current methods, uses a neural network. Neural networks, also called "deep learning" or "deep nets," are designed to give a computer perception of a specific problem by training it to recognize patterns. Unlike past transit detection algorithms, deep nets learn to recognize planet features instead of relying on hand-coded metrics that humans perceive as the most representative. Our convolutional neural network is capable of detecting Earth-like exoplanets in noisy time-series data with greater accuracy than a least-squares method. Deep nets are highly generalizable, allowing data from different time series to be evaluated after interpolation without compromising performance. As validated by our deep-net analysis of Kepler light curves, we detect periodic transits consistent with the true period without any model fitting. Our study indicates that machine learning will facilitate the characterization of exoplanets in future analyses of large astronomy data sets.
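The convolution operation at the heart of such a network — sliding a kernel over a light curve to score transit-like dips — can be sketched as follows (pure Python on synthetic data; this is a hand-built matched filter for illustration, not the authors' trained deep net):

```python
def convolve_valid(signal, kernel):
    """1-D 'valid' sliding-window correlation: the core operation a
    convolutional layer applies to a light curve."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def detect_transit(flux, width=3, depth=1.0):
    """Score each position with a box-shaped dip kernel and return the
    index where the transit-like response is strongest."""
    mean = sum(flux) / len(flux)
    centered = [f - mean for f in flux]
    kernel = [-depth] * width          # a dip correlates with a negative box
    response = convolve_valid(centered, kernel)
    return max(range(len(response)), key=lambda i: response[i])

# Synthetic light curve: flat flux of 1.0 with a shallow dip at indices 10-12.
flux = [1.0] * 20
for i in (10, 11, 12):
    flux[i] = 0.95
start = detect_transit(flux)
```

A trained CNN learns many such kernels (and nonlinear combinations of them) from labeled examples rather than having the dip shape hand-coded.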
Fuzzy MCDM Technique for Planning the Environment Watershed
NASA Astrophysics Data System (ADS)
Chen, Yi-Chun; Lien, Hui-Pang; Tzeng, Gwo-Hshiung; Yang, Lung-Shih; Yen, Leon
In the real world, decision-making problems are vague and uncertain in a number of ways. Most criteria, such as feasibility, have interdependent and interactive features, so they cannot be evaluated by conventional measurement methods. Thus, to approximate the human subjective evaluation process, it is more suitable to apply a fuzzy method to the environment-watershed planning topic. This paper describes the design of a fuzzy decision support system using a multi-criteria analysis approach for selecting the best plan alternatives or strategies for an environment watershed. The Fuzzy Analytic Hierarchy Process (FAHP) method is used to determine the preference weightings of criteria from the decision makers' subjective perceptions. A questionnaire was used to collect judgments from three related groups comprising fifteen experts. Subjectivity and vagueness in the criteria and alternatives are dealt with during the selection process and simulation by using fuzzy numbers with linguistic terms. Incorporating the decision makers' attitudes toward preference, the overall performance value of each alternative can be obtained based on the concept of Fuzzy Multiple Criteria Decision Making (FMCDM). An example consisting of five alternatives, solicited from environment-watershed planning works in Taiwan, is illustrated to demonstrate the effectiveness and usefulness of the proposed approach.
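The FAHP weighting step can be sketched with Buckley's fuzzy geometric-mean method over triangular fuzzy numbers (a minimal illustration; the pairwise comparison matrix and the centroid defuzzification below are assumptions, not the paper's questionnaire data):

```python
# Triangular fuzzy numbers are (l, m, u) tuples with l <= m <= u.

def tfn_mul(a, b):
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_geomean(row):
    """Fuzzy geometric mean of one row of pairwise comparisons."""
    n = len(row)
    prod = (1.0, 1.0, 1.0)
    for x in row:
        prod = tfn_mul(prod, x)
    return tuple(p ** (1.0 / n) for p in prod)

def fahp_weights(matrix):
    """Crisp criterion weights: fuzzy geometric means, fuzzy-normalized,
    then defuzzified by the centroid (l + m + u) / 3 and renormalized."""
    gms = [tfn_geomean(row) for row in matrix]
    total = tuple(sum(g[i] for g in gms) for i in range(3))
    # Fuzzy division flips the bounds: (l/total_u, m/total_m, u/total_l).
    fuzzy_w = [(g[0] / total[2], g[1] / total[1], g[2] / total[0]) for g in gms]
    crisp = [sum(w) / 3.0 for w in fuzzy_w]
    s = sum(crisp)
    return [c / s for c in crisp]

# Two criteria where the first is judged "weakly more important" (~2x).
matrix = [
    [(1, 1, 1), (1, 2, 3)],
    [(1 / 3, 1 / 2, 1), (1, 1, 1)],
]
w = fahp_weights(matrix)
```

With the fuzzy judgment (1, 2, 3), the first criterion receives roughly twice the weight of the second, mirroring the linguistic assessment.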
Robust curb detection with fusion of 3D-Lidar and camera data.
Tan, Jun; Li, Jian; An, Xiangjing; He, Hangen
2014-05-21
Curb detection is an essential component of Autonomous Land Vehicles (ALV), and is especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method that exploits 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns based on the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov chain to model the consistency of curb points, exploiting the curb's continuity, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs, and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs with strong robustness at real-time speed for both static and dynamic scenes.
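The path-linking step — choosing one curb candidate per image row so that detection cost plus a smoothness penalty is minimized — can be sketched as a Viterbi-style dynamic program (the candidate columns, costs, and penalty weight below are hypothetical):

```python
def link_curb_points(rows, jump_penalty=1.0):
    """rows: list of per-row candidate lists, each candidate a (column, cost)
    pair. Returns the minimum-cost column sequence, penalizing horizontal
    jumps between consecutive rows (Viterbi over rows)."""
    # best[i][k] = min cost of a path ending at candidate k in row i
    best = [[c for _, c in rows[0]]]
    back = []
    for i in range(1, len(rows)):
        cur, ptr = [], []
        for col, cost in rows[i]:
            cands = [best[i - 1][k] + jump_penalty * abs(col - rows[i - 1][k][0])
                     for k in range(len(rows[i - 1]))]
            k_best = min(range(len(cands)), key=cands.__getitem__)
            cur.append(cost + cands[k_best])
            ptr.append(k_best)
        best.append(cur)
        back.append(ptr)
    # Backtrack from the cheapest end state.
    k = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [rows[-1][k][0]]
    for i in range(len(rows) - 1, 0, -1):
        k = back[i - 1][k]
        path.append(rows[i - 1][k][0])
    return path[::-1]

rows = [
    [(5, 0.1), (20, 0.0)],   # row 0: two candidate curb columns
    [(6, 0.0), (19, 0.5)],   # row 1
    [(7, 0.0), (30, 0.0)],   # row 2
]
path = link_curb_points(rows, jump_penalty=0.1)
```

The smooth 5-6-7 track wins over the cheaper-looking but discontinuous candidates, which is exactly the continuity prior the Markov-chain model encodes.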
Abstract Interpreters for Free
NASA Astrophysics Data System (ADS)
Might, Matthew
In small-step abstract interpretations, the concrete and abstract semantics bear an uncanny resemblance. In this work, we present an analysis-design methodology that both explains and exploits that resemblance. Specifically, we present a two-step method to convert a small-step concrete semantics into a family of sound, computable abstract interpretations. The first step re-factors the concrete state-space to eliminate recursive structure; this refactoring of the state-space simultaneously determines a store-passing-style transformation on the underlying concrete semantics. The second step uses inference rules to generate an abstract state-space and a Galois connection simultaneously. The Galois connection allows the calculation of the "optimal" abstract interpretation. The two-step process is unambiguous, but nondeterministic: at each step, analysis designers face choices. Some of these choices ultimately influence properties such as flow-, field- and context-sensitivity. Thus, under the method, we can give the emergence of these properties a graph-theoretic characterization. To illustrate the method, we systematically abstract the continuation-passing style lambda calculus to arrive at two distinct families of analyses. The first is the well-known k-CFA family of analyses. The second consists of novel "environment-centric" abstract interpretations, none of which appear in the literature on static analysis of higher-order programs.
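The soundness obligation behind any such Galois connection can be illustrated with a textbook sign-domain example (far simpler than the paper's state-space construction, but it shows the key requirement: the abstract operation must over-approximate the concrete one):

```python
def alpha(n):
    """Abstraction: map a concrete integer to its sign."""
    return "zero" if n == 0 else ("pos" if n > 0 else "neg")

def abs_add(a, b):
    """Abstract addition on signs; 'top' means 'could be any sign'."""
    if a == "zero":
        return b
    if b == "zero":
        return a
    if a == b:
        return a          # pos + pos = pos, neg + neg = neg
    return "top"          # pos + neg could land anywhere

def sound(x, y):
    """Soundness: abs_add(alpha(x), alpha(y)) covers alpha(x + y)."""
    out = abs_add(alpha(x), alpha(y))
    return out == "top" or out == alpha(x + y)
```

In the paper's method this obligation is discharged by construction, since the abstract state-space and the Galois connection are generated together by the inference rules.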
Tickling, a Technique for Inducing Positive Affect When Handling Rats.
Cloutier, Sylvie; LaFollette, Megan R; Gaskill, Brianna N; Panksepp, Jaak; Newberry, Ruth C
2018-05-08
Handling small animals such as rats can lead to several adverse effects. These include fear of humans, resistance to handling, increased injury risk for both the animals and the hands of their handlers, decreased animal welfare, and less valid research data. To minimize negative effects on experimental results and human-animal relationships, research animals are often habituated to being handled. However, habituation methods are highly variable and often of limited effectiveness. More effectively, humans can mimic aspects of the animals' playful rough-and-tumble behavior during handling. When applied to laboratory rats in a systematic manner, this playful handling, referred to as tickling, consistently gives rise to positive behavioral responses. This article provides a detailed description of a standardized rat tickling technique. This method can contribute to future investigations into positive affective states in animals, make it easier to handle rats for common husbandry activities such as cage changing or medical/research procedures such as injection, and be implemented as a source of social enrichment. It is concluded that this method can be used to efficiently and practicably reduce rats' fearfulness of humans and improve their welfare, as well as reliably model positive affective states.
ERIC Educational Resources Information Center
McQuillan, Kristin
2009-01-01
Creating research-based expectations and education strategies for all teachers to implement consistently are the beginning steps in giving all students access to standards-based curriculum and in creating readers, writers, and content learners. Reading aloud is one research-based practice that enhances achievement for all students, whether they…
The Benefits of Air and Water Pollution Control: A Review and Synthesis of Recent Estimates (1979)
Report provides a survey and critical review of the existing literature (by late 1970s) giving estimates of national benefits or damages, adopting a common framework to provide consistent estimates of air and water pollution benefits.
78 FR 18376 - Promotional Rates for Global Express Guaranteed Service
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... POSTAL SERVICE Promotional Rates for Global Express Guaranteed Service AGENCY: Postal Service\\TM\\. ACTION: Notice of Promotional Rates. SUMMARY: The Postal Service gives notice of promotional rates for Global Express Guaranteed[supreg] (GXG[supreg]) service consistent with Governors' Decision No. 12-02...
ERIC Educational Resources Information Center
SOLA, DONALD F.
WRITTEN TO ACCOMPANY THE SPOKEN CUZCO QUECHUA MATERIALS, THIS READER CONSISTS OF SHORT SELECTIONS ACTUALLY RECORDED IN THE FIELD AND REPRESENTING SEVERAL SUBDIALECTS SPOKEN IN RURAL SECTIONS OF THE DEPARTMENT OF CUZCO, PERU. INCLUDED ARE DIALOGS, STORIES, SONGS, CULTURAL SELECTIONS, AND INTERVIEWS. THE FORMAT GIVES THE CUZCO QUECHUA DIALECT AND…
SITE-SPECIFIC PROTOCOL FOR MEASURING SOIL RADON POTENTIALS FOR FLORIDA HOUSES
The report describes a protocol for site-specific measurement of radon potentials for Florida houses that is consistent with existing residential radon protection maps. The protocol gives further guidance on the possible need for radon-protective house construction features. In a...
A model study of tunneling conductance spectra of ferromagnetically ordered manganites
NASA Astrophysics Data System (ADS)
Panda, Saswati; Kar, J. K.; Rout, G. C.
2018-02-01
We report here the interplay of ferromagnetism (FM) and charge density wave (CDW) order in manganese oxide systems through a study of tunneling conductance spectra. The model Hamiltonian includes a strong Heisenberg coupling among the core t2g band electrons, treated within the mean-field approximation, which gives rise to ferromagnetism. Ferromagnetism is induced in the itinerant eg electrons through a Kubo-Ohata-type double exchange (DE) interaction between the t2g and eg electrons. The charge ordering (CO) present in the eg band, which gives rise to the CDW interaction, is considered as an additional mechanism to explain the colossal magnetoresistance (CMR) property of manganites. The magnetic and CDW order parameters are calculated using Zubarev's Green's function technique and solved self-consistently and numerically. The eg electron density of states (DOS), calculated from the imaginary part of the Green's function, explains the experimentally observed tunneling conductance spectra. The DOS graph exhibits a parabolic gap near the Fermi energy, as observed in tunneling conductance experiments.
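The self-consistent solution of such order parameters can be illustrated with the textbook mean-field Ising magnetization m = tanh(J m / T) (a deliberately minimal stand-in for the paper's coupled magnetic and CDW equations, which are solved by the same fixed-point iteration idea):

```python
import math

def self_consistent_m(J, T, m0=0.5, tol=1e-10, max_iter=10000):
    """Iterate the mean-field equation m = tanh(J * m / T) until the
    magnetization (order parameter) stops changing."""
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(J * m / T)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Below the critical temperature (T < J) a nonzero magnetization appears;
# above it the iteration collapses to the disordered solution m = 0.
m_low = self_consistent_m(J=1.0, T=0.5)
m_high = self_consistent_m(J=1.0, T=2.0)
```

In the paper the analogous loop updates the magnetic and CDW order parameters together until both converge, since each enters the other's Green's function.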
General Mechanism of Two-State Protein Folding Kinetics
Rollins, Geoffrey C.; Dill, Ken A.
2016-01-01
We describe here a general model of the kinetic mechanism of protein folding. In the Foldon Funnel Model, proteins fold in units of secondary structures, which form sequentially along the folding pathway, stabilized by tertiary interactions. The model predicts that the free energy landscape has a volcano shape rather than a simple funnel, that folding is two-state (single-exponential) when secondary structures are intrinsically unstable, and that each structure along the folding path is a transition state for the previous structure. It shows how sequential pathways are consistent with multiple stochastic routes on funnel landscapes, and it gives good agreement with the nine-order-of-magnitude dependence of folding rates on protein size for a set of 93 proteins, while remaining consistent with the near independence of the folding equilibrium constant from size. The model also gives estimates of folding rates of proteomes, leading to a median folding time in Escherichia coli of about 5 s. PMID:25056406
Parturition date for a given female is highly repeatable within five roe deer populations
Plard, Floriane; Gaillard, Jean-Michel; Bonenfant, Christophe; Hewison, A. J. Mark; Delorme, Daniel; Cargnelutti, Bruno; Kjellander, Petter; Nilsen, Erlend B.; Coulson, Tim
2013-01-01
Births are highly synchronized among females in many mammal populations in temperate areas. Although laying date for a given female is also repeatable within populations of birds, limited evidence suggests low repeatability of parturition date for individual females in mammals, and between-population variability in repeatability has never, to our knowledge, been assessed. We quantified the repeatability of parturition date for individual females in five populations of roe deer, which we found to vary between 0.54 and 0.93. Each year, some females gave birth consistently earlier in the year, whereas others gave birth consistently later. In addition, all females followed the same lifetime trajectory for parturition date, giving birth progressively earlier as they aged. Giving birth early should allow mothers to increase offspring survival, although few females managed to do so. The marked repeatability of parturition date in roe deer females is the highest ever reported for a mammal, suggesting low phenotypic plasticity in this trait. PMID:23234861
Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan
2015-01-01
To develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray-scale transformation and morphological operations. It has been tested on 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor; the average values of the Jaccard and Dice coefficients are 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations. It has low computational complexity but gives competitive or better results than those popular methods.
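The two ingredients of the pipeline — an intensity threshold followed by morphological operations — can be sketched on a synthetic 2-D slice (real skull stripping operates on 3-D MRI volumes; the 7x7 image, threshold, and cross-shaped structuring element here are hypothetical):

```python
def threshold(img, t):
    return [[1 if px >= t else 0 for px in row] for row in img]

def erode(mask):
    """4-neighbour binary erosion: keep a pixel only if it and all four
    neighbours are set (image-border pixels are always removed)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (mask[y][x] and mask[y - 1][x] and mask[y + 1][x]
                    and mask[y][x - 1] and mask[y][x + 1]):
                out[y][x] = 1
    return out

def dilate(mask):
    """4-neighbour binary dilation."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                for dy, dx in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

def strip(img, t):
    """Threshold, then open (erode + dilate) to remove thin bright rims."""
    return dilate(erode(threshold(img, t)))

# 7x7 synthetic slice: a bright 1-pixel "skull" ring at the border,
# a dark CSF-like gap, and a bright central "brain" blob.
img = [[100 if (y in (0, 6) or x in (0, 6)) else
        (0 if (y in (1, 5) or x in (1, 5)) else 120)
        for x in range(7)] for y in range(7)]
mask = strip(img, 90)
```

The opening removes the 1-pixel-thin skull ring while retaining the blob interior; production pipelines add a largest-connected-component step and work with 3-D structuring elements.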
A Balanced Diaphragm Type of Maximum Cylinder Pressure Indicator
NASA Technical Reports Server (NTRS)
Spanogle, J A; Collins, John H , Jr
1930-01-01
A balanced diaphragm type of maximum cylinder pressure indicator was designed to give results consistent with engine operating conditions. The apparatus consists of a pressure element, a source of controlled high pressure and a neon lamp circuit. The pressure element, which is very compact, permits location of the diaphragm within 1/8 inch of the combustion chamber walls without water cooling. The neon lamp circuit used for indicating contact between the diaphragm and support facilitates the use of the apparatus with multicylinder engines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, G.A.; Lake, L.W.; Sepehrnoori, K.
1987-07-01
This report consists of three parts. Part A describes the development of our chemical flood simulator UTCHEM during the past year, simulation studies, and physical property modelling and experiments. Part B reports on the optimization and vectorization of UTCHEM on our Cray supercomputer to speed it up. Part C describes our use of UTCHEM to investigate the use of tracers for interwell reservoir tests. Part A of this Annual Report consists of five sections. In the first section, we give a general description of the simulator and recent changes to it, along with a test case for a slightly compressible fluid. In the second section, we describe the major changes that were needed to add gel and alkaline reactions and give preliminary simulation results for these processes. In the third section, comparisons with a surfactant pilot field test are given. In the fourth section, process scale-up and design simulations are given, along with our recent mesh refinement results. In the fifth section, experimental results and associated physical property modelling studies are reported. Part B gives our results on the speedup of UTCHEM on a Cray supercomputer. Depending on the size of the problem, this speedup factor was at least tenfold and resulted from a combination of a faster solver, vectorization, and code optimization. Part C describes our use of UTCHEM for field tracer studies and gives the results of a comparison with field tracer data on the same field (Big Muddy) as was simulated and compared with the surfactant pilot reported in Section 3 of Part A. 120 figs., 37 tabs.
EAS fluctuation approach to primary mass composition investigation
NASA Technical Reports Server (NTRS)
Stamenov, J. N.; Janminchev, V. D.
1985-01-01
Analysis of the shapes of muon and electron fluctuation distributions by a statistical method of inverse problem solution makes it possible to obtain the relative contributions of the five main primary nuclei groups. The method is model-independent for a broad class of interaction models and can give good results for observation levels not too far from the shower development maximum and for the selection of showers with fixed sizes and zenith angles no larger than 30 deg.