Sample records for analytically correct relationship

  1. VOLATILE ORGANIC COMPOUND DETERMINATIONS USING SURROGATE-BASED CORRECTION FOR METHOD AND MATRIX EFFECTS

    EPA Science Inventory

    The principal properties related to analyte recovery in a vacuum distillate are boiling point and relative volatility. The basis for selecting compounds to measure the relationship between these properties and recovery for a vacuum distillation is presented. Surrogates are incorp...

  2. The analyst's participation in the analytic process.

    PubMed

    Levine, H B

    1994-08-01

    The analyst's moment-to-moment participation in the analytic process is inevitably and simultaneously determined by at least three sets of considerations. These are: (1) the application of proper analytic technique; (2) the analyst's personally-motivated responses to the patient and/or the analysis; (3) the analyst's use of him or herself to actualise, via fantasy, feeling or action, some aspect of the patient's conflicts, fantasies or internal object relationships. This formulation has relevance to our view of actualisation and enactment in the analytic process and to our understanding of a series of related issues that are fundamental to our theory of technique. These include the dialectical relationships that exist between insight and action, interpretation and suggestion, empathy and countertransference, and abstinence and gratification. In raising these issues, I do not seek to encourage or endorse wild analysis, the attempt to supply patients with 'corrective emotional experiences' or a rationalisation for acting out one's countertransferences. Rather, it is my hope that if we can better appreciate and describe these important dimensions of the analytic encounter, we can be better prepared to recognise, understand and interpret the continual streams of actualisation and enactment that are embedded in the analytic process. A deeper appreciation of the nature of the analyst's participation in the analytic process and the dimensions of the analytic process to which that participation gives rise may offer us a limited, although important, safeguard against analytic impasse.

  3. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  4. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. This technique is based on standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
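
    As an illustration of the standard-additions idea summarized above, the sketch below searches for the candidate endogenous concentration that makes the log-log signal-versus-total-concentration relationship most nearly linear. It is a minimal reading of the numerical criterion only; the spiking levels, signal values, and grid of candidates are hypothetical, not taken from the study.

    ```python
    import numpy as np

    def estimate_endogenous(spikes, signals, candidates):
        """Return the candidate endogenous concentration that maximizes
        linearity (R^2) of log(signal) vs. log(candidate + spike)."""
        best_c0, best_r2 = None, -np.inf
        log_s = np.log10(signals)
        for c0 in candidates:
            log_total = np.log10(c0 + spikes)
            slope, intercept = np.polyfit(log_total, log_s, 1)
            fit = slope * log_total + intercept
            ss_res = np.sum((log_s - fit) ** 2)
            ss_tot = np.sum((log_s - log_s.mean()) ** 2)
            r2 = 1 - ss_res / ss_tot
            if r2 > best_r2:
                best_c0, best_r2 = c0, r2
        return best_c0, best_r2

    # Hypothetical data: four wells spiked with known analyte amounts (ng/mL)
    spikes = np.array([0.0, 1.0, 2.0, 4.0])
    signals = np.array([0.42, 0.61, 0.78, 1.05])    # illustrative responses
    candidates = np.linspace(0.1, 10.0, 500)        # trial endogenous levels
    c0, r2 = estimate_endogenous(spikes, signals, candidates)
    print(f"estimated endogenous concentration ~ {c0:.2f} ng/mL (R2 = {r2:.4f})")
    ```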

  5. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.

  6. Tests of multiplicative models in psychology: a case study using the unified theory of implicit attitudes, stereotypes, self-esteem, and self-concept.

    PubMed

    Blanton, Hart; Jaccard, James

    2006-01-01

    Theories that posit multiplicative relationships between variables are common in psychology. A. G. Greenwald et al. recently presented a theory that explicated relationships between group identification, group attitudes, and self-esteem. Their theory posits a multiplicative relationship between concepts when predicting a criterion variable. Greenwald et al. suggested analytic strategies to test their multiplicative model that researchers might assume are appropriate for testing multiplicative models more generally. The theory and analytic strategies of Greenwald et al. are used as a case study to show the strong measurement assumptions that underlie certain tests of multiplicative models. It is shown that the approach used by Greenwald et al. can lead to declarations of theoretical support when the theory is wrong as well as rejection of the theory when the theory is correct. A simple strategy for testing multiplicative models that makes weaker measurement assumptions than the strategy proposed by Greenwald et al. is suggested and discussed.
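
    For context, a multiplicative model is usually operationalized as a regression with a product term. The sketch below shows only that generic moderated-regression form with simulated data; it is not the specific analytic strategy proposed by Greenwald et al. or the alternative advocated by the authors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    ident = rng.normal(size=n)        # group identification (simulated)
    attitude = rng.normal(size=n)     # group attitude (simulated)
    # hypothetical criterion with a true interaction effect
    esteem = 0.3 * ident + 0.2 * attitude + 0.4 * ident * attitude \
             + rng.normal(scale=0.5, size=n)

    # design matrix: intercept, main effects, product term
    X = np.column_stack([np.ones(n), ident, attitude, ident * attitude])
    beta, *_ = np.linalg.lstsq(X, esteem, rcond=None)
    print("intercept, b_ident, b_attitude, b_interaction =", np.round(beta, 3))
    ```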

  7. Criticisms biologically unwarranted and analytically irrelevant: Reply to Rominger et al.

    USGS Publications Warehouse

    Bender, L.C.; Weisenberger, M.E.

    2009-01-01

    The criticisms of Rominger et al. (2008) of our retrospective analysis of desert bighorn sheep (DBS; Ovis canadensis mexicana) dynamics in the San Andres Mountains of south-central New Mexico, USA, contained many biological errors and analytical oversights. Herein, we show that Rominger et al. (2008) 1) overstated both magnitude and potential effect of predator removal; 2) incorrectly claimed that our total precipitation (TP) model did not fit the data when TP correctly classed >=66% of subsequent population increases and declines (P <= 0.063); 3) presented a necessary prerequisite of the exponential model (serial correlation between N(t) and N(t+1)) as the key relationship in the DBS data, when it merely reflected that DBS are strongly K-selected and was irrelevant to our hypothesis tests specific to factors affecting the instantaneous rate of population increase (r); 4) greatly oversimplified relationships among precipitation, arid environments, and DBS; and 5) advocated a time for collection of lamb/female (L/F) ratio data that was unrelated to any meaningful period in the biological year of DBS and consequently presented L/F ratio data unrelated to observed dynamics of DBS. In contrast, the L/F ratios used in Bender and Weisenberger (2005) correctly predicted annual changes and were correlated with long-term population rates of change.

  8. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens' model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens' model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the LET_d distributions of all protons and deuterons to that of primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated into a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens' model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.

  9. The limited relevance of analytical ethics to the problems of bioethics.

    PubMed

    Holmes, R L

    1990-04-01

    Philosophical ethics comprises metaethics, normative ethics and applied ethics. These have characteristically received analytic treatment by twentieth-century Anglo-American philosophy. But there has been disagreement over their interrelationship to one another and the relationship of analytical ethics to substantive morality--the making of moral judgments. I contend that the expertise philosophers have in either theoretical or applied ethics does not equip them to make sounder moral judgments on the problems of bioethics than nonphilosophers. One cannot "apply" theories like Kantianism or consequentialism to get solutions to practical moral problems unless one knows which theory is correct, and that is a metaethical question over which there is no consensus. On the other hand, to presume to be able to reach solutions through neutral analysis of problems is unavoidably to beg controversial theoretical issues in the process. Thus, while analytical ethics can play an important clarificatory role in bioethics, it can neither provide, nor substitute for, moral wisdom.

  10. Evaluation of the effect of a health education campaign of HIV by using an analytical hierarchy process method.

    PubMed

    Tan, Xiaodong; Lin, Jianyan; Wang, Fengjie; Luo, Hong; Luo, Lan; Wu, Lei

    2007-09-01

    This study was designed to understand the status of HIV/AIDS knowledge, attitude and practice (KAP) among different populations and to provide scientific evidence for further health education. Three rounds of questionnaires were administered among service industry workers who were selected through stratified cluster sampling. Study subjects included hotel attendants, employees of beauty parlors and service workers of the transportation industry. Data were analyzed using the analytical hierarchy process. All demonstrated high KAP overall. Synthetic scoring indexes of the three surveys were above 75%. However, the correct response rates on questions about whether mosquito bites can transmit HIV/AIDS and about the relationship between STDs and HIV were unsatisfactory (lower than expected), and respondents' attitudes towards people living with HIV and AIDS need to be improved. Moreover, the effect of health education on these groups was unclear. In conclusion, the analytical hierarchy process is a valid method for estimating the overall effect of HIV/AIDS health education. Although the present status of HIV/AIDS KAP among the service industry workers was relatively good, greater efforts should be made to improve their HIV transmission knowledge, attitudes and understanding of the relationship between STDs and HIV.
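
    The analytical hierarchy process itself comes down to deriving priority weights from a pairwise comparison matrix via its principal eigenvector. The sketch below shows that computation only; the 3x3 comparison matrix for three hypothetical criteria (knowledge, attitude, practice) is illustrative and not taken from the study.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three criteria
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                     # normalized priority weights

    # Consistency ratio (random index 0.58 for a 3x3 matrix)
    ci = (eigvals[k].real - len(A)) / (len(A) - 1)
    cr = ci / 0.58
    print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
    ```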

  11. Enactment controversies: a critical review of current debates.

    PubMed

    Ivey, Gavin

    2008-02-01

    This critical review of the current disputes concerning countertransference enactment systematically outlines the various issues and the perspectives adopted by the relevant psychoanalytic authors. In this light, the 'common ground' hypothesis concerning the unifying influence of contemporary countertransference theory is challenged. While the existence of enactments, minimally defined as the analyst's inadvertent actualization of the patient's transference fantasies, is widely accepted, controversies abound regarding their specific scope, nature, prevalence, relationship to countertransference experience, impact on the analytic process, the role played by the analyst's subjectivity, and the correct handling of enactments. Rather than taking a stand based on ideological allegiance to any particular psychoanalytic school or philosophical position, the author argues that the relative merits of contending perspectives are best evaluated with reference to close process scrutiny of the context, manifestation and impact of specific enactments on patients' intrapsychic functioning and the analytic relationship. A detailed account of an interpretative enactment provides a context for the author's position on these debates.

  12. Shell Corrections Stabilizing Superheavy Nuclei and Semi-spheroidal Atomic Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poenaru, Dorin N.

    2008-01-24

    The macroscopic-microscopic method is used to illustrate the shell effect stabilizing superheavy nuclei and to study the stability of semi-spheroidal clusters deposited on planar surfaces. The alpha decay of superheavy nuclei is calculated using three models: the analytical superasymmetric fission model, the universal curve, and the semiempirical formula taking into account the shell effects. Analytical relationships are obtained for the energy levels of the new semi-spheroidal harmonic oscillator (SSHO) single-particle model and for the surface and curvature energies of the semi-spheroidal clusters. The maximum degeneracy of the SSHO is reached at a super-deformed prolate shape for which the minimum of the liquid drop model energy is also attained.

  13. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement dispersion (statistical) of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range based merit function ωm which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves and not by two linearly correlated quantities, which is the usual interpretation of these graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale dependent, parametric curve effect.
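
    A minimal sketch of the least-squares internal-standardization idea mentioned above, using short hypothetical ratio sequences: the measured analyte ratio is regressed on the simultaneously measured internal-standard ratio, and each analyte measurement is corrected back to a common reference value of the internal-standard ratio. The numbers and the simple linear form are illustrative only.

    ```python
    import numpy as np

    # Hypothetical drifting isotope-ratio sequences over a measurement session
    r_analyte = np.array([2.4301, 2.4307, 2.4312, 2.4318, 2.4325])   # analyte ratio
    r_internal = np.array([1.0801, 1.0804, 1.0806, 1.0809, 1.0812])  # internal-standard ratio

    # Least-squares linear relationship between the two measured ratios
    slope, intercept = np.polyfit(r_internal, r_analyte, 1)

    # Correct each analyte ratio to a common reference internal-standard ratio
    r_internal_ref = r_internal.mean()
    r_corrected = r_analyte - slope * (r_internal - r_internal_ref)

    print("raw spread:      ", r_analyte.max() - r_analyte.min())
    print("corrected spread:", r_corrected.max() - r_corrected.min())
    ```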

  14. Narcissism and Self-Insight: A Review and Meta-Analysis of Narcissists' Self-Enhancement Tendencies.

    PubMed

    Grijalva, Emily; Zhang, Luyao

    2016-01-01

    The current article reviews the narcissism-self-enhancement literature using a multilevel meta-analytic technique. Specifically, we focus on self-insight self-enhancement (i.e., whether narcissists perceive themselves more positively than they are perceived by others); thus, we only include studies that compare narcissists' self-reports to observer reports or objective measures. Results from 171 correlations reported in 36 empirical studies (N = 6,423) revealed that the narcissism-self-enhancement relationship corrected for unreliability in narcissism was .21 (95% confidence interval [CI] = [.17, .25]), and that narcissists tend to self-enhance their agentic characteristics more than their communal characteristics. The average corrected relationship between narcissism and self-enhancement for agentic characteristics was .29 (95% CI = [.25, .33]), whereas for communal characteristics it was .05 (95% CI = [-.01, .10]). In addition, we individually summarized narcissists' self-enhancement for 10 different constructs (i.e., the Big Five, task performance, intelligence, leadership, attractiveness, and likeability). © 2015 by the Society for Personality and Social Psychology, Inc.
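
    The phrase "corrected for unreliability" presumably refers to the standard psychometric disattenuation of a correlation. As a reminder of that arithmetic, the snippet below divides an observed correlation by the square root of the measure's reliability; the observed r and reliability values are illustrative, not the meta-analytic estimates.

    ```python
    import math

    def correct_for_attenuation(r_observed, reliability_x):
        """Disattenuate a correlation for measurement unreliability in one variable."""
        return r_observed / math.sqrt(reliability_x)

    # Illustrative numbers only (the meta-analysis reports the corrected value .21)
    r_obs = 0.18            # hypothetical observed meta-analytic correlation
    rel_narcissism = 0.74   # hypothetical mean reliability of the narcissism measures
    print(round(correct_for_attenuation(r_obs, rel_narcissism), 3))
    ```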

  15. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
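
    For reference, the "analytical method" for a linear Gaussian Bayesian inverse problem has a closed-form maximum a posteriori solution; the numpy sketch below shows only that formula with hypothetical dimensions and random numbers. GEOS-Chem, its adjoint, and the MOPITT observation operator are of course not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_obs = 5, 40                     # hypothetical coarse-region factors / observations

    K = rng.normal(size=(n_obs, n_state))      # Jacobian of observations w.r.t. emissions
    x_a = np.ones(n_state)                     # a priori scaling factors
    S_a = np.diag(np.full(n_state, 0.5**2))    # a priori error covariance
    S_o = np.diag(np.full(n_obs, 0.2**2))      # observation error covariance
    y = K @ (x_a * rng.uniform(0.7, 1.5, n_state)) + rng.normal(scale=0.2, size=n_obs)

    # Maximum a posteriori solution of the linear Gaussian problem
    S_o_inv = np.linalg.inv(S_o)
    S_post = np.linalg.inv(np.linalg.inv(S_a) + K.T @ S_o_inv @ K)
    x_hat = x_a + S_post @ K.T @ S_o_inv @ (y - K @ x_a)
    print("posterior correction factors:", np.round(x_hat, 2))
    ```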

  16. Analyte quantification with comprehensive two-dimensional gas chromatography: assessment of methods for baseline correction, peak delineation, and matrix effect elimination for real samples.

    PubMed

    Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J

    2015-01-02

    Comprehensive two-dimensional gas chromatography (GC×GC) is used widely to separate and measure organic chemicals in complex mixtures. However, approaches to quantify analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with electron capture detector (μECD), further confirmed qualitatively by GC×GC with electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction algorithms and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influence the reproducibility of analyte signal, error of the calibration offset, proportionality of integrated signal response, and accuracy of quantifications. Additionally, the choice of baseline correction and the peak delineation algorithm are essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
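
    One simple way to diagnose a matrix effect, as implied above, is to compare the calibration slope from external standards in clean solvent with the slope obtained by standard additions into the sample extract. The toy sketch below uses hypothetical concentrations and peak volumes, not data from the study.

    ```python
    import numpy as np

    # Hypothetical calibration data (concentration in ug/mL vs. integrated peak volume)
    conc_ext = np.array([0.0, 1.0, 2.0, 4.0])
    vol_ext = np.array([0.1, 2.0, 3.9, 7.8])             # external standards in clean solvent

    spike = np.array([0.0, 1.0, 2.0, 4.0])
    vol_spiked = np.array([1.5, 3.0, 4.4, 7.4])          # sample extract + standard additions

    slope_ext, _ = np.polyfit(conc_ext, vol_ext, 1)
    slope_sa, _ = np.polyfit(spike, vol_spiked, 1)
    print(f"matrix effect (slope ratio) = {slope_sa / slope_ext:.2f}")  # ~1 means little suppression
    ```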

  17. Almost equivalence of combinatorial and distance processes for discrimination in multielement images.

    PubMed

    Ferraro, M; Foster, D H

    1991-01-01

    Under certain experimental conditions, visual discrimination performance in multielement images is closely related to visual identification performance: elements of the image are distinguished only insofar as they appear to have distinct, discrete, internal characterizations. This report is concerned with the detailed relationship between such internal characterizations and observable discrimination performance. Two types of general processes that might underlie discrimination are considered. The first is based on computing all possible internal image characterizations that could allow a correct decision, each characterization weighted by the probability of its occurrence and of a correct decision being made. The second process is based on computing the difference between the probabilities associated with the internal characterizations of the individual image elements, the difference quantified naturally with an l_p norm. The relationship between the two processes was investigated analytically and by Monte Carlo simulations over a plausible range of numbers n of the internal characterizations of each of the m elements in the image. The predictions of the two processes were found to be closely similar. The relationship was precisely one-to-one, however, only for n = 2, m = 3, 4, 6, and for n greater than 2, m = 3, 4, p = 2. For all other cases tested, a one-to-one relationship was shown to be impossible.

  18. Diagnostic reasoning by hospital pharmacists: assessment of attitudes, knowledge, and skills.

    PubMed

    Chernushkin, Kseniya; Loewen, Peter; de Lemos, Jane; Aulakh, Amneet; Jung, Joanne; Dahri, Karen

    2012-07-01

    Hospital pharmacists participate in activities that may be considered diagnostic. Two reasoning approaches to diagnosis have been described: non-analytic and analytic. Of the 6 analytic traditions, the probabilistic tradition has been shown to improve diagnostic accuracy and reduce unnecessary testing. To the authors' knowledge, pharmacists' attitudes toward having a diagnostic role and their diagnostic knowledge and skills have never been studied. To describe pharmacists' attitudes toward the role of diagnosis in pharmacotherapeutic problem-solving and to characterize the extent of pharmacists' knowledge and skills related to diagnostic literacy. Pharmacists working within Lower Mainland Pharmacy Services (British Columbia) who spent at least 33% of their time in direct patient care were invited to participate in a prospective observational survey. The survey sought information about demographic characteristics and attitudes toward diagnosis. Diagnostic knowledge and skills were tested by means of 3 case scenarios. The analysis included simple descriptive statistics and inferential statistics to evaluate relationships between responses and experience and training. Of 266 pharmacists invited to participate, 94 responded. The attitudes section of the survey was completed by 90 pharmacists; of these, 80 (89%) agreed with the definition of "diagnosis" proposed in the survey, and 83 (92%) agreed that it is important for pharmacists to have diagnosis-related skills. Respondents preferred an analytic to a non-analytic approach to diagnostic decision-making. The probabilistic tradition was not the preferred method in any of the 3 cases. In evaluating 5 clinical scenarios that might require diagnostic skills, on average 84% of respondents agreed that they should be involved in assessing such problems. Respondents' knowledge of and ability to apply probabilistic diagnostic tools were highest for test sensitivity (average of 61% of respondents with the correct answers) and lower for test specificity (average of 48% with correct answers) and likelihood ratios (average of 39% with correct answers). Respondents to this survey strongly believed that diagnostic skills were important for solving drug-related problems, but they demonstrated low levels of knowledge and ability to apply concepts of probabilistic diagnostic reasoning. Opportunities to expand pharmacists' knowledge of diagnostic reasoning exist, and the findings reported here indicate that pharmacists would consider such professional development valuable.
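
    The probabilistic tools surveyed above (sensitivity, specificity, likelihood ratios) combine in a standard way; the short sketch below shows that combination with hypothetical test characteristics and a hypothetical pre-test probability, not values from the case scenarios.

    ```python
    def post_test_probability(pre_test_prob, sensitivity, specificity, positive_result=True):
        """Update a pre-test probability with a test result via likelihood ratios."""
        lr = sensitivity / (1 - specificity) if positive_result else (1 - sensitivity) / specificity
        pre_odds = pre_test_prob / (1 - pre_test_prob)
        post_odds = pre_odds * lr
        return post_odds / (1 + post_odds)

    # Hypothetical example: 30% pre-test probability, test with 90% sensitivity, 80% specificity
    print(round(post_test_probability(0.30, 0.90, 0.80, positive_result=True), 2))
    ```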

  19. The relevance of psychodynamic psychotherapy to understanding therapist-patient sexual abuse and treatment of survivors.

    PubMed

    Yahav, Rivka; Oz, Sheri

    2006-01-01

    Regardless of the therapy modality, research continues to point to the therapeutic relationship as a major salient factor in clinical success or failure. When a patient is sexually abused by his or her therapist, this therapeutic relationship is cynically exploited in a way that does not properly serve the essential needs of the patient. When this patient then seeks reparative therapy, the subsequent therapist needs to pay close attention to issues of the relationship which were breached by the previous clinician. In this article, two case studies showing very different dynamics will be presented in order to demonstrate: (1) relevant factors related to transference, countertransference, projective identification, and the analytic third pertaining to the former, abusive therapy; and (2) needs versus wishes, and issues related to boundaries and self-disclosure in the corrective therapy.

  20. Comparative Chemometric Analysis for Classification of Acids and Bases via a Colorimetric Sensor Array.

    PubMed

    Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E

    2018-02-01

    With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple, yet effective tool for the identification and quantification of various analytes. Colorimetric arrays utilize colorimetric data from many colorimetric sensors, with the multidimensional nature of the resulting data necessitating the use of chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze selected acidic and basic samples (0.5 - 10 M) to determine which chemometric methods are best suited for classification and quantification of analytes within clusters. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors showed the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN correctly identified the analytes in all cases, while LDA correctly identified 95 of 96 analytes. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
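
    A minimal scikit-learn sketch of the chemometric workflow named above (PCA for visualization and sensor inspection, LDA and KNN for classification). The data are randomly generated stand-ins for the 8-sensor colorimetric responses, and the class structure is purely illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_classes, n_per_class, n_sensors = 4, 24, 8   # stand-in for acid/base analytes x 8 sensors
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_sensors))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

    pca = PCA(n_components=2).fit(X_tr)            # visualization / sensor importance via loadings
    print("explained variance:", np.round(pca.explained_variance_ratio_, 2))

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("KNN", KNeighborsClassifier(n_neighbors=3))]:
        clf.fit(X_tr, y_tr)
        print(name, "accuracy:", round(clf.score(X_te, y_te), 2))
    ```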

  1. Bridging meta-analysis and the comparative method: a test of seed size effect on germination after frugivores' gut passage.

    PubMed

    Verdú, Miguel; Traveset, Anna

    2004-02-01

    Most studies using meta-analysis try to establish relationships between traits across taxa from interspecific databases and, thus, the phylogenetic relatedness among these taxa should be taken into account to avoid pseudoreplication derived from common ancestry. This paper illustrates, with a representative example of the relationship between seed size and the effect of frugivores' gut passage on seed germination, that meta-analytic procedures can also be phylogenetically corrected by means of the comparative method. The conclusions obtained from the meta-analytical and phylogenetic approaches are very different. The meta-analysis revealed that the positive effects that gut passage had on seed germination increased with seed size in the case of gut passage through birds, whereas they decreased in the case of gut passage through non-flying mammals. However, once the phylogenetic relatedness among plant species was taken into account, the effects of gut passage on seed germination did not depend on seed size and were similar between birds and non-flying mammals. Some methodological considerations are given to improve the bridge between meta-analysis and the comparative method.

  2. Toward a universal carbonate clumped isotope calibration: Diverse synthesis and preparatory methods suggest a single temperature relationship

    NASA Astrophysics Data System (ADS)

    Kelson, Julia R.; Huntington, Katharine W.; Schauer, Andrew J.; Saenger, Casey; Lechler, Alex R.

    2017-01-01

    Carbonate clumped isotope (Δ47) thermometry has been applied to a wide range of problems in earth, ocean and biological sciences over the last decade, but is still plagued by discrepancies among empirical calibrations that show a range of Δ47-temperature sensitivities. The most commonly suggested causes of these discrepancies are the method of mineral precipitation and analytical differences, including the temperature of phosphoric acid used to digest carbonates. However, these mechanisms have yet to be tested in a consistent analytical setting, which makes it difficult to isolate the cause(s) of discrepancies and to evaluate which synthetic calibration is most appropriate for natural samples. Here, we systematically explore the impact of synthetic carbonate precipitation by replicating precipitation experiments of previous workers under a constant analytical setting. We (1) precipitate 56 synthetic carbonates at temperatures of 4-85 °C using different procedures to degas CO2, with and without the use of the enzyme carbonic anhydrase (CA) to promote rapid dissolved inorganic carbon (DIC) equilibration; (2) digest samples in phosphoric acid at both 90 °C and 25 °C; and (3) hold constant all analytical methods including acid preparation, CO2 purification, and mass spectrometry; and (4) reduce our data with 17O corrections that are appropriate for our samples. We find that the CO2 degassing method does not influence Δ47 values of these synthetic carbonates, and therefore probably only influences natural samples with very rapid degassing rates, like speleothems that precipitate out of drip solution with high pCO2. CA in solution does not influence Δ47 values in this work, suggesting that disequilibrium in the DIC pool is negligible. We also find the Δ47 values of samples reacted in 25 and 90 °C acid are within error of each other (once corrected with a constant acid fractionation factor). Taken together, our results show that the Δ47-temperature relationship does not measurably change with either the precipitation methods used in this study or acid digestion temperature. This leaves phosphoric acid preparation, CO2 gas purification, and/or data reduction methods as the possible sources of the discrepancy among published calibrations. In particular, the use of appropriate 17O corrections has the potential to reduce disagreement among calibrations. Our study nearly doubles the available synthetic carbonate calibration data for Δ47 thermometry (adding 56 samples to the 74 previously published samples). This large population size creates a robust calibration that enables us to examine the potential for calibration slope aliasing due to small sample size. The similarity of Δ47 values among carbonates precipitated under such diverse conditions suggests that many natural samples grown at 4-85 °C in moderate pH conditions (6-10) may also be described by our Δ47-temperature relationship.

  3. Thinking Styles and Regret in Physicians.

    PubMed

    Djulbegovic, Mia; Beckstead, Jason; Elqayam, Shira; Reljic, Tea; Kumar, Ambuj; Paidas, Charles; Djulbegovic, Benjamin

    2015-01-01

    Decision-making relies on both analytical and emotional thinking. Cognitive reasoning styles (e.g. maximizing and satisficing tendencies) heavily influence analytical processes, while affective processes are often dependent on regret. The relationship between regret and cognitive reasoning styles has not been well studied in physicians, and is the focus of this paper. A regret questionnaire and 6 scales measuring individual differences in cognitive styles (maximizing-satisficing tendencies; analytical vs. intuitive reasoning; need for cognition; intolerance toward ambiguity; objectivism; and cognitive reflection) were administered through a web-based survey to physicians of the University of South Florida. Bonferroni's adjustment was applied to the overall correlation analysis. The correlation analysis was also performed without Bonferroni's correction, given the strong theoretical rationale indicating the need for a separate hypothesis. We also conducted a multivariate regression analysis to identify the unique influence of predictors on regret. 165 trainees and 56 attending physicians (age range 25 to 69) participated in the survey. After bivariate analysis we found that maximizing tendency positively correlated with regret with respect to both decision difficulty (r=0.673; p<0.001) and alternate search strategy (r=0.239; p=0.002). When Bonferroni's correction was not applied, we also found a negative relationship between satisficing tendency and regret (r=-0.156; p=0.021). In trainees, but not faculty, regret negatively correlated with rational-analytical thinking (r=-0.422; p<0.001), need for cognition (r=-0.340; p<0.001), and objectivism (r=-0.309; p=0.003) and positively correlated with ambiguity intolerance (r=0.285; p=0.012). However, after conducting a multivariate regression analysis, we found that regret was positively associated with maximizing only with respect to decision difficulty (r=0.791; p<0.001), while it was negatively associated with satisficing (r=-0.257; p=0.020) and objectivism (r=-0.267; p=0.034). We found no statistically significant relationship between regret and overall accuracy on conditional inferential tasks. Regret in physicians is strongly associated with their tendency to maximize; i.e. the tendency to consider more choices among abundant options leads to more regret. However, physicians who exhibit satisficing tendency - the inclination to accept a "good enough" solution - feel less regret. Our observation that objectivism is a negative predictor of regret indicates that the tendency to seek and use empirical data in decision-making leads to less regret. Therefore, promotion of evidence-based reasoning may lead to lower regret.

  4. Thinking Styles and Regret in Physicians

    PubMed Central

    Djulbegovic, Mia; Beckstead, Jason; Elqayam, Shira; Reljic, Tea; Kumar, Ambuj; Paidas, Charles; Djulbegovic, Benjamin

    2015-01-01

    Background Decision-making relies on both analytical and emotional thinking. Cognitive reasoning styles (e.g. maximizing and satisficing tendencies) heavily influence analytical processes, while affective processes are often dependent on regret. The relationship between regret and cognitive reasoning styles has not been well studied in physicians, and is the focus of this paper. Methods A regret questionnaire and 6 scales measuring individual differences in cognitive styles (maximizing-satisficing tendencies; analytical vs. intuitive reasoning; need for cognition; intolerance toward ambiguity; objectivism; and cognitive reflection) were administered through a web-based survey to physicians of the University of South Florida. Bonferroni’s adjustment was applied to the overall correlation analysis. The correlation analysis was also performed without Bonferroni’s correction, given the strong theoretical rationale indicating the need for a separate hypothesis. We also conducted a multivariate regression analysis to identify the unique influence of predictors on regret. Results 165 trainees and 56 attending physicians (age range 25 to 69) participated in the survey. After bivariate analysis we found that maximizing tendency positively correlated with regret with respect to both decision difficulty (r=0.673; p<0.001) and alternate search strategy (r=0.239; p=0.002). When Bonferroni’s correction was not applied, we also found a negative relationship between satisficing tendency and regret (r=-0.156; p=0.021). In trainees, but not faculty, regret negatively correlated with rational-analytical thinking (r=-0.422; p<0.001), need for cognition (r=-0.340; p<0.001), and objectivism (r=-0.309; p=0.003) and positively correlated with ambiguity intolerance (r=0.285; p=0.012). However, after conducting a multivariate regression analysis, we found that regret was positively associated with maximizing only with respect to decision difficulty (r=0.791; p<0.001), while it was negatively associated with satisficing (r=-0.257; p=0.020) and objectivism (r=-0.267; p=0.034). We found no statistically significant relationship between regret and overall accuracy on conditional inferential tasks. Conclusion Regret in physicians is strongly associated with their tendency to maximize; i.e. the tendency to consider more choices among abundant options leads to more regret. However, physicians who exhibit satisficing tendency – the inclination to accept a “good enough” solution – feel less regret. Our observation that objectivism is a negative predictor of regret indicates that the tendency to seek and use empirical data in decision-making leads to less regret. Therefore, promotion of evidence-based reasoning may lead to lower regret. PMID:26241650

  5. An improved source model for aircraft interior noise studies

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Fuller, C. R.

    1985-01-01

    There is concern that advanced turboprop engines currently being developed may produce excessive aircraft cabin noise levels. This concern has stimulated renewed interest in developing aircraft interior noise reduction methods that do not significantly increase take off weight. An existing analytical model for noise transmission into aircraft cabins was utilized to investigate the behavior of an improved propeller source model for use in aircraft interior noise studies. The new source model, a virtually rotating dipole, is shown to adequately match measured fuselage sound pressure distributions, including the correct phase relationships, for published data. The virtually rotating dipole is used to study the sensitivity of synchrophasing effectiveness to the fuselage sound pressure trace velocity distribution. Results of calculations are presented which reveal the importance of correctly modeling the surface pressure phase relations in synchrophasing and other aircraft interior noise studies.

  6. An improved source model for aircraft interior noise studies

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Fuller, C. R.

    1985-01-01

    There is concern that advanced turboprop engines currently being developed may produce excessive aircraft cabin noise levels. This concern has stimulated renewed interest in developing aircraft interior noise reduction methods that do not significantly increase take off weight. An existing analytical model for noise transmission into aircraft cabins was utilized to investigate the behavior of an improved propeller source model for use in aircraft interior noise studies. The new source model, a virtually rotating dipole, is shown to adequately match measured fuselage sound pressure distributions, including the correct phase relationships, for published data. The virtually rotating dipole is used to study the sensitivity of synchrophasing effectiveness to the fuselage sound pressure trace velocity distribution. Results of calculations are presented which reveal the importance of correctly modeling the surface pressure phase relations in synchrophasing and other aircraft interior noise studies.

  7. Determination of boiling point of petrochemicals by gas chromatography-mass spectrometry and multivariate regression analysis of structural activity relationship.

    PubMed

    Fakayode, Sayo O; Mitchell, Breanna S; Pollard, David A

    2014-08-01

    Accurate understanding of analyte boiling points (BP) is of critical importance in gas chromatographic (GC) separation and crude oil refinery operation in petrochemical industries. This study reported the first combined use of GC separation and partial-least-squares (PLS1) multivariate regression analysis (MRA) of petrochemical structural activity relationships (SAR) for accurate BP determination of two commercially available (D3710 and MA VHP) calibration gas mix samples. The results of the BP determination using PLS1 multivariate regression were further compared with the results of the traditional simulated distillation method of BP determination. The developed PLS1 regression was able to correctly predict analyte BPs in the D3710 and MA VHP calibration gas mix samples, with root-mean-square percent relative errors (RMS%RE) of 6.4% and 10.8%, respectively. In contrast, the overall RMS%RE of 32.9% and 40.4%, respectively, obtained for BP determination in D3710 and MA VHP using the traditional simulated distillation method were approximately four times larger than the corresponding RMS%RE of BP prediction using MRA, demonstrating the better predictive ability of MRA. The reported method is rapid, robust, and promising, and can potentially be used routinely for fast analysis, pattern recognition, and analyte BP determination in petrochemical industries. Copyright © 2014 Elsevier B.V. All rights reserved.
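
    A minimal sketch of a PLS1 calibration and of the RMS%RE figure of merit quoted above. The descriptor matrix and boiling points below are simulated placeholders; the actual GC-MS data and SAR feature set are not reproduced.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(2)
    n_samples, n_descriptors = 30, 6
    X = rng.normal(size=(n_samples, n_descriptors))        # hypothetical structural descriptors
    true_coefs = rng.normal(size=n_descriptors)
    bp = 300 + X @ true_coefs * 20 + rng.normal(scale=5, size=n_samples)   # hypothetical BPs (K)

    pls = PLSRegression(n_components=3).fit(X, bp)
    bp_pred = pls.predict(X).ravel()

    rms_pct_rel_error = np.sqrt(np.mean(((bp_pred - bp) / bp * 100) ** 2))
    print(f"RMS%RE = {rms_pct_rel_error:.1f}%")
    ```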

  8. Stiffness of frictional contact of dissimilar elastic solids

    DOE PAGES

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; ...

    2017-12-22

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations – adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.
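
    For orientation, the frictionless Sneddon relation for a rigid flat circular punch on an elastic half-space is S = 2*E_r*a, and the friction/shape effects discussed above enter as a multiplicative correction. The sketch below shows only that bookkeeping; the material properties and the 3% correction value are purely illustrative.

    ```python
    import math

    def contact_stiffness(radius_m, E_pa, nu, correction=1.0):
        """Flat circular punch on an elastic half-space: S = correction * 2 * E_r * a,
        with E_r = E / (1 - nu^2) for a rigid punch (Sneddon); 'correction' absorbs
        friction/shape effects such as those studied above (illustrative value here)."""
        E_r = E_pa / (1 - nu**2)
        return correction * 2 * E_r * radius_m

    # Illustrative numbers: 1 um contact radius, 70 GPa, nu = 0.25, 3% frictional stiffening
    print(f"{contact_stiffness(1e-6, 70e9, 0.25, correction=1.03):.3e} N/m")
    ```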

  9. Stiffness of frictional contact of dissimilar elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations – adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  10. Stiffness of frictional contact of dissimilar elastic solids

    NASA Astrophysics Data System (ADS)

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; Xu, Haitao; Pharr, George M.

    2018-03-01

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This paper gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations - adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. The correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  11. Induction log responses to layered, dipping, and anisotropic formations: Induction log shoulder-bed corrections to anisotropic formations and the effect of shale anisotropy in thinly laminated sand/shale sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagiwara, Teruhiko

    1996-12-31

    Induction log responses to layered, dipping, and anisotropic formations are examined analytically. The analytical model is especially helpful in understanding induction log responses to thinly laminated binary formations, such as sand/shale sequences, that exhibit macroscopically anisotropic resistivity. Two applications of the analytical model are discussed. In one application we examine special induction log shoulder-bed corrections for use when thin anisotropic beds are encountered. It is known that thinly laminated sand/shale sequences act as macroscopically anisotropic formations. Hydrocarbon-bearing formations also act as macroscopically anisotropic formations when they consist of alternating layers of different grain-size distributions. When such formations are thick, induction logs accurately read the macroscopic conductivity, from which the hydrocarbon saturation in the formations can be computed. When the laminated formations are not thick, proper shoulder-bed corrections (or thin-bed corrections) should be applied to obtain the true macroscopic formation conductivity and to estimate the hydrocarbon saturation more accurately. The analytical model is used to calculate the thin-bed effect and to evaluate the shoulder-bed corrections. We will show that the formation resistivity, and hence the hydrocarbon saturation, are greatly overestimated when the anisotropy effect is not accounted for and conventional shoulder-bed corrections are applied to the log responses from such laminated formations.
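
    The macroscopic anisotropy of a thinly laminated sand/shale sequence follows from simple volume averaging: conductivity parallel to bedding averages arithmetically, while conductivity perpendicular to bedding averages harmonically. The layer thicknesses and conductivities below are hypothetical, chosen only to illustrate the calculation.

    ```python
    import numpy as np

    # Hypothetical laminated sequence: layer thicknesses (m) and conductivities (S/m)
    h = np.array([0.10, 0.05, 0.10, 0.05])          # sand, shale, sand, shale
    sigma = np.array([0.02, 0.50, 0.02, 0.50])

    sigma_h = np.sum(h * sigma) / np.sum(h)          # horizontal (parallel to bedding)
    sigma_v = np.sum(h) / np.sum(h / sigma)          # vertical (perpendicular to bedding)
    anisotropy = np.sqrt(sigma_h / sigma_v)

    print(f"sigma_h = {sigma_h:.3f} S/m, sigma_v = {sigma_v:.3f} S/m, "
          f"anisotropy coefficient = {anisotropy:.2f}")
    ```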

  12. Ground temperature measurement by PRT-5 for maps experiment

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
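
    A minimal sketch of the regression step described above: fitting a simple polynomial between a parameter deviation and the corresponding computed brightness-temperature correction. The water-vapor deviations and correction values below are made-up placeholders for the radiative-transfer results.

    ```python
    import numpy as np

    # Hypothetical model results: water-vapor deviation (g/cm^2) vs. computed correction (K)
    wv_deviation = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
    correction_K = np.array([-0.9, -0.45, 0.0, 0.5, 1.1, 1.8])

    coeffs = np.polyfit(wv_deviation, correction_K, deg=2)   # simple quadratic relation
    predict = np.poly1d(coeffs)
    print("correction at +0.8 g/cm^2 deviation:", round(float(predict(0.8)), 2), "K")
    ```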

  13. Holographic conductivity of holographic superconductors with higher-order corrections

    NASA Astrophysics Data System (ADS)

    Sheykhi, Ahmad; Ghazanfari, Afsoon; Dehyadegari, Amin

    2018-02-01

    We analytically and numerically disclose the effects of the higher-order correction terms in the gravity and in the gauge field on the properties of s-wave holographic superconductors. On the gravity side, we consider the higher curvature Gauss-Bonnet corrections and on the gauge field side, we add a quadratic correction term to the Maxwell Lagrangian. We show that, for this system, one can still obtain an analytical relation between the critical temperature and the charge density. We also calculate the critical exponent and the condensation value both analytically and numerically. We use a variational method, based on the Sturm-Liouville eigenvalue problem for our analytical study, as well as a numerical shooting method in order to compare with our analytical results. For a fixed value of the Gauss-Bonnet parameter, we observe that the critical temperature decreases with increasing the nonlinearity of the gauge field. This implies that the nonlinear correction term to the Maxwell electrodynamics makes the condensation harder. We also study the holographic conductivity of the system and disclose the effects of the Gauss-Bonnet and nonlinear parameters α and b on the superconducting gap. We observe that, for various values of α and b, the real part of the conductivity is proportional to the frequency per temperature, ω /T, as the frequency is large enough. Besides, the conductivity has a minimum in the imaginary part which is shifted toward greater frequency with decreasing temperature.

  14. High precision analytical description of the allowed β spectrum shape

    NASA Astrophysics Data System (ADS)

    Hayen, Leendert; Severijns, Nathal; Bodek, Kazimierz; Rozpedzik, Dagmara; Mougeot, Xavier

    2018-01-01

    A fully analytical description of the allowed β spectrum shape is given in view of ongoing and planned measurements. Its study forms an invaluable tool in the search for physics beyond the standard electroweak model and the weak magnetism recoil term. Contributions stemming from finite size corrections, mass effects, and radiative corrections are reviewed. Particular focus is placed on atomic and chemical effects, where the existing description is extended and analytically provided. The effects of QCD-induced recoil terms are discussed, and cross-checks were performed for different theoretical formalisms. Special attention was given to a comparison of the treatment of nuclear structure effects in different formalisms. Corrections were derived for both Fermi and Gamow-Teller transitions, and methods of analytical evaluation thoroughly discussed. In its integrated form, calculated f values were in agreement with the most precise numerical results within the aimed-for precision. The need for an accurate evaluation of weak magnetism contributions was stressed, and the possible significance of the oft-neglected induced pseudoscalar interaction was noted. Together with improved atomic corrections, an analytical description was presented of the allowed β spectrum shape accurate to a few parts in 10⁻⁴ down to 1 keV for low to medium Z nuclei, thereby extending the work by previous authors by nearly an order of magnitude.

  15. Negative refraction angular characterization in one-dimensional photonic crystals.

    PubMed

    Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn

    2011-04-06

    Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one-dimensional photonic crystal structure near the low-frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and a theory based on the group velocity developed here. We also analytically derived the negative refraction correctness condition that gives the angular region where negative refraction occurs. By using standard photonic techniques, we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with the experimental results, an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space filtering applications such as a photonic demultiplexer or for sensing applications.

  16. Negative Refraction Angular Characterization in One-Dimensional Photonic Crystals

    PubMed Central

    Lugo, Jesus Eduardo; Doti, Rafael; Faubert, Jocelyn

    2011-01-01

    Background Photonic crystals are artificial structures that have periodic dielectric components with different refractive indices. Under certain conditions, they abnormally refract the light, a phenomenon called negative refraction. Here we experimentally characterize negative refraction in a one-dimensional photonic crystal structure near the low-frequency edge of the fourth photonic bandgap. We compare the experimental results with current theory and a theory based on the group velocity developed here. We also analytically derived the negative refraction correctness condition that gives the angular region where negative refraction occurs. Methodology/Principal Findings By using standard photonic techniques, we experimentally determined the relationship between incidence and negative refraction angles and found the negative refraction range by applying the correctness condition. In order to compare both theories with the experimental results, an output refraction correction was utilized. The correction uses Snell's law and an effective refractive index based on two effective dielectric constants. We found good agreement between experiment and both theories in the negative refraction zone. Conclusions/Significance Since both theories and the experimental observations agreed well in the negative refraction region, we can use both negative refraction theories plus the output correction to predict negative refraction angles. This can be very useful from a practical point of view for space filtering applications such as a photonic demultiplexer or for sensing applications. PMID:21494332

  17. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  18. Competitive evaluation of failure detection algorithms for strapdown redundant inertial instruments

    NASA Technical Reports Server (NTRS)

    Wilcox, J. C.

    1973-01-01

    Algorithms for failure detection, isolation, and correction of redundant inertial instruments in the strapdown dodecahedron configuration are competitively evaluated in a digital computer simulation that subjects them to identical environments. Their performance is compared in terms of orientation and inertial velocity errors and in terms of missed and false alarms. The algorithms appear in the simulation program in modular form, so that they may be readily extracted for use elsewhere. The simulation program and its inputs and outputs are described. The algorithms, along with an eighth algorithm that was not simulated, are also compared analytically to show the relationships among them.

  19. Challenges for mapping cyanotoxin patterns from remote sensing of cyanobacteria

    USGS Publications Warehouse

    Stumpf, Rick P; Davis, Timothy W.; Wynne, Timothy T.; Graham, Jennifer L.; Loftin, Keith A.; Johengen, T.H.; Gossiaux, D.; Palladino, D.; Burtner, A.

    2016-01-01

    Using satellite imagery to quantify the spatial patterns of cyanobacterial toxins has several challenges. These challenges include the need for surrogate pigments (since cyanotoxins cannot be directly detected by remote sensing), the variability in the relationship between the pigments and cyanotoxins (especially microcystins, MC), and the lack of standardization of the various measurement methods. A dual-model strategy can provide an approach to address these challenges. One model uses either chlorophyll-a (Chl-a) or phycocyanin (PC) collected in situ as a surrogate to estimate the MC concentration. The other uses a remote sensing algorithm to estimate the concentration of the surrogate pigment. Where blooms are mixtures of cyanobacteria and eukaryotic algae, PC should be the preferred surrogate over Chl-a. Where cyanobacteria dominate, Chl-a is a better surrogate than PC for remote sensing. Phycocyanin is less sensitive to detection by optical remote sensing, is less frequently measured, lacks standardized laboratory methods, and has greater intracellular variability. Neither pigment should be presumed to have a fixed relationship with MC for any water body. The MC-pigment relationship can be valid over weeks but shows considerable intra- and inter-annual variability due to changes in the amount of MC produced relative to cyanobacterial biomass. To detect pigments by satellite, three classes of algorithms (analytic, semi-analytic, and derivative) have been used. Analytical and semi-analytical algorithms are more sensitive but less robust than derivatives because they depend on accurate atmospheric correction; as a result, derivative algorithms are more commonly used. Derivatives can estimate Chl-a concentration, and research suggests they can detect and possibly quantify PC. Derivative algorithms, however, need to be standardized in order to evaluate the reproducibility of parameterizations between lakes. A strategy for producing useful estimates of microcystins from cyanobacterial biomass is described, provided cyanotoxin variability is addressed.
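
    As an illustration of the dual-model strategy (not code or data from the study), the sketch below chains a regression from a remotely sensed index to a surrogate pigment with a second regression from the pigment to MC; all coefficients and data are invented, and the real relationships must be refit per water body and season.

      import numpy as np

      # Step 1: surrogate pigment (chlorophyll-a, ug/L) from a derivative-type band index (hypothetical)
      band_index = np.array([0.01, 0.03, 0.05, 0.08, 0.12])
      chl_insitu = np.array([5.0, 14.0, 24.0, 41.0, 60.0])
      a1, b1 = np.polyfit(band_index, chl_insitu, 1)

      # Step 2: microcystin (ug/L) from the pigment (hypothetical paired measurements)
      chl = np.array([5.0, 14.0, 24.0, 41.0, 60.0])
      mc = np.array([0.2, 0.8, 1.6, 3.1, 5.0])
      a2, b2 = np.polyfit(chl, mc, 1)

      def mc_from_index(idx):
          """Chain the two fitted models: index -> Chl-a -> MC."""
          return a2 * (a1 * idx + b1) + b2

      print(round(mc_from_index(0.06), 2))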

  20. Numerical investigation of finite-volume effects for the HVP

    NASA Astrophysics Data System (ADS)

    Boyle, Peter; Gülpers, Vera; Harrison, James; Jüttner, Andreas; Portelli, Antonin; Sachrajda, Christopher

    2018-03-01

    It is important to correct for finite-volume (FV) effects in the presence of QED, since these effects are typically large due to the long range of the electromagnetic interaction. We recently made the first lattice calculation of electromagnetic corrections to the hadronic vacuum polarisation (HVP). For the HVP, an analytical derivation of FV corrections involves a two-loop calculation which has not yet been carried out. We instead calculate the universal FV corrections numerically, using lattice scalar QED as an effective theory. We show that this method gives agreement with known analytical results for scalar mass FV effects, before applying it to calculate FV corrections for the HVP. This method for numerical calculation of FV effects is also widely applicable to quantities beyond the HVP.

  1. Corrected Four-Sphere Head Model for EEG Signals.

    PubMed

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K

    2017-01-01

    The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals and for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model, which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  2. Corrected Four-Sphere Head Model for EEG Signals

    PubMed Central

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V.; Dale, Anders M.; Einevoll, Gaute T.; Wójcik, Daniel K.

    2017-01-01

    The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals and for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model, which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations. PMID:29093671

  3. Participation and performance in INSTAND multi-analyte molecular genetics external quality assessment schemes from 2006 to 2012.

    PubMed

    Maly, Friedrich E; Fried, Roman; Spannagl, Michael

    2014-01-01

    INSTAND e.V. has provided Molecular Genetics Multi-Analyte EQA schemes since 2006. EQA participation and performance were assessed from 2006 to 2012. Over this period, the number of analytes in the Multi-Analyte EQA schemes rose from 17 to 53. The total number of results returned rose from 168 in January 2006 to 824 in August 2012. The overall error rate was 1.40 ± 0.84% (mean ± SD, N = 24 EQA dates). From 2006 to 2012, no analyte was reported 100% correctly. Individual participant performance was analysed for one common analyte, Lactase (LCT) T-13910C. From 2006 to 2012, 114 laboratories participated in this EQA. Of these, 10 laboratories (8.8%) reported at least one wrong result during the whole observation period. All laboratories reported correct results after their failure incident. In spite of the low overall error rate, EQA will continue to be important for Molecular Genetics.

  4. Influence of Clinical Factors and Magnification Correction on Normal Thickness Profiles of Macular Retinal Layers Using Optical Coherence Tomography.

    PubMed

    Higashide, Tomomi; Ohkubo, Shinji; Hangai, Masanori; Ito, Yasuki; Shimada, Noriaki; Ohno-Matsui, Kyoko; Terasaki, Hiroko; Sugiyama, Kazuhisa; Chew, Paul; Li, Kenneth K W; Yoshimura, Nagahisa

    2016-01-01

    To identify the factors which significantly contribute to the thickness variabilities in macular retinal layers measured by optical coherence tomography with or without magnification correction of analytical areas in normal subjects. The thickness of retinal layers {retinal nerve fiber layer (RNFL), ganglion cell layer plus inner plexiform layer (GCLIPL), RNFL plus GCLIPL (ganglion cell complex, GCC), total retina, total retina minus GCC (outer retina)} was measured by macular scans (RS-3000, NIDEK) in 202 eyes of 202 normal Asian subjects aged 20 to 60 years. The analytical areas were defined by three concentric circles (1-, 3- and 6-mm nominal diameters) with or without magnification correction. For each layer thickness, a semipartial correlation (sr) was calculated for explanatory variables including age, gender, axial length, corneal curvature, and signal strength index. Outer retinal thickness was significantly thinner in females than in males (sr², 0.07 to 0.13) regardless of analytical areas or magnification correction. Without magnification correction, axial length had a significant positive sr with RNFL (sr², 0.12 to 0.33) and a negative sr with GCLIPL (sr², 0.22 to 0.31), GCC (sr², 0.03 to 0.17), total retina (sr², 0.07 to 0.17) and outer retina (sr², 0.16 to 0.29) in multiple analytical areas. The significant sr in RNFL, GCLIPL and GCC became mostly insignificant following magnification correction. The strong correlation between the thickness of inner retinal layers and axial length appeared to result from magnification effects. Outer retinal thickness may differ by gender and axial length independently of magnification correction.

  5. Influence of Clinical Factors and Magnification Correction on Normal Thickness Profiles of Macular Retinal Layers Using Optical Coherence Tomography

    PubMed Central

    Higashide, Tomomi; Ohkubo, Shinji; Hangai, Masanori; Ito, Yasuki; Shimada, Noriaki; Ohno-Matsui, Kyoko; Terasaki, Hiroko; Sugiyama, Kazuhisa; Chew, Paul; Li, Kenneth K. W.; Yoshimura, Nagahisa

    2016-01-01

    Purpose To identify the factors which significantly contribute to the thickness variabilities in macular retinal layers measured by optical coherence tomography with or without magnification correction of analytical areas in normal subjects. Methods The thickness of retinal layers {retinal nerve fiber layer (RNFL), ganglion cell layer plus inner plexiform layer (GCLIPL), RNFL plus GCLIPL (ganglion cell complex, GCC), total retina, total retina minus GCC (outer retina)} was measured by macular scans (RS-3000, NIDEK) in 202 eyes of 202 normal Asian subjects aged 20 to 60 years. The analytical areas were defined by three concentric circles (1-, 3- and 6-mm nominal diameters) with or without magnification correction. For each layer thickness, a semipartial correlation (sr) was calculated for explanatory variables including age, gender, axial length, corneal curvature, and signal strength index. Results Outer retinal thickness was significantly thinner in females than in males (sr², 0.07 to 0.13) regardless of analytical areas or magnification correction. Without magnification correction, axial length had a significant positive sr with RNFL (sr², 0.12 to 0.33) and a negative sr with GCLIPL (sr², 0.22 to 0.31), GCC (sr², 0.03 to 0.17), total retina (sr², 0.07 to 0.17) and outer retina (sr², 0.16 to 0.29) in multiple analytical areas. The significant sr in RNFL, GCLIPL and GCC became mostly insignificant following magnification correction. Conclusions The strong correlation between the thickness of inner retinal layers and axial length appeared to result from magnification effects. Outer retinal thickness may differ by gender and axial length independently of magnification correction. PMID:26814541

  6. Evaluation of Technologies to Complement/Replace Mass Spectrometers in the Tritium Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tovo, L. L.; Lascola, R. J.; Spencer, W. A.

    2005-08-30

    The primary goal of this work is to determine the suitability of the Infraran sensor for use in the Palladium Membrane Reactor. This application presents a challenge for the sensor, since the process temperature exceeds its designed operating range. We have demonstrated that large baseline offsets, comparable to the sensor response to the analyte, are obtained if cool air is blown across the sensor. We have also shown that there is a strong environmental component to the noise. However, the current arrangement does not utilize a reference detector. The strong correlation between the CO and H₂O sensor responses to environmental changes indicates that a reference detector can greatly reduce the environmental sensitivity. In fact, incorporation of a reference detector is essential for the sensor to work in this application. We have also shown that the two sensor responses are adequately independent. Still, there are several small corrections which must be made to the sensor response to accommodate chemical and physical effects. Interactions between the two analytes will alter the relationship between number density and pressure. Temperature and pressure broadening will alter the relationship between absorbance and number density. The individual effects are small (on the order of a few percent or less) but cumulatively significant. Still, corrections may be made if temperature and total pressure are independently measured and incorporated into a post-analysis routine. Such corrections are easily programmed and automated and do not represent a significant burden for installation. The measurements and simulations described above indicate that, with appropriate corrections, the Infraran sensor can approach the 1-1.5% measurement accuracy required for effective PMR process control. It is also worth noting that the Infraran may be suitable for other gas sensing applications, especially those that do not need to be made in a high-temperature environment. Any gas with an infrared absorption (methane, ammonia, etc.) may be detected so long as an appropriate bandpass filter can be manufactured. Note that homonuclear diatomic molecules (hydrogen and its isotopes, nitrogen, oxygen) do not have infrared absorptions. We have shown that the sensor response may be adequately predicted using commercially available software. Measurement of trace concentrations is limited by the broad spectral bandpass, since the total signal includes non-absorbed frequencies. However, cells with longer pathlengths can be designed to address this problem.
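
    The post-analysis routine mentioned above can be sketched as follows; the calibration constant and broadening coefficients are placeholders (not values from the report), and only the ideal-gas relation p = n·k·T is taken as given.

      K_CAL = 2.0e-20        # absorbance per (molecule/cm^3), hypothetical calibration constant
      ALPHA_T = 1.0e-3       # fractional sensitivity change per K, hypothetical
      ALPHA_P = 5.0e-4       # fractional sensitivity change per kPa, hypothetical
      K_BOLTZ = 1.380649e-23 # Boltzmann constant, J/K

      def partial_pressure_pa(absorbance, temp_k, total_pressure_kpa,
                              temp_ref_k=296.0, pressure_ref_kpa=101.3):
          # correct the effective absorption coefficient for temperature and pressure broadening
          k_eff = K_CAL * (1.0 + ALPHA_T * (temp_k - temp_ref_k)
                               + ALPHA_P * (total_pressure_kpa - pressure_ref_kpa))
          number_density_cm3 = absorbance / k_eff        # molecules per cm^3
          n_m3 = number_density_cm3 * 1.0e6
          return n_m3 * K_BOLTZ * temp_k                 # ideal-gas relation p = n k T

      print(partial_pressure_pa(0.05, 350.0, 120.0))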

  7. Methods for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry

    DOEpatents

    Chan, George C. Y. [Bloomington, IN; Hieftje, Gary M [Bloomington, IN

    2010-08-03

    A method for detecting and correcting inaccurate results in inductively coupled plasma-atomic emission spectrometry (ICP-AES). ICP-AES analysis is performed across a plurality of selected locations in the plasma on an unknown sample, collecting the light intensity at one or more selected wavelengths of one or more sought-for analytes, creating a first dataset. The first dataset is then calibrated with a calibration dataset, creating a calibrated first dataset curve. If the calibrated first dataset curve varies with location in the plasma for a selected wavelength, errors are present. Plasma-related errors are then corrected by diluting the unknown sample and performing the same ICP-AES analysis on the diluted unknown sample, creating a calibrated second dataset curve (accounting for the dilution) for the one or more sought-for analytes. The cross-over point of the calibrated dataset curves yields the corrected value (free from plasma-related errors) for each sought-for analyte.
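
    A hedged sketch of the cross-over idea in this method: the two calibrated concentration-versus-plasma-location curves (original, and diluted but already rescaled by the dilution factor) are interpolated and their crossing point taken as the value free of plasma-related error. The data below are invented for illustration.

      import numpy as np

      location_mm = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
      conc_original = np.array([10.8, 10.4, 10.1, 9.7, 9.2])   # mg/L, apparent concentration vs location
      conc_diluted = np.array([9.0, 9.4, 9.8, 10.3, 10.9])     # mg/L, dilution already accounted for

      # find where the difference between the two curves changes sign and interpolate linearly
      diff = conc_original - conc_diluted
      i = np.where(np.diff(np.sign(diff)) != 0)[0][0]
      frac = diff[i] / (diff[i] - diff[i + 1])
      crossover_conc = conc_original[i] + frac * (conc_original[i + 1] - conc_original[i])
      print(round(crossover_conc, 2))   # corrected analyte concentration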

  8. $ANBA; a rapid, combined data acquisition and correction program for the SEMQ electron microprobe

    USGS Publications Warehouse

    McGee, James J.

    1983-01-01

    $ANBA is a program developed for rapid data acquisition and correction on an automated SEMQ electron microprobe. The program provides increased analytical speed and reduced disk read/write operations compared with the manufacturer's software, resulting in a doubling of analytical throughput. In addition, the program provides enhanced analytical features such as averaging, rapid and compact data storage, and on-line plotting. The program is described with design philosophy, flow charts, variable names, a complete program listing, and system requirements. A complete operating example and notes to assist in running the program are included.

  9. Determination of copper in tap water using solid-phase spectrophotometry

    NASA Technical Reports Server (NTRS)

    Hill, Carol M.; Street, Kenneth W.; Philipp, Warren H.; Tanner, Stephen P.

    1994-01-01

    A new application of ion exchange films is presented. The films are used in a simple analytical method of directly determining low concentrations of Cu(2+) in aqueous solutions, in particular, drinking water. The basis for this new test method is the color and absorption intensity of the ion when adsorbed onto the film. The film takes on the characteristic color of the adsorbed cation, which is concentrated on the film by many orders of magnitude. The linear relationship between absorbance (corrected for variations in film thickness) and solution concentration makes the determinations possible. These determinations agree well with flame atomic absorption determinations.
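
    The determination rests on a straight-line calibration of thickness-corrected film absorbance against standard concentration; a minimal sketch with illustrative numbers (not the study's data) follows.

      import numpy as np

      standards_mg_l = np.array([0.0, 0.5, 1.0, 2.0, 4.0])      # Cu(2+) standards
      absorbance = np.array([0.01, 0.11, 0.20, 0.42, 0.83])     # already corrected for film thickness
      slope, intercept = np.polyfit(standards_mg_l, absorbance, 1)

      def copper_concentration(corrected_absorbance):
          """Invert the linear calibration to get concentration in mg/L."""
          return (corrected_absorbance - intercept) / slope

      print(round(copper_concentration(0.30), 2))   # Cu concentration of a tap-water sample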

  10. A comparison of observed and analytically derived remote sensing penetration depths for turbid water

    NASA Technical Reports Server (NTRS)

    Morris, W. D.; Usry, J. W.; Witte, W. G.; Whitlock, C. H.; Guraus, E. A.

    1981-01-01

    The depth to which sunlight will penetrate in turbid waters was investigated. The tests were conducted in water spanning a range of single-scattering albedo values and over a range of solar elevation angles. Two different techniques were used to determine the depth of light penetration. The results showed little change in the depth of sunlight penetration with changing solar elevation angle. A comparison of the penetration depths indicates that the best agreement between the two methods was achieved when the quasi-single-scattering relationship was not corrected for solar angle. It is concluded that sunlight penetration depends on inherent water properties only.

  11. A Multivariate Distance-Based Analytic Framework for Connectome-Wide Association Studies

    PubMed Central

    Shehzad, Zarrar; Kelly, Clare; Reiss, Philip T.; Craddock, R. Cameron; Emerson, John W.; McMahon, Katie; Copland, David A.; Castellanos, F. Xavier; Milham, Michael P.

    2014-01-01

    The identification of phenotypic associations in high-dimensional brain connectivity data represents the next frontier in the neuroimaging connectomics era. Exploration of brain-phenotype relationships remains limited by statistical approaches that are computationally intensive, depend on a priori hypotheses, or require stringent correction for multiple comparisons. Here, we propose a computationally efficient, data-driven technique for connectome-wide association studies (CWAS) that provides a comprehensive voxel-wise survey of brain-behavior relationships across the connectome; the approach identifies voxels whose whole-brain connectivity patterns vary significantly with a phenotypic variable. Using resting state fMRI data, we demonstrate the utility of our analytic framework by identifying significant connectivity-phenotype relationships for full-scale IQ and assessing their overlap with existent neuroimaging findings, as synthesized by openly available automated meta-analysis (www.neurosynth.org). The results appeared to be robust to the removal of nuisance covariates (i.e., mean connectivity, global signal, and motion) and varying brain resolution (i.e., voxelwise results are highly similar to results using 800 parcellations). We show that CWAS findings can be used to guide subsequent seed-based correlation analyses. Finally, we demonstrate the applicability of the approach by examining CWAS for three additional datasets, each encompassing a distinct phenotypic variable: neurotypical development, Attention-Deficit/Hyperactivity Disorder diagnostic status, and L-dopa pharmacological manipulation. For each phenotype, our approach to CWAS identified distinct connectome-wide association profiles, not previously attainable in a single study utilizing traditional univariate approaches. As a computationally efficient, extensible, and scalable method, our CWAS framework can accelerate the discovery of brain-behavior relationships in the connectome. PMID:24583255
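
    For orientation, the sketch below shows a distance-based (MDMR-style) pseudo-F statistic for a single voxel's connectivity map, the kind of test such a CWAS framework applies connectome-wide; the data are random placeholders, and the published method adds permutation inference and nuisance regression.

      import numpy as np

      rng = np.random.default_rng(0)
      n_subj, n_voxels = 20, 500
      conn_maps = rng.standard_normal((n_subj, n_voxels))   # each row: one subject's whole-brain connectivity map
      phenotype = rng.standard_normal(n_subj)               # e.g., full-scale IQ (standardised)

      # distance matrix between subjects (1 - Pearson correlation of connectivity maps)
      D = 1.0 - np.corrcoef(conn_maps)

      # Gower-centred matrix and hat matrix for the design [intercept, phenotype]
      n = n_subj
      J = np.eye(n) - np.ones((n, n)) / n
      G = -0.5 * J @ (D ** 2) @ J
      X = np.column_stack([np.ones(n), phenotype])
      H = X @ np.linalg.inv(X.T @ X) @ X.T

      m = X.shape[1]
      pseudo_f = (np.trace(H @ G @ H) / (m - 1)) / (np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H)) / (n - m))
      print(round(pseudo_f, 3))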

  12. Strontium isotope measurement of basaltic glasses by laser ablation multiple collector inductively coupled plasma mass spectrometry based on a linear relationship between analytical bias and Rb/Sr ratios.

    PubMed

    Zhang, Le; Ren, Zhong-Yuan; Wu, Ya-Dong; Li, Nan

    2018-01-30

    In situ strontium (Sr) isotope analysis of geological samples by laser ablation multiple collector inductively coupled plasma mass spectrometry (LA-MC-ICP-MS) provides useful information about magma mixing, crustal contamination and crystal residence time. Without chemical separation, many kinds of interfering ions (such as Rb⁺ and Kr⁺) appear in the Sr isotope spectrum during laser ablation Sr isotope analysis. Most previous in situ Sr isotope studies only focused on Sr-enriched minerals (e.g. plagioclase, calcite). Here we established a simple method for in situ Sr isotope analysis of basaltic glass with Rb/Sr ratio less than 0.14 by LA-MC-ICP-MS. Seven Faraday cups, on a Neptune Plus MC-ICP-MS instrument, were used to receive the signals on m/z 82, 83, 84, 85, 86, 87 and 88 simultaneously for the Sr isotope analysis of basaltic glass. The isobaric interference of ⁸⁷Rb was corrected by the peak stripping method. The instrumental mass fractionation of ⁸⁷Sr/⁸⁶Sr was corrected to ⁸⁶Sr/⁸⁸Sr = 0.1194 with an exponential law. Finally, the residual analytical biases of ⁸⁷Sr/⁸⁶Sr were corrected with a relationship between the deviation of ⁸⁷Sr/⁸⁶Sr from the reference values and the measured ⁸⁷Rb/⁸⁶Sr. The validity of the protocol presented here was demonstrated by measuring the Sr isotopes of four basaltic glasses, a plagioclase crystal and a piece of modern coral. The measured ⁸⁷Sr/⁸⁶Sr ratios of all these samples agree within 100 ppm with the reference values. In addition, the Sr isotopes of olivine-hosted melt inclusions from the Emeishan large igneous province (LIP) were measured to show the application of our method to real geological samples. A simple but accurate approach for in situ Sr isotope measurement by LA-MC-ICP-MS has been established, which should greatly facilitate the wider application of in situ Sr isotope geochemistry, especially to volcanic rock studies. Copyright © 2017 John Wiley & Sons, Ltd.
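
    The correction chain above (peak stripping of ⁸⁷Rb, exponential-law mass-bias correction to ⁸⁶Sr/⁸⁸Sr = 0.1194, and a linear residual-bias correction against measured ⁸⁷Rb/⁸⁶Sr) can be sketched as below. Isotope masses and the natural ⁸⁷Rb/⁸⁵Rb ratio are standard values; the residual-bias slope is a placeholder that would be calibrated against reference materials, not a value from the paper.

      import math

      M85RB, M87RB = 84.9118, 86.9092
      M86SR, M87SR, M88SR = 85.9093, 86.9089, 87.9056
      RB87_85_NATURAL = 0.3857
      SR86_88_CANONICAL = 0.1194
      RESIDUAL_SLOPE = 0.0005     # hypothetical: d(87Sr/86Sr bias)/d(87Rb/86Sr)

      def sr87_86(i85, i86, i87_total, i88):
          # mass-fractionation factor from the invariant 86Sr/88Sr ratio (exponential law)
          f = math.log((i86 / i88) / SR86_88_CANONICAL) / math.log(M86SR / M88SR)
          # strip the 87Rb isobar from mass 87 using the measured 85Rb intensity
          i87_rb = i85 * RB87_85_NATURAL * (M87RB / M85RB) ** f
          i87_sr = i87_total - i87_rb
          ratio = (i87_sr / i86) / ((M87SR / M86SR) ** f)
          # empirical residual correction, linear in the measured 87Rb/86Sr
          return ratio - RESIDUAL_SLOPE * (i87_rb / i86)

      print(sr87_86(i85=0.002, i86=1.0, i87_total=0.7125, i88=8.4))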

  13. A comparison of two indices for the intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2012-12-01

    In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in their performance of bias and mean squared error in the estimation of the intraclass correlation coefficient. In contrast with the natural unbiased principle used to construct ICC(1), in the present study it was analytically shown that the corrected eta-squared estimator is identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed the mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.

  14. Relationship Between Cybernetics Management and Organizational Trust Among Librarians of Mazandaran University of Medical Sciences.

    PubMed

    Ghiasi, Mitra; Shahrabi, Afsaneh; Siamian, Hasan

    2017-12-01

    Organizations must maintain their current skills and abilities and, in a competitive environment, stay one step ahead of other competitors; for this purpose, a high degree of trust inside the organization is required. Cybernetic management is a new approach to the management of organizations whose main task concerns internal issues. This study aimed to investigate the relationship between cybernetic management and organizational trust among librarians of Mazandaran University of Medical Sciences. This applied, analytical survey included all librarians of Mazandaran University of Medical Sciences, amounting to 42 people, who were selected by census and participated in the research. The Kolmogorov-Smirnov test and linear regression were used for data analysis. No relationship was found between the components of cybernetic management (participative decision making, commitment, pay equity, correct flow of information, developing a sense of ownership, online education) and organizational trust among the librarians, whereas a significant relationship was found between the flat-structure component of cybernetic management and organizational trust. Overall, there is no significant relationship between cybernetic management and organizational trust among librarians of Mazandaran University of Medical Sciences.

  15. Relationship Between Cybernetics Management and Organizational Trust Among Librarians of Mazandaran University of Medical Sciences

    PubMed Central

    Ghiasi, Mitra; Shahrabi, Afsaneh; Siamian, Hasan

    2017-01-01

    Background and purpose: Organizations must maintain their current skills and abilities and, in a competitive environment, stay one step ahead of other competitors; for this purpose, a high degree of trust inside the organization is required. Cybernetic management is a new approach to the management of organizations whose main task concerns internal issues. This study aimed to investigate the relationship between cybernetic management and organizational trust among librarians of Mazandaran University of Medical Sciences. Materials and methods: This applied, analytical survey included all librarians of Mazandaran University of Medical Sciences, amounting to 42 people, who were selected by census and participated in the research. The Kolmogorov-Smirnov test and linear regression were used for data analysis. Results: No relationship was found between the components of cybernetic management (participative decision making, commitment, pay equity, correct flow of information, developing a sense of ownership, online education) and organizational trust among the librarians, whereas a significant relationship was found between the flat-structure component of cybernetic management and organizational trust. Conclusion: There is no significant relationship between cybernetic management and organizational trust among librarians of Mazandaran University of Medical Sciences. PMID:29284914

  16. 7 CFR 275.3 - Federal monitoring.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... necessitate long range analytical and evaluative measures for corrective action development shall be... effective. In addition, FNS will examine the State agency's corrective action monitoring and evaluative...

  17. 7 CFR 275.3 - Federal monitoring.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... necessitate long range analytical and evaluative measures for corrective action development shall be... effective. In addition, FNS will examine the State agency's corrective action monitoring and evaluative...

  18. 7 CFR 275.3 - Federal monitoring.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... necessitate long range analytical and evaluative measures for corrective action development shall be... effective. In addition, FNS will examine the State agency's corrective action monitoring and evaluative...

  19. 7 CFR 275.3 - Federal monitoring.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... necessitate long range analytical and evaluative measures for corrective action development shall be... effective. In addition, FNS will examine the State agency's corrective action monitoring and evaluative...

  20. 7 CFR 275.3 - Federal monitoring.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... necessitate long range analytical and evaluative measures for corrective action development shall be... effective. In addition, FNS will examine the State agency's corrective action monitoring and evaluative...

  1. From bead to rod: Comparison of theories by measuring translational drag coefficients of micron-sized magnetic bead-chains in Stokes flow

    PubMed Central

    Lu, Chen; Zhao, Xiaodan; Kawamura, Ryo

    2017-01-01

    The frictional drag force on an object in Stokes flow is linear in the translation velocity, with the proportionality given by a translational drag coefficient. This drag coefficient is related to the size, shape, and orientation of the object. For rod-like objects, analytical solutions for the drag coefficients have been proposed based on three rough approximations of the rod geometry, namely the bead model, ellipsoid model, and cylinder model. These theories all agree that the translational drag coefficients of rod-like objects are functions of the rod length and aspect ratio, but they differ from one another in the correction factor terms in the equations. By tracking the displacement of particles through stationary fluids of calibrated viscosity in a magnetic tweezers setup, we experimentally measured the drag coefficients of micron-sized beads and their bead-chain formations with chain lengths of 2 to 27. We verified our methodology against the analytical solutions for dimers of two touching beads, and compared our measured drag coefficient values for rod-like objects with theoretical calculations. Our comparison reveals which analytical solutions used more appropriate approximations and derived formulae that agree better with our measurements. PMID:29145447

  2. Software Compensates Electronic-Nose Readings for Humidity

    NASA Technical Reports Server (NTRS)

    Zhou, Hanying

    2007-01-01

    A computer program corrects for the effects of humidity on the readouts of an array of chemical sensors (an "electronic nose"). To enable the use of this program, the array must incorporate an independent humidity sensor in addition to sensors designed to detect analytes other than water vapor. The basic principle of the program was described in "Compensating for Effects of Humidity on Electronic Noses" (NPO-30615), NASA Tech Briefs, Vol. 28, No. 6 (June 2004), page 63. To recapitulate: The output of the humidity sensor is used to generate values that are subtracted from the outputs of the other sensors to correct for contributions of humidity to those readings. Hence, in principle, what remains after corrections are the contributions of the analytes only. The outputs of the non-humidity sensors are then deconvolved to obtain the concentrations of the analytes. In addition, the humidity reading is retained as an analyte reading in its own right. This subtraction of the humidity background increases the ability of the software to identify such events as spills in which contaminants may be present in small concentrations and accompanied by large changes in humidity.
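
    A minimal sketch of the compensation scheme, with made-up sensitivities: each non-humidity sensor's humidity gain (assumed to have been characterized beforehand) is scaled by the humidity-sensor reading and subtracted, and the corrected responses are then deconvolved by least squares against a hypothetical sensitivity matrix.

      import numpy as np

      humidity_gain = np.array([0.40, 0.15, 0.25])        # response of each non-humidity sensor per unit humidity signal
      sensitivity = np.array([[1.2, 0.1],                 # rows: sensors, columns: analytes (made-up values)
                              [0.2, 0.9],
                              [0.6, 0.5]])

      def analyte_concentrations(raw, humidity_reading):
          corrected = raw - humidity_gain * humidity_reading        # remove the humidity background
          conc, *_ = np.linalg.lstsq(sensitivity, corrected, rcond=None)
          return conc

      print(analyte_concentrations(np.array([1.05, 0.70, 0.95]), humidity_reading=0.8))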

  3. Space-charge-limited currents for cathodes with electric field enhanced geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lai, Dingguo, E-mail: laidingguo@nint.ac.cn; Qiu, Mengtong; Xu, Qifu

    This paper presents approximate analytic solutions of the current density for annulus and circle cathodes. The current densities are derived approximately from first principles and are in agreement with simulation results. The resulting scaling laws can predict current densities of high-current vacuum diodes with annulus and circle cathodes in practical applications. To discuss the relationship between current density and the electric field on the cathode surface, the existing analytical solutions for concentric-cylinder and sphere diodes are refitted in terms of electric field enhancement factors. It is found that the space-charge-limited current density for a cathode with field-enhancing geometry can be written in the general form J = g(β_E)²J_0, where J_0 is the classical (1D) Child-Langmuir current density, β_E is the electric field enhancement factor, and g is a geometrical correction factor depending on the cathode geometry.
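
    The quoted scaling can be evaluated directly once β_E and g are known for a given cathode; in the sketch below both are placeholders, and only the classical one-dimensional Child-Langmuir expression is taken as given.

      import math

      EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
      E_CHARGE = 1.602176634e-19  # electron charge, C
      M_ELECTRON = 9.1093837015e-31

      def child_langmuir_j0(voltage_v, gap_m):
          """Classical 1D Child-Langmuir current density, A/m^2."""
          return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) * voltage_v**1.5 / gap_m**2

      def enhanced_current_density(voltage_v, gap_m, beta_e, g_geom):
          """J = g * beta_E^2 * J0, with g and beta_E supplied for the cathode geometry."""
          return g_geom * beta_e**2 * child_langmuir_j0(voltage_v, gap_m)

      # e.g. 500 kV across a 2 cm gap, with illustrative beta_E = 3 and g = 0.5
      print(enhanced_current_density(5.0e5, 0.02, beta_e=3.0, g_geom=0.5))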

  4. Analytical and numerical analysis of frictional damage in quasi brittle materials

    NASA Astrophysics Data System (ADS)

    Zhu, Q. Z.; Zhao, L. Y.; Shao, J. F.

    2016-07-01

    Frictional sliding and crack growth are two main dissipation processes in quasi brittle materials. The frictional sliding along closed cracks is the origin of macroscopic plastic deformation while the crack growth induces a material damage. The main difficulty of modeling is to consider the inherent coupling between these two processes. Various models and associated numerical algorithms have been proposed. But there are so far no analytical solutions even for simple loading paths for the validation of such algorithms. In this paper, we first present a micro-mechanical model taking into account the damage-friction coupling for a large class of quasi brittle materials. The model is formulated by combining a linear homogenization procedure with the Mori-Tanaka scheme and the irreversible thermodynamics framework. As an original contribution, a series of analytical solutions of stress-strain relations are developed for various loading paths. Based on the micro-mechanical model, two numerical integration algorithms are exploited. The first one involves a coupled friction/damage correction scheme, which is consistent with the coupling nature of the constitutive model. The second one contains a friction/damage decoupling scheme with two consecutive steps: the friction correction followed by the damage correction. With the analytical solutions as reference results, the two algorithms are assessed through a series of numerical tests. It is found that the decoupling correction scheme is efficient to guarantee a systematic numerical convergence.

  5. Application of Quantitative Analytical Electron Microscopy to the Mineral Content of Insect Cuticle

    NASA Astrophysics Data System (ADS)

    Rasch, Ron; Cribb, Bronwen W.; Barry, John; Palmer, Christopher M.

    2003-04-01

    Quantification of calcium in the cuticle of the fly larva Exeretonevra angustifrons was undertaken at the micron scale using wavelength dispersive X-ray microanalysis, analytical standards, and a full matrix correction. Calcium and phosphorus were found to be present in the exoskeleton in a ratio that indicates amorphous calcium phosphate. This was confirmed through electron diffraction of the calcium-containing tissue. Due to the pragmatic difficulties of measuring light elements, it is not uncommon in the field of entomology to neglect the use of matrix corrections when performing microanalysis of bulk insect specimens. To determine, first, whether such a strategy affects the outcome and, second, which matrix correction is preferable, φ(ρz) and ZAF matrix corrections were contrasted with each other and with uncorrected results. The best estimate of the mineral phase was found to be given by the φ(ρz) correction. When no correction was made, the ratio of Ca to P fell outside the range for amorphous calcium phosphate, possibly leading to flawed interpretation of the mineral form when used on its own.

  6. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    NASA Astrophysics Data System (ADS)

    Izacard, Olivier

    2016-08-01

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. The latter demystifies the Maxwell's demon by statistically describing non-isolated systems.

  7. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier, E-mail: izacard@llnl.gov

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. The latter demystifies the Maxwell's demon by statistically describing non-isolated systems.

  8. "Robust, replicable, and theoretically-grounded: A response to Brown and Coyne's (2017) commentary on the relationship between emodiversity and health": Correction to Quoidbach et al. (2018).

    PubMed

    2018-04-01

    Reports an error in "Robust, replicable, and theoretically-grounded: A response to Brown and Coyne's (2017) commentary on the relationship between emodiversity and health" by Jordi Quoidbach, Moïra Mikolajczak, June Gruber, Ilios Kotsou, Aleksandr Kogan and Michael I. Norton ( Journal of Experimental Psychology: General , 2018[Mar], Vol 147[3], 451-458). In the article, there is an error in the byline for the first author due to a printer error. The complete, correct institutional affiliation for Jordi Quoidbach is ESADE Business School, Ramon Llull University. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2018-06787-002.) In 2014 in the Journal of Experimental Psychology: General , we reported 2 studies demonstrating that the diversity of emotions that people experience-as measured by the Shannon-Wiener entropy index-was an independent predictor of mental and physical health, over and above the effect of mean levels of emotion. Brown and Coyne (2017) questioned both our use of Shannon's entropy and our analytic approach. We thank Brown and Coyne for their interest in our research; however, both their theoretical and empirical critiques do not undermine the central theoretical tenets and empirical findings of our research. We present an in-depth examination that reveals that our findings are statistically robust, replicable, and reflect a theoretically grounded phenomenon with real-world implications. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
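
    The emodiversity measure at issue is a Shannon-Wiener entropy over the relative frequencies of the emotions a person reports; a minimal sketch with invented counts follows.

      import math

      def emodiversity(counts):
          """Shannon-Wiener entropy of the relative frequencies of reported emotions."""
          total = sum(counts)
          probs = [c / total for c in counts if c > 0]
          return -sum(p * math.log(p) for p in probs)

      print(round(emodiversity([10, 3, 5, 1, 0, 7]), 3))   # higher value = more diverse emotional experience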

  9. Investigating the Effects of Motion Streaks on pQCT-Derived Leg Muscle Density and Its Association With Fractures.

    PubMed

    Chan, Adrian C H; Adachi, Jonathan D; Papaioannou, Alexandra; Wong, Andy Kin On

    Lower peripheral quantitative computed tomography (pQCT)-derived leg muscle density has been associated with fragility fractures in postmenopausal women. Limb movement during image acquisition may result in motion streaks in muscle that could dilute this relationship. This cross-sectional study examined a subset of women from the Canadian Multicentre Osteoporosis Study. pQCT leg scans were qualitatively graded (1-5) for motion severity. Muscle and motion streak were segmented using semi-automated (watershed) and fully automated (threshold-based) methods, computing area and density. Binary logistic regression evaluated odds ratios (ORs) for fragility or all-cause fractures related to each of these measures with covariate adjustment. Among the 223 women examined (mean age: 72.7 ± 7.1 years, body mass index: 26.30 ± 4.97 kg/m²), muscle density was significantly lower after removing motion (p < 0.001) for both methods. Motion streak areas segmented using the semi-automated method correlated better with visual motion grades (rho = 0.90, p < 0.01) compared to the fully automated method (rho = 0.65, p < 0.01). Although the analysis-reanalysis precision of motion streak area segmentation using the semi-automated method is above 5% error (6.44%), motion-corrected muscle density measures remained well within 2% analytical error. The effect of motion-correction on strengthening the association between muscle density and fragility fractures was significant when motion grade was ≥3 (p for interaction < 0.05). This observation was most dramatic for the semi-automated algorithm (OR: 1.62 [0.82,3.17] before to 2.19 [1.05,4.59] after correction). Although muscle density showed an overall association with all-cause fractures (OR: 1.49 [1.05,2.12]), the effect of motion-correction was again most impactful within individuals with scans showing grade 3 or above motion. Correcting for motion in pQCT leg scans strengthened the relationship between muscle density and fragility fractures, particularly in scans with motion grades of 3 or above. Motion streaks are not confounders to the relationship between pQCT-derived leg muscle density and fractures, but may introduce heterogeneity in muscle density measurements, rendering associations with fractures weaker. Copyright © 2016. Published by Elsevier Inc.

  10. Automatic-heuristic and executive-analytic processing during reasoning: Chronometric and dual-task considerations.

    PubMed

    De Neys, Wim

    2006-06-01

    Human reasoning has been shown to overly rely on intuitive, heuristic processing instead of a more demanding analytic inference process. Four experiments tested the central claim of current dual-process theories that analytic operations involve time-consuming executive processing whereas the heuristic system would operate automatically. Participants solved conjunction fallacy problems and indicative and deontic selection tasks. Experiment 1 established that making correct analytic inferences demanded more processing time than did making heuristic inferences. Experiment 2 showed that burdening the executive resources with an attention-demanding secondary task decreased correct, analytic responding and boosted the rate of conjunction fallacies and indicative matching card selections. Results were replicated in Experiments 3 and 4 with a different secondary-task procedure. Involvement of executive resources for the deontic selection task was less clear. Findings validate basic processing assumptions of the dual-process framework and complete the correlational research programme of K. E. Stanovich and R. F. West (2000).

  11. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to select three quasi-linear expressions (quadratic, cubic, and growth-curve) to replace the original linear expression in the MSC method. A local weighted function is then constructed from four kernel functions: Gaussian, Epanechnikov, biweight, and triweight. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately at each wavelength point. Two analytical models, based on PLS and on a PCA-BP neural network, were established to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise levels when the sample is prepared at different particle sizes. To validate the method, correction results were analyzed for three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the spectral peak information. The method significantly enhances the correlation between the corrected spectra and coal quality parameters and substantially improves the accuracy and stability of the analytical model.
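
    For orientation, the sketch below shows only the baseline linear MSC step that the proposed method generalizes with quasi-linear expressions and kernel weights; it is not the paper's full algorithm, and the demo spectra are invented.

      import numpy as np

      def msc(spectra):
          """Classic multiplicative scatter correction: spectra is an (n_samples, n_wavelengths) array."""
          reference = spectra.mean(axis=0)
          corrected = np.empty_like(spectra, dtype=float)
          for i, x in enumerate(spectra):
              slope, intercept = np.polyfit(reference, x, 1)   # x ~ intercept + slope * reference
              corrected[i] = (x - intercept) / slope
          return corrected

      demo = np.array([[0.52, 0.61, 0.75, 0.66],
                       [0.41, 0.49, 0.60, 0.53]])
      print(msc(demo))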

  12. Correction for isotopic interferences between analyte and internal standard in quantitative mass spectrometry by a nonlinear calibration function.

    PubMed

    Rule, Geoffrey S; Clark, Zlatuse D; Yue, Bingfang; Rockwood, Alan L

    2013-04-16

    Stable isotope-labeled internal standards are of great utility in providing accurate quantitation in mass spectrometry (MS). An implicit assumption has been that there is no "cross talk" between signals of the internal standard and the target analyte. In some cases, however, naturally occurring isotopes of the analyte do contribute to the signal of the internal standard. This phenomenon becomes more pronounced for isotopically rich compounds, such as those containing sulfur, chlorine, or bromine, higher molecular weight compounds, and those at high analyte/internal standard concentration ratio. This can create nonlinear calibration behavior that may bias quantitative results. Here, we propose the use of a nonlinear but more accurate fitting of data for these situations that incorporates one or two constants determined experimentally for each analyte/internal standard combination and an adjustable calibration parameter. This fitting provides more accurate quantitation in MS-based assays where contributions from analyte to stable labeled internal standard signal exist. It can also correct for the reverse situation where an analyte is present in the internal standard as an impurity. The practical utility of this approach is described, and by using experimental data, the approach is compared to alternative fits.
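
    A hedged sketch of the idea: if a known fraction eps of the analyte signal spills into the internal-standard channel, the response-ratio calibration becomes hyperbolic, y = a*x / (1 + eps*a*x), with eps measured once per analyte/internal-standard pair and a single adjustable parameter a estimated from standards. All numbers below are illustrative, and the exact functional form used in the paper may differ.

      import numpy as np

      eps = 0.05                                            # measured analyte contribution to the IS channel
      x_std = np.array([0.5, 1.0, 2.0, 5.0, 10.0])          # analyte concentrations of calibration standards
      y_std = np.array([0.245, 0.488, 0.953, 2.22, 4.00])   # observed analyte/IS signal ratios

      # estimate the adjustable parameter a from each standard (linearised form) and average
      a = np.mean(1.0 / (x_std * (1.0 / y_std - eps)))

      def concentration(y_ratio):
          """Invert the hyperbolic calibration for an unknown sample."""
          return y_ratio / (a * (1.0 - eps * y_ratio))

      print(round(concentration(1.5), 2))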

  13. Ab Initio and Analytic Intermolecular Potentials for Ar-CF₄

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vayner, Grigoriy; Alexeev, Yuri; Wang, Jiangping

    2006-03-09

    Ab initio calculations at the CCSD(T) level of theory are performed to characterize the Ar + CF₄ intermolecular potential. Extensive calculations, with and without a correction for basis set superposition error (BSSE), are performed with the cc-pVTZ basis set. Additional calculations are performed with other correlation consistent (cc) basis sets to extrapolate the Ar---CF₄ potential energy minimum to the complete basis set (CBS) limit. Both the size of the basis set and BSSE have substantial effects on the Ar + CF₄ potential. Calculations with the cc-pVTZ basis set and without a BSSE correction appear to give a good representation of the potential at the CBS limit and with a BSSE correction. In addition, MP2 theory is found to give potential energies in very good agreement with those determined by the much higher level CCSD(T) theory. Two analytic potential energy functions were determined for Ar + CF₄ by fitting the cc-pVTZ calculations both with and without a BSSE correction. These analytic functions were written as a sum of two body potentials and excellent fits to the ab initio potentials were obtained by representing each two body interaction as a Buckingham potential.

  14. Attachment history as a moderator of the alliance outcome relationship in adolescents.

    PubMed

    Zack, Sanno E; Castonguay, Louis G; Boswell, James F; McAleavey, Andrew A; Adelman, Robert; Kraus, David R; Pate, George A

    2015-06-01

    The role of the alliance in predicting treatment outcome is robust and long established. However, much less attention has been paid to mechanisms of change, including moderators, particularly for youth. This study examined the moderating role of pretreatment adolescent-caregiver attachment and its impact on the working alliance-treatment outcome relationship. One hundred adolescents and young adults with primary substance dependence disorders were treated at a residential facility, with a cognitive-behavioral emphasis. The working alliance and clinical symptoms were measured at regular intervals throughout treatment. A moderator hypothesis was tested using a path analytic approach. Findings suggested that attachment to the primary caregiver moderated the impact of the working alliance on treatment outcome, such that for youth with the poorest attachment history, working alliance had a stronger relationship with outcome. Conversely, for those with the strongest attachment histories, alliance was not a significant predictor of symptom reduction. This finding may help elucidate alliance-related mechanisms of change, lending support for theories of corrective emotional experience as one function of the working alliance in youth psychotherapy. (c) 2015 APA, all rights reserved).

  15. Systems 1 and 2 thinking processes and cognitive reflection testing in medical students

    PubMed Central

    Tay, Shu Wen; Ryan, Paul; Ryan, C Anthony

    2016-01-01

    Background Diagnostic decision-making is made through a combination of Systems 1 (intuition or pattern-recognition) and Systems 2 (analytic) thinking. The purpose of this study was to use the Cognitive Reflection Test (CRT) to evaluate and compare the level of Systems 1 and 2 thinking among medical students in pre-clinical and clinical programs. Methods The CRT is a three-question test designed to measure the ability of respondents to activate metacognitive processes and switch to System 2 (analytic) thinking where System 1 (intuitive) thinking would lead them astray. Each CRT question has a correct analytical (System 2) answer and an incorrect intuitive (System 1) answer. A group of medical students in Years 2 & 3 (pre-clinical) and Year 4 (in clinical practice) of a 5-year medical degree were studied. Results Ten percent (13/128) of students had the intuitive answers to the three questions (suggesting they generally relied on System 1 thinking) while almost half (44%) answered all three correctly (indicating full analytical, System 2 thinking). Only 3–13% had incorrect answers (i.e. that were neither the analytical nor the intuitive responses). Non-native English-speaking students (n = 11) had a lower mean number of correct answers compared to native English speakers (n = 117; 1.0 vs. 2.12, respectively; p < 0.01). As students progressed through questions 1 to 3, the percentage of correct System 2 answers increased and the percentage of intuitive answers decreased in both the pre-clinical and clinical students. Conclusions Up to half of the medical students demonstrated full or partial reliance on System 1 (intuitive) thinking in response to these analytical questions. While their CRT performance makes no claims as to their future expertise as clinicians, the test may be used in helping students to understand the importance of awareness and regulation of their thinking processes in clinical practice. PMID:28344696

  16. Systems 1 and 2 thinking processes and cognitive reflection testing in medical students.

    PubMed

    Tay, Shu Wen; Ryan, Paul; Ryan, C Anthony

    2016-10-01

    Diagnostic decision-making is made through a combination of Systems 1 (intuition or pattern-recognition) and Systems 2 (analytic) thinking. The purpose of this study was to use the Cognitive Reflection Test (CRT) to evaluate and compare the level of Systems 1 and 2 thinking among medical students in pre-clinical and clinical programs. The CRT is a three-question test designed to measure the ability of respondents to activate metacognitive processes and switch to System 2 (analytic) thinking where System 1 (intuitive) thinking would lead them astray. Each CRT question has a correct analytical (System 2) answer and an incorrect intuitive (System 1) answer. A group of medical students in Years 2 & 3 (pre-clinical) and Year 4 (in clinical practice) of a 5-year medical degree were studied. Ten percent (13/128) of students had the intuitive answers to the three questions (suggesting they generally relied on System 1 thinking) while almost half (44%) answered all three correctly (indicating full analytical, System 2 thinking). Only 3-13% had incorrect answers (i.e. that were neither the analytical nor the intuitive responses). Non-native English-speaking students (n = 11) had a lower mean number of correct answers compared to native English speakers (n = 117; 1.0 vs. 2.12, respectively; p < 0.01). As students progressed through questions 1 to 3, the percentage of correct System 2 answers increased and the percentage of intuitive answers decreased in both the pre-clinical and clinical students. Up to half of the medical students demonstrated full or partial reliance on System 1 (intuitive) thinking in response to these analytical questions. While their CRT performance makes no claims as to their future expertise as clinicians, the test may be used in helping students to understand the importance of awareness and regulation of their thinking processes in clinical practice.

  17. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94 %, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates as those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze a large amount of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.
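
    A minimal sketch of the baseline-correction idea, under the assumption that background subregions have already been identified, is given below; the function name and smoothing-parameter handling are hypothetical and do not reproduce the authors' full protocol (which selects the smoothing parameter from ambient- and blank-sample performance metrics).

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_baseline_correct(wavenumber, absorbance, background_mask, s=None):
    """Fit a smoothing spline to the PTFE-dominated background subregions
    only, predict the baseline across the full spectrum (including analyte
    subregions), and subtract it.

    wavenumber:      1-D array, sorted in ascending order.
    background_mask: boolean array marking points treated as background.
    s:               smoothing parameter (here simply left to the caller).
    """
    spline = UnivariateSpline(wavenumber[background_mask],
                              absorbance[background_mask], s=s)
    baseline = spline(wavenumber)
    return absorbance - baseline

# Example usage with a hypothetical spectrum `ab` on axis `wn`:
# corrected = spline_baseline_correct(wn, ab, (wn < 1400) | (wn > 3700), s=0.05)
```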

  18. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
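
    The contrast between the two correction strategies can be sketched as follows; the covariates, data values, and adjustment to a reference creatinine level are illustrative assumptions, not the models fitted in the study.

```python
import numpy as np

# Illustrative data: urinary analyte (ng/mL), urinary creatinine (g/L),
# and covariates; all values are made up for demonstration only.
analyte    = np.array([12.0, 30.5, 8.2, 22.1, 15.3])
creatinine = np.array([0.9, 1.8, 0.6, 1.4, 1.1])
age        = np.array([34, 51, 27, 44, 62], dtype=float)
male       = np.array([1, 0, 1, 0, 1], dtype=float)

# Ratio-based correction: analyte per gram of creatinine
ratio_corrected = analyte / creatinine

# Model-based correction: regress log(analyte) on log(creatinine) plus
# covariates thought to influence creatinine excretion, then use the
# creatinine coefficient only to adjust for hydration.
X = np.column_stack([np.ones_like(analyte), np.log(creatinine), age, male])
y = np.log(analyte)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Adjust each observation to a common reference creatinine level
ref_creatinine = creatinine.mean()
model_corrected = np.exp(y - beta[1] * (np.log(creatinine) - np.log(ref_creatinine)))

print("ratio-corrected :", ratio_corrected)
print("model-corrected :", model_corrected)
```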

  19. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
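
    For reference, the ideal van der Pauw relation that such tabulated correction factors modify can be solved numerically as in the sketch below; the correction factor shown is a placeholder, not a value from the article's library.

```python
import numpy as np
from scipy.optimize import brentq

def sheet_resistance_vdp(R_A, R_B):
    """Solve the ideal van der Pauw equation
       exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1
    for the sheet resistance R_s (ohms per square)."""
    f = lambda R_s: np.exp(-np.pi * R_A / R_s) + np.exp(-np.pi * R_B / R_s) - 1.0
    lo = 1e-6 * max(R_A, R_B)   # f(lo) ~ -1
    hi = 1e6 * max(R_A, R_B)    # f(hi) ~ +1
    return brentq(f, lo, hi)

# Example: resistances measured in the two van der Pauw configurations
R_s_ideal = sheet_resistance_vdp(R_A=105.0, R_B=98.0)

# A finite-contact / finite-thickness correction factor would then be
# applied (placeholder value, not taken from the article).
correction_factor = 0.97
print("corrected sheet resistance:", R_s_ideal * correction_factor)
```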

  20. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  1. (U) An Analytic Examination of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-02

    Ongoing efforts to validate a Richtmyer-Meshkov instability (RMI) based ejecta source model [1, 2, 3] in LANL ASC codes use ejecta areal masses derived from piezoelectric sensor data [4, 5, 6]. However, the standard technique for inferring masses from sensor voltages implicitly assumes instantaneous ejecta creation [7], which is not a feature of the RMI source model. To investigate the impact of this discrepancy, we define separate “areal mass functions” (AMFs) at the source and sensor in terms of typically unknown distribution functions for the ejecta particles, and derive an analytic relationship between them. Then, for the case of single-shock ejection into vacuum, we use the AMFs to compare the analytic (or “true”) accumulated mass at the sensor with the value that would be inferred from piezoelectric voltage measurements. We confirm the inferred mass is correct when creation is instantaneous, and furthermore prove that when creation is not instantaneous, the inferred values will always overestimate the true mass. Finally, we derive an upper bound for the error imposed on a perfect system by the assumption of instantaneous ejecta creation. When applied to shots in the published literature, this bound is frequently less than several percent. Errors exceeding 15% may require velocities or timescales at odds with experimental observations.

  2. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.

  3. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet’s orbit under gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system’s parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters and limits of their applicability are given.

  4. Gluon fragmentation into quarkonium at next-to-leading order

    DOE PAGES

    Artoisenet, Pierre; Braaten, Eric

    2015-04-22

    Here, we present the first calculation at next-to-leading order (NLO) in α_s of a fragmentation function into quarkonium whose form at leading order is a nontrivial function of z, namely the fragmentation function for a gluon into a spin-singlet S-wave state at leading order in the relative velocity. To calculate the real NLO corrections, we introduce a new subtraction scheme that allows the phase-space integrals to be evaluated in 4 dimensions. We extract all ultraviolet and infrared divergences in the real NLO corrections analytically by calculating the phase-space integrals of the subtraction terms in 4 − 2ϵ dimensions. We also extract the divergences in the virtual NLO corrections analytically, and detail the cancellation of all divergences after renormalization. The NLO corrections have a dramatic effect on the shape of the fragmentation function, and they significantly increase the fragmentation probability.

  5. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
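
    A minimal sketch of the correction logic, assuming purely linear sensitivities to the parameter deviations, is shown below; the coefficient values are placeholders rather than the relations derived in the paper.

```python
# Surface temperature = measured brightness temperature + base-atmosphere
# correction + linear terms for deviations of atmospheric/surface parameters
# from their base-model values (illustrative sensitivities only).
def surface_temperature(T_brightness, base_correction, deviations, sensitivities):
    """deviations and sensitivities are dicts keyed by parameter name."""
    dT = sum(sensitivities[p] * deviations[p] for p in deviations)
    return T_brightness + base_correction + dT

T_s = surface_temperature(
    T_brightness=291.4,          # K, measured effective brightness temperature
    base_correction=2.1,         # K, from the base-atmosphere radiative model
    deviations={'water_vapor': 0.3, 'surface_emissivity': -0.01},
    sensitivities={'water_vapor': 1.5, 'surface_emissivity': -40.0},
)
print(T_s)
```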

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
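
    A toy, unconstrained illustration of a prediction-correction scheme for a time-varying problem is sketched below; it uses a scalar quadratic cost and a finite-difference prediction of the gradient drift, and is not the constrained first-order algorithm analyzed in the paper.

```python
import numpy as np

# Track the minimizer of the time-varying cost f(x, t) = 0.5*(x - sin(t))^2
# with a prediction step (compensating the temporal drift of the gradient)
# followed by a few gradient-descent correction steps.
def reference(t):
    return np.sin(t)

def grad(x, t):
    return x - reference(t)   # d f / d x; the Hessian is 1 here

dt, alpha, n_corr = 0.1, 0.5, 3
x = 0.0
for k in range(100):
    t_now, t_next = k * dt, (k + 1) * dt
    # Prediction: estimate the time-derivative of the gradient by a finite
    # difference and move x to cancel it (Hessian inverse is trivially 1).
    time_drift = (grad(x, t_next) - grad(x, t_now)) / dt
    x = x - dt * time_drift
    # Correction: gradient steps on the new cost f(., t_next)
    for _ in range(n_corr):
        x = x - alpha * grad(x, t_next)

print("tracked x:", x, "true minimizer:", reference(100 * dt))
```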

  7. Analytical relationships for prediction of the mechanical properties of additively manufactured porous biomaterials

    PubMed Central

    Hedayati, Reza

    2016-01-01

    Recent developments in additive manufacturing techniques have motivated an increasing number of researchers to study regular porous biomaterials that are based on repeating unit cells. The physical and mechanical properties of such porous biomaterials have therefore received increasing attention during recent years. One of the areas that have revived is analytical study of the mechanical behavior of regular porous biomaterials with the aim of deriving analytical relationships that could predict the relative density and mechanical properties of porous biomaterials, given the design and dimensions of their repeating unit cells. In this article, we review the analytical relationships that have been presented in the literature for predicting the relative density, elastic modulus, Poisson's ratio, yield stress, and buckling limit of regular porous structures based on various types of unit cells. The reviewed analytical relationships are used to compare the mechanical properties of porous biomaterials based on different types of unit cells. The major areas where the analytical relationships have improved during the recent years are discussed and suggestions are made for future research directions. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 104A: 3164–3174, 2016. PMID:27502358

  8. Correcting for Indirect Range Restriction in Meta-Analysis: Testing a New Meta-Analytic Procedure

    ERIC Educational Resources Information Center

    Le, Huy; Schmidt, Frank L.

    2006-01-01

    Using computer simulation, the authors assessed the accuracy of J. E. Hunter, F. L. Schmidt, and H. Le's (2006) procedure for correcting for indirect range restriction, the most common type of range restriction, in comparison with the conventional practice of applying the Thorndike Case II correction for direct range restriction. Hunter et…

  9. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    DOE PAGES

    Izacard, Olivier

    2016-08-02

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. In conclusion, the latter demystifies the Maxwell's demon by statistically describing non-isolated systems.

  10. Fall Velocities of Hydrometeors in the Atmosphere: Refinements to a Continuous Analytical Power Law.

    NASA Astrophysics Data System (ADS)

    Khvorostyanov, Vitaly I.; Curry, Judith A.

    2005-12-01

    This paper extends the previous research of the authors on the unified representation of fall velocities for both liquid and crystalline particles as a power law over the entire size range of hydrometeors observed in the atmosphere. The power-law coefficients are determined as continuous analytical functions of the Best or Reynolds number or of the particle size. Here, analytical expressions are formulated for the turbulent corrections to the Reynolds number and to the power-law coefficients that describe the continuous transition from the laminar to the turbulent flow around a falling particle. A simple analytical expression is found for the correction of fall velocities for temperature and pressure. These expressions and the resulting fall velocities are compared with observations and other calculations for a range of ice crystal habits and sizes. This approach provides a continuous analytical power-law description of the terminal velocities of liquid and crystalline hydrometeors with sufficiently high accuracy and can be directly used in bin-resolving models or incorporated into parameterizations for cloud- and large-scale models and remote sensing techniques.

  11. Characterization of DBD Plasma Actuators Performance without External Flow. Part I; Thrust-Voltage Quadratic Relationship in Logarithmic Space for Sinusoidal Excitation

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2016-01-01

    We present results of thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuators. We have used a test setup, measurement, and data processing methodology that we developed in prior work. The tests were conducted with High Density Polyethylene (HDPE) actuators of three thicknesses. The applied voltage driving the actuators was a pure sinusoidal waveform. The test setup used suspended actuators with a partial liquid interface. The tests were conducted at low ambient humidity. The thrust was measured with an analytical balance and the results were corrected for anti-thrust to isolate the plasma-generated thrust. Applying this approach resulted in smooth and repeatable data. It also enabled curve fitting that yielded quadratic relations between the plasma thrust and voltage in log-log space at constant frequencies. The results contrast with power-law relationships developed in the literature, which appear to be a rough approximation over a limited voltage range.
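
    The quadratic thrust-voltage relation in log-log space can be fitted with a few lines of code, as sketched below with made-up thrust and voltage values for illustration.

```python
import numpy as np

# Illustrative thrust (mN/m) vs. applied voltage (kV) at a fixed frequency;
# numbers are made up for demonstration only.
voltage = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
thrust  = np.array([0.21, 0.55, 1.1, 1.9, 3.0, 4.4])

logV, logT = np.log10(voltage), np.log10(thrust)

# Quadratic relation in log-log space: log T = c0 + c1*logV + c2*(logV)^2
c2, c1, c0 = np.polyfit(logV, logT, 2)

# For comparison, the simple power law log T = a + b*logV often used in the
# literature:
b, a = np.polyfit(logV, logT, 1)

print("quadratic coefficients:", c0, c1, c2)
print("power-law exponent    :", b)
```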

  12. Association of tooth brushing behavior with oral hygiene index among students using fixed appliance

    NASA Astrophysics Data System (ADS)

    Ria, N.; Eyanoer, P.

    2018-03-01

    Use of fixed appliances has become popular recently. Their purpose is to correct malpositioned teeth in order to normalize masticatory function and to eliminate the accumulation of food debris between the teeth, which helps prevent the formation of caries and periodontal tissue disease. Fixed appliance patients must routinely maintain their oral hygiene. This study was an analytical survey with a cross-sectional design to determine the relationship between the tooth brushing behavior of students using fixed appliances and oral hygiene at Poltekkes Kemenkes Medan. The average Oral Hygiene Index – Simplified (OHI-S) value of students using fixed appliances (2.68) was still above the national target of ≤2, and there was a relationship between the tooth brushing behavior of students using fixed appliances and oral hygiene (p<0.02). In conclusion, to achieve good oral hygiene and to prevent caries formation and periodontal disease, patients using fixed appliances should maintain their dental health.

  13. Comparison of analytical and numerical approaches for CT-based aberration correction in transcranial passive acoustic imaging

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; Hynynen, Kullervo

    2016-01-01

    Computed tomography (CT)-based aberration corrections are employed in transcranial ultrasound both for therapy and imaging. In this study, analytical and numerical approaches for calculating aberration corrections based on CT data were compared, with a particular focus on their application to transcranial passive imaging. Two models were investigated: a three-dimensional full-wave numerical model (Connor and Hynynen 2004 IEEE Trans. Biomed. Eng. 51 1693-706) based on the Westervelt equation, and an analytical method (Clement and Hynynen 2002 Ultrasound Med. Biol. 28 617-24) similar to that currently employed by commercial brain therapy systems. Trans-skull time delay corrections calculated from each model were applied to data acquired by a sparse hemispherical (30 cm diameter) receiver array (128 piezoceramic discs: 2.5 mm diameter, 612 kHz center frequency) passively listening through ex vivo human skullcaps (n  =  4) to emissions from a narrow-band, fixed source emitter (1 mm diameter, 516 kHz center frequency). Measurements were taken at various locations within the cranial cavity by moving the source around the field using a three-axis positioning system. Images generated through passive beamforming using CT-based skull corrections were compared with those obtained through an invasive source-based approach, as well as images formed without skull corrections, using the main lobe volume, positional shift, peak sidelobe ratio, and image signal-to-noise ratio as metrics for image quality. For each CT-based model, corrections achieved by allowing for heterogeneous skull acoustical parameters in simulation outperformed the corresponding case where homogeneous parameters were assumed. Of the CT-based methods investigated, the full-wave model provided the best imaging results at the cost of computational complexity. These results highlight the importance of accurately modeling trans-skull propagation when calculating CT-based aberration corrections. Although presented in an imaging context, our results may also be applicable to the problem of transmit focusing through the skull.

  14. Consistency of FMEA used in the validation of analytical procedures.

    PubMed

    Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M

    2011-02-20

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define their own ranking scales for the probability of severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and we identified the failure modes above the 90th percentile of RPN values as failure modes needing urgent corrective action; failure modes falling between the 75th and 90th percentile of RPN values were identified as failure modes needing necessary corrective action, respectively. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action respectively, with two being commonly identified. Of the failure modes needing necessary corrective actions, about a third were commonly identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that FMEA is always carried out under the supervision of an experienced FMEA-facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.
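
    The RPN ranking and percentile thresholds described above can be sketched as follows; the failure modes and scores are hypothetical examples, not those identified by the two teams.

```python
import numpy as np

# Illustrative severity (S), occurrence (O), and detection (D) scores for a
# handful of hypothetical failure modes (scales defined by the FMEA team).
scores = {
    "wrong mobile phase pH":        (8, 4, 3),
    "column degradation":           (6, 5, 4),
    "MS tune out of specification": (7, 3, 2),
    "sample carry-over":            (5, 6, 5),
    "integration error":            (4, 4, 6),
}

rpn = {mode: s * o * d for mode, (s, o, d) in scores.items()}
values = np.array(list(rpn.values()))

urgent_cut    = np.percentile(values, 90)   # above 90th percentile
necessary_cut = np.percentile(values, 75)   # between 75th and 90th percentile

for mode, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
    if value >= urgent_cut:
        action = "urgent corrective action"
    elif value >= necessary_cut:
        action = "necessary corrective action"
    else:
        action = "no action required"
    print(f"{mode:30s} RPN={value:3d} -> {action}")
```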

  15. Intensity correction for multichannel hyperpolarized 13C imaging of the heart.

    PubMed

    Dominguez-Viqueira, William; Geraghty, Benjamin J; Lau, Justin Y C; Robb, Fraser J; Chen, Albert P; Cunningham, Charles H

    2016-02-01

    To develop and test an analytic correction method to correct the signal intensity variation caused by the inhomogeneous reception profile of an eight-channel phased array for hyperpolarized 13C imaging. Fiducial markers visible in anatomical images were attached to the individual coils to provide three-dimensional localization of the receive hardware with respect to the image frame of reference. The coil locations and dimensions were used to numerically model the reception profile using the Biot-Savart Law. The accuracy of the coil sensitivity estimation was validated with images derived from a homogeneous 13C phantom. Numerical coil sensitivity estimates were used to perform intensity correction of in vivo hyperpolarized 13C cardiac images in pigs. In comparison to the conventional sum-of-squares reconstruction, improved signal uniformity was observed in the corrected images. The analytical intensity correction scheme was shown to improve the uniformity of multichannel image reconstruction in hyperpolarized [1-13C]pyruvate and 13C-bicarbonate cardiac MRI. The method is independent of the pulse sequence used for 13C data acquisition, simple to implement and does not require additional scan time, making it an attractive technique for multichannel hyperpolarized 13C MRI. © 2015 Wiley Periodicals, Inc.

  16. N-jettiness subtractions for gg → H at subleading power

    NASA Astrophysics Data System (ADS)

    Moult, Ian; Rothen, Lorena; Stewart, Iain W.; Tackmann, Frank J.; Zhu, Hua Xing

    2018-01-01

    N-jettiness subtractions provide a general approach for performing fully-differential next-to-next-to-leading order (NNLO) calculations. Since they are based on the physical resolution variable N-jettiness, T_N, subleading power corrections in τ = T_N/Q, with Q a hard interaction scale, can also be systematically computed. We study the structure of power corrections for 0-jettiness, T_0, for the gg → H process. Using the soft-collinear effective theory we analytically compute the leading power corrections α_s τ ln τ and α_s^2 τ ln^3 τ (finding partial agreement with a previous result in the literature), and perform a detailed numerical study of the power corrections in the gg, gq, and qq̄ channels. This includes a numerical extraction of the α_s τ and α_s^2 τ ln^2 τ corrections, and a study of the dependence on the T_0 definition. Including such power-suppressed logarithms significantly reduces the size of missing power corrections, and hence improves the numerical efficiency of the subtraction method. Having a more detailed understanding of the power corrections for both qq̄ and gg initiated processes also provides insight into their universality, and hence their behavior in more complicated processes where they have not yet been analytically calculated.

  17. Revisiting the Relationship between Individual Differences in Analytic Thinking and Religious Belief: Evidence That Measurement Order Moderates Their Inverse Correlation.

    PubMed

    Finley, Anna J; Tang, David; Schmeichel, Brandon J

    2015-01-01

    Prior research has found that persons who favor more analytic modes of thought are less religious. We propose that individual differences in analytic thought are associated with reduced religious beliefs particularly when analytic thought is measured (hence, primed) first. The current study provides a direct replication of prior evidence that individual differences in analytic thinking are negatively related to religious beliefs when analytic thought is measured before religious beliefs. When religious belief is measured before analytic thinking, however, the negative relationship is reduced to non-significance, suggesting that the link between analytic thought and religious belief is more tenuous than previously reported. The current study suggests that whereas inducing analytic processing may reduce religious belief, more analytic thinkers are not necessarily less religious. The potential for measurement order to inflate the inverse correlation between analytic thinking and religious beliefs deserves additional consideration.

  18. Revisiting the Relationship between Individual Differences in Analytic Thinking and Religious Belief: Evidence That Measurement Order Moderates Their Inverse Correlation

    PubMed Central

    Finley, Anna J.; Tang, David; Schmeichel, Brandon J.

    2015-01-01

    Prior research has found that persons who favor more analytic modes of thought are less religious. We propose that individual differences in analytic thought are associated with reduced religious beliefs particularly when analytic thought is measured (hence, primed) first. The current study provides a direct replication of prior evidence that individual differences in analytic thinking are negatively related to religious beliefs when analytic thought is measured before religious beliefs. When religious belief is measured before analytic thinking, however, the negative relationship is reduced to non-significance, suggesting that the link between analytic thought and religious belief is more tenuous than previously reported. The current study suggests that whereas inducing analytic processing may reduce religious belief, more analytic thinkers are not necessarily less religious. The potential for measurement order to inflate the inverse correlation between analytic thinking and religious beliefs deserves additional consideration. PMID:26402334

  19. Analytical relationships for prediction of the mechanical properties of additively manufactured porous biomaterials.

    PubMed

    Zadpoor, Amir Abbas; Hedayati, Reza

    2016-12-01

    Recent developments in additive manufacturing techniques have motivated an increasing number of researchers to study regular porous biomaterials that are based on repeating unit cells. The physical and mechanical properties of such porous biomaterials have therefore received increasing attention during recent years. One of the areas that have revived is analytical study of the mechanical behavior of regular porous biomaterials with the aim of deriving analytical relationships that could predict the relative density and mechanical properties of porous biomaterials, given the design and dimensions of their repeating unit cells. In this article, we review the analytical relationships that have been presented in the literature for predicting the relative density, elastic modulus, Poisson's ratio, yield stress, and buckling limit of regular porous structures based on various types of unit cells. The reviewed analytical relationships are used to compare the mechanical properties of porous biomaterials based on different types of unit cells. The major areas where the analytical relationships have improved during the recent years are discussed and suggestions are made for future research directions. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 104A: 3164-3174, 2016. © 2016 The Authors Journal of Biomedical Materials Research Part A Published by Wiley Periodicals, Inc.

  20. Orthopositronium Lifetime: Analytic Results in O(α) and O(α³ ln α)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kniehl, Bernd A.; Kotikov, Anatoly V.; Veretin, Oleg L.

    2008-11-07

    We present the O(α) and O(α³ ln α) corrections to the total decay width of orthopositronium in closed analytic form, in terms of basic irrational numbers, which can be evaluated numerically to arbitrary precision.

  1. Elucidation of molecular kinetic schemes from macroscopic traces using system identification

    PubMed Central

    González-Maeso, Javier; Sealfon, Stuart C.; Galocha-Iragüen, Belén; Brezina, Vladimir

    2017-01-01

    Overall cellular responses to biologically-relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. Our results indicate that our methodology can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems. PMID:28192423

  2. Analytic Solution to the Problem of Aircraft Electric Field Mill Calibration

    NASA Technical Reports Server (NTRS)

    Koshak, William

    2003-01-01

    It is by no means a simple task to retrieve storm electric fields from an aircraft instrumented with electric field mill sensors. The presence of the aircraft distorts the ambient field in a complicated way. Before retrievals of the storm field can be made, the field mill measurement system must be "calibrated". In other words, a relationship between impressed (i.e., ambient) electric field and mill output must be established. If this relationship can be determined, it is mathematically inverted so that ambient field can be inferred from the mill outputs. Previous studies have primarily focused on linear theories where the relationship between ambient field and mill output is described by a "calibration matrix" M. Each element of the matrix describes how a particular component of the ambient field is enhanced by the aircraft. For example, the product M_ix E_x is the contribution of the E_x field to the ith mill output. Similarly, net aircraft charge (described by a "charge field component" E_q) contributes an amount M_iq E_q to the output of the ith sensor. The central difficulty in obtaining M stems from the fact that the impressed field (E_x, E_y, E_z, E_q) is not known but is instead estimated. Typically, the aircraft is flown through a series of roll and pitch maneuvers in fair weather, and the values of the fair weather field and aircraft charge are estimated at each point along the aircraft trajectory. These initial estimates are often highly inadequate, but several investigators have improved the estimates by implementing various (ad hoc) iterative methods. Unfortunately, none of the iterative methods guarantee absolute convergence to correct values (i.e., absolute convergence to correct values has not been rigorously proven). In this work, the mathematical problem is solved directly by analytic means. For m mills installed on an arbitrary aircraft, it is shown that it is possible to solve for a single 2m-vector that provides all other needed variables (i.e., the unknown fair weather field, the unknown aircraft charge, and the unknown matrix M). Numerical tests of the solution, effects of measurement errors, and studies of solution non-uniqueness are ongoing as of this writing.
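
    Once a calibration matrix M is available, the inversion step described above reduces to a least-squares solve of the overdetermined system o = M e, as sketched below; the matrix and mill outputs are placeholders, not a real aircraft calibration.

```python
import numpy as np

# Placeholder calibration matrix M (m mills x 4 columns for Ex, Ey, Ez, Eq):
# each mill output is modeled as o_i = M_ix*Ex + M_iy*Ey + M_iz*Ez + M_iq*Eq.
M = np.array([
    [ 1.8,  0.2, 0.9, 0.7],
    [-1.6,  0.3, 1.0, 0.7],
    [ 0.1,  1.9, 0.8, 0.6],
    [ 0.2, -1.7, 0.9, 0.6],
    [ 0.0,  0.1, 2.2, 0.8],
])

mill_outputs = np.array([3.1, -1.9, 4.0, -2.6, 5.5])  # arbitrary units

# Least-squares inversion recovers the ambient field plus charge component.
e_hat, residuals, rank, _ = np.linalg.lstsq(M, mill_outputs, rcond=None)
Ex, Ey, Ez, Eq = e_hat
print("retrieved field components:", Ex, Ey, Ez, "charge component:", Eq)
```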

  3. 42 CFR 493.1250 - Condition: Analytic systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 42, Public Health, Vol. 5 (2010-10-01), Section 493.1250, Condition: Analytic systems. Centers for Medicare & Medicaid Services, Department of Health and Human Services. ... correct identified problems as specified in § 493.1289 for each specialty and subspecialty of testing...

  4. Compensating for Effects of Humidity on Electronic Noses

    NASA Technical Reports Server (NTRS)

    Homer, Margie; Ryan, Margaret A.; Manatt, Kenneth; Zhou, Hanying; Manfreda, Allison

    2004-01-01

    A method of compensating for the effects of humidity on the readouts of electronic noses has been devised and tested. The method is especially appropriate for use in environments in which humidity is not or cannot be controlled for example, in the vicinity of a chemical spill, which can be accompanied by large local changes in humidity. Heretofore, it has been common practice to treat water vapor as merely another analyte, the concentration of which is determined, along with that of the other analytes, in a computational process based on deconvolution. This practice works well, but leaves room for improvement: changes in humidity can give rise to large changes in electronic-nose responses. If corrections for humidity are not made, the large humidity-induced responses may swamp smaller responses associated with low concentrations of analytes. The present method offers an improvement. The underlying concept is simple: One augments an electronic nose with a separate humidity and a separate temperature sensor. The outputs of the humidity and temperature sensors are used to generate values that are subtracted from the readings of the other sensors in an electronic nose to correct for the temperature-dependent contributions of humidity to those readings. Hence, in principle, what remains after corrections are the contributions of the analytes only. Laboratory experiments on a first-generation electronic nose have shown that this method is effective and improves the success rate of identification of analyte/ water mixtures. Work on a second-generation device was in progress at the time of reporting the information for this article.
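
    A minimal sketch of the compensation idea, assuming a linear humidity/temperature baseline per sensor fitted from analyte-free calibration data, is given below; the arrays and coefficients are placeholders, not the first-generation electronic-nose data.

```python
import numpy as np

# Calibration phase: record sensor responses while varying humidity and
# temperature with no analyte present, then fit a linear model per sensor.
humidity    = np.array([10, 20, 30, 40, 50], dtype=float)   # %RH
temperature = np.array([20, 25, 22, 28, 24], dtype=float)   # deg C
baseline    = np.array([                                    # one row per sensor
    [0.02, 0.06, 0.07, 0.13, 0.14],
    [0.01, 0.04, 0.05, 0.09, 0.10],
])

X = np.column_stack([np.ones_like(humidity), humidity, temperature])
coeffs = np.linalg.lstsq(X, baseline.T, rcond=None)[0]   # shape (3, n_sensors)

def compensate(readings, rh, temp):
    """Subtract the humidity/temperature-predicted contribution from each
    sensor reading so that only analyte-driven responses remain."""
    predicted = np.array([1.0, rh, temp]) @ coeffs
    return readings - predicted

print(compensate(np.array([0.30, 0.22]), rh=35.0, temp=25.0))
```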

  5. Teaching dermatoscopy of pigmented skin tumours to novices: comparison of analytic vs. heuristic approach.

    PubMed

    Tschandl, P; Kittler, H; Schmid, K; Zalaudek, I; Argenziano, G

    2015-06-01

    There are two strategies to approach the dermatoscopic diagnosis of pigmented skin tumours, namely the verbal-based analytic and the more visual-global heuristic method. It is not known if one or the other is more efficient in teaching dermatoscopy. To compare two teaching methods in short-term training of dermatoscopy to medical students. Fifty-seven medical students in the last year of the curriculum were given a 1-h lecture based on either heuristic or analytic teaching of dermatoscopy. Before and after this session, they were shown the same 50 lesions and asked to diagnose them and rate the chance of malignancy. Test lesions consisted of melanomas, basal cell carcinomas, nevi, seborrhoeic keratoses, benign vascular tumours and dermatofibromas. Performance measures were diagnostic accuracy regarding malignancy, as measured by the area under the receiver operating characteristic curve (range: 0-1), as well as per cent correct diagnoses (range: 0-100%). Diagnostic accuracy as well as per cent correct diagnoses increased by +0.21 and +32.9% (heuristic teaching) and +0.19 and +35.7% (analytic teaching) respectively (P for all <0.001). Neither for diagnostic accuracy (P = 0.585), nor for per cent correct diagnoses (P = 0.298) was there a difference between the two groups. Short-term training of dermatoscopy to medical students allows significant improvement in diagnostic abilities. Choosing a heuristic or analytic method does not have an influence on this effect in short training using common pigmented skin lesions. © 2014 European Academy of Dermatology and Venereology.

  6. Does Couple and Relationship Education Work for Individuals in Stepfamilies? A Meta-Analytic Study

    ERIC Educational Resources Information Center

    Lucier-Greer, Mallory; Adler-Baeder, Francesca

    2012-01-01

    Recent meta-analytic efforts have documented how couple and relationship education (CRE) programs promote healthy relationship and family functioning. The current meta-analysis contributes to this body of literature by examining stepfamily couples, an at-risk, subpopulation of participants, and assessing the effectiveness of CRE programs for…

  7. Validation of an isotope dilution, ICP-MS method based on internal mass bias correction for the determination of trace concentrations of Hg in sediment cores.

    PubMed

    Ciceri, E; Recchia, S; Dossi, C; Yang, L; Sturgeon, R E

    2008-01-15

    The development and validation of a method for the determination of mercury in sediments using a sector field inductively coupled plasma mass spectrometer (SF-ICP-MS) for detection is described. The utilization of isotope dilution (ID) calibration is shown to solve analytical problems related to matrix composition. Mass bias is corrected using an internal mass bias correction technique, validated against the traditional standard bracketing method. The overall analytical protocol is validated against NRCC PACS-2 marine sediment CRM. The estimated limit of detection is 12 ng/g. The proposed procedure was applied to the analysis of a real sediment core sampled to a depth of 160 m in Lake Como, where Hg concentrations ranged from 66 to 750 ng/g.
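
    For context, a textbook single isotope-dilution expression (not necessarily the exact formulation used by the authors) can be evaluated as in the sketch below; the abundances, masses, and measured ratio are illustrative only, and molar-mass differences between natural and enriched mercury are ignored.

```python
# Single isotope-dilution calculation: a spike enriched in 201Hg is added to
# the sediment digest and the 202Hg/201Hg ratio of the blend is measured
# after internal mass-bias correction.
def id_concentration(c_spike, m_spike, m_sample,
                     a_sample, b_sample,   # 202Hg, 201Hg abundances in sample
                     a_spike, b_spike,     # 202Hg, 201Hg abundances in spike
                     r_blend):             # mass-bias-corrected 202/201 ratio
    """Return the analyte concentration in the sample (units of c_spike,
    scaled by the spike/sample mass ratio)."""
    return c_spike * (m_spike / m_sample) * \
           (a_spike - r_blend * b_spike) / (r_blend * b_sample - a_sample)

# Illustrative numbers only (natural Hg abundances approximated).
c_hg = id_concentration(c_spike=100.0, m_spike=0.25, m_sample=0.50,
                        a_sample=0.2986, b_sample=0.1318,
                        a_spike=0.02, b_spike=0.96,
                        r_blend=0.40)
print("Hg concentration (ng/g):", c_hg)
```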

  8. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  9. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the vehicle surface can serve as supporting data to reveal the noise generation mechanism, analyze acoustic fatigue, and guide measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then fed to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way, instead of numerical TR, to reconstruct sound source signals in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.

  10. Overcoming Intuition: Metacognitive Difficulty Activates Analytic Reasoning

    ERIC Educational Resources Information Center

    Alter, Adam L.; Oppenheimer, Daniel M.; Epley, Nicholas; Eyre, Rebecca N.

    2007-01-01

    Humans appear to reason using two processing styles: System 1 processes that are quick, intuitive, and effortless, and System 2 processes that are slow, analytical, and deliberate, which occasionally correct the output of System 1. Four experiments suggest that System 2 processes are activated by metacognitive experiences of difficulty or disfluency…

  11. The Use of Meta-Analytic Statistical Significance Testing

    ERIC Educational Resources Information Center

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…

  12. Fast analytical scatter estimation using graphics processing units.

    PubMed

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
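
    The quantitative comparison mentioned here uses a scaled root-mean-square difference; one plausible form of such a metric (the exact scaling used by the authors is not given in this abstract) is sketched below.

        import numpy as np

        # One plausible scaled root-mean-square difference metric; the exact
        # scaling used in the study is not specified in the abstract.
        def scaled_rmsd(estimate, reference):
            estimate, reference = np.asarray(estimate), np.asarray(reference)
            return np.sqrt(np.mean((estimate - reference) ** 2)) / np.mean(reference)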

  13. The Relationship between SW-846, PBMS, and Innovative Analytical Technologies

    EPA Pesticide Factsheets

    This paper explains EPA's position regarding testing methods used within waste programs, documentation of EPA's position, the reasoning behind EPA's position, and the relationship between analytical method regulatory flexibility and the use of on-site...

  14. Choice of 17O Abundance Correction Affects Δ47 and Thus Calibrations for Paleothermometry

    NASA Astrophysics Data System (ADS)

    Kelson, J.; Schauer, A. J.; Huntington, K. W.; Saenger, C.; Lechler, A. R.

    2016-12-01

    The clumped isotope composition of CO2 derived from carbonate (Δ47) varies with temperature, making it a valuable geothermometer with broad applications. However, its accuracy is limited by inter-laboratory discrepancies of carbonate reference materials and disagreement among Δ47-temperature calibrations. Here we use a suite of CO2-H2O equilibrations at known temperatures with a wide range in 13C and 18O compositions to show how the correction for the abundance of 17O impacts Δ47 and potentially explains these discrepancies. When a traditional value of 0.5164 is used for the fractionation between 17O and 18O (λ), corrected Δ47 in 23 °C CO2-H2O equilibrations exhibits a dependence on 13C composition that is equivalent to 20 °C (Δ47 range of 0.06 ‰). In contrast, these discrepancies are effectively removed when λ=0.528, as in global meteoric waters. Furthermore, carbonate standards with identical formation temperatures have significantly different Δ47 when corrected using λ=0.5164, but agree within error when λ=0.528. The choice of λ affects the accuracy of all sample Δ47 values, unless the sample CO2, mass spectrometer reference gas, and equilibrated gases share the same 13C composition. The sensitivity of Δ47 to the choice of λ, and the apparent dependence on 13C when 0.5164 is used, are relevant to the abiogenic experiments used in Δ47-temperature calibrations given that precipitation methods involving CO2 bubbling produce carbonates depleted in 13C by tens of permil relative to methods that mix salts. We evaluate the influence of 17O correction on Δ47-temperature calibrations using a suite of 58 abiogenic carbonates precipitated at 4-85 °C using CO2 bubbling and the mixing of salts. Aliquots of precipitated carbonates were digested at 25 and 90 °C, but all other preparatory and analytical variables were held constant. When corrected using λ=0.5164, various precipitation methods yield sub-parallel Δ47-temperature relationships with slopes of 0.034-0.044 (×10^6/T^2), but offset intercepts. Conversely, Δ47-temperature relationships overlap within error when λ=0.528. This suggests that the method used to correct for 17O abundance likely contributes to observed calibration discrepancies and that adopting λ=0.528 may reduce the uncertainty in Δ47 temperature reconstructions.
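
    For reference, the 17O correction at issue rests on an assumed mass-dependent relation between the oxygen isotope ratios, conventionally parameterized (a standard form, not quoted from this record) as

        ^{17}R = K \, \left(^{18}R\right)^{\lambda}

    so the choice of the exponent λ (0.5164 versus 0.528) changes the inferred 17O contributions to the measured CO2 ion beams and hence the corrected Δ47.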

  15. Learning About Love: A Meta-Analytic Study of Individually-Oriented Relationship Education Programs for Adolescents and Emerging Adults.

    PubMed

    Simpson, David M; Leonhardt, Nathan D; Hawkins, Alan J

    2018-03-01

    Despite recent policy initiatives and substantial federal funding of individually oriented relationship education programs for youth, there have been no meta-analytic reviews of this growing field. This meta-analytic study draws on 17 control-group studies and 13 one-group/pre-post studies to evaluate the effectiveness of relationship education programs on adolescents' and emerging adults' relationship knowledge, attitudes, and skills. Overall, control-group studies produced a medium effect (d = .36); one-group/pre-post studies also produced a medium effect (d = .47). However, the lack of studies with long-term follow-ups of relationship behaviors in the young adult years is a serious weakness in the field, limiting what we can say about the value of these programs for helping youth achieve their aspirations for healthy romantic relationships and stable marriages.

  16. On the Application of Euler Deconvolution to the Analytic Signal

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.; Pasteka, R.

    2005-05-01

    In recent years, papers on Euler deconvolution (ED) have used formulations that account for the unknown background field, allowing the structural index (N) to be considered an unknown to be solved for together with the source coordinates. Among them, Hsu (2002) and Fedi and Florio (2002) independently pointed out that the use of an adequate m-order derivative of the field, instead of the field itself, allows solving for both N and the source position. For the same reason, Keating and Pilkington (2004) proposed the ED of the analytic signal. A function to be analyzed by ED must be homogeneous but also harmonic, because it must be possible to compute its vertical derivative, as is well known from potential field theory. Huang et al. (1995) demonstrated that the analytic signal is a homogeneous function, but, for instance, it is rather obvious that the magnetic field modulus (corresponding to the analytic signal of a gravity field) is not a harmonic function (e.g., Grant & West, 1965). Thus, a straightforward application of ED to the analytic signal is not possible, because a vertical derivative of this function cannot be obtained correctly using standard potential field analysis tools. In this note we check, theoretically and empirically, what kind of errors are caused in ED by this incorrect assumption about the harmonicity of the analytic signal. We discuss results on profile and map synthetic data, and use a simple method to compute the vertical derivative of non-harmonic functions measured on a horizontal plane. Our main conclusions are: 1. To approximate a correct evaluation of the vertical derivative of a non-harmonic function, it is useful to compute it by finite differences, using upward continuation. 2. The errors in a vertical derivative computed as if the analytic signal were harmonic are reflected mainly in the structural index estimate; these errors can mislead an interpretation even though the depth estimates are almost correct. 3. Consistent estimates of depth and S.I. are instead obtained by using a finite-difference vertical derivative of the analytic signal. 4. Analysis of a case history confirms the strong error in the estimation of the structural index if the analytic signal is treated as a harmonic function.
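
    A minimal sketch of the finite-difference vertical derivative advocated in conclusion 1 is given below, using FFT-based upward continuation of data gridded on a horizontal plane; grid spacing, continuation height, and all names are illustrative assumptions.

        import numpy as np

        def vertical_derivative_fd(field, dx, dy, dz):
            """Finite-difference vertical derivative of a (possibly non-harmonic)
            field gridded on a horizontal plane.

            The field is upward continued by dz via multiplication of its 2D
            spectrum by exp(-|k|*dz); with z taken positive downward, the
            derivative is approximated as (field - upward-continued field) / dz.
            """
            ny, nx = field.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
            KX, KY = np.meshgrid(kx, ky)
            k = np.sqrt(KX ** 2 + KY ** 2)
            spectrum = np.fft.fft2(field)
            field_up = np.real(np.fft.ifft2(spectrum * np.exp(-k * dz)))
            return (field - field_up) / dz

        # Illustrative call on a random grid with 100 m spacing, continued up 50 m
        grid = np.random.default_rng(0).normal(size=(128, 128))
        dfdz = vertical_derivative_fd(grid, dx=100.0, dy=100.0, dz=50.0)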

  17. A simple method for estimating frequency response corrections for eddy covariance systems

    Treesearch

    W. J. Massman

    2000-01-01

    A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...

  18. Evidence for Holistic Representations of Ignored Images and Analytic Representations of Attended Images

    ERIC Educational Resources Information Center

    Thoma, Volker; Hummel, John E.; Davidoff, Jules

    2004-01-01

    According to the hybrid theory of object recognition (J. E. Hummel, 2001), ignored object images are represented holistically, and attended images are represented both holistically and analytically. This account correctly predicts patterns of visual priming as a function of translation, scale (B. J. Stankiewicz & J. E. Hummel, 2002), and…

  19. Analytic Studies of the Relationship between Track Geometry Variations and Derailment Potential at Low Speeds

    DOT National Transportation Integrated Search

    1983-09-01

    This report describes analytical studies carried out to define the relationship between track parameters and safety from derailment. Problematic track scenarios are identified reflecting known accident data. Vehicle response is investigated in the 10...

  20. Analytical Studies of the Relationship Between Track Geometry Variations and Derailment Potential at Low Speeds

    DOT National Transportation Integrated Search

    1983-09-01

    This report describes analytical studies carried out to define the relationship between track parameters and safety from derailment. Problematic track scenarios are identified reflecting known accident data. Vehicle response is investigated in the 10...

  1. Analyzing chromatographic data using multilevel modeling.

    PubMed

    Wiczling, Paweł

    2018-06-01

    It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P. CI credible interval, PSA polar surface area.
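
    As a toy illustration of the hierarchical idea (not the model described above), the sketch below draws analyte-level retention parameters from population-level relationships with a physicochemical descriptor and evaluates a simple linear solvent strength relation; every coefficient is an invented placeholder.

        import numpy as np

        # Toy multilevel structure: population-level (QSRR-like) relations generate
        # analyte-level parameters, which in turn give isocratic retention via the
        # linear solvent strength relation log k = log kw - S * phi.
        # All coefficients are invented placeholders, not fitted values.

        rng = np.random.default_rng(1)
        n_analytes = 50
        logP = rng.normal(2.0, 1.0, n_analytes)          # descriptor per analyte

        log_kw = 0.5 + 0.9 * logP + rng.normal(0.0, 0.2, n_analytes)  # inter-analyte scatter
        S      = 3.0 + 0.5 * logP + rng.normal(0.0, 0.3, n_analytes)

        phi = 0.4                         # organic modifier volume fraction
        log_k = log_kw - S * phi          # analyte-level retention factors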

  2. Evaluation of the potential use of hybrid LC-MS/MS for active drug quantification applying the 'free analyte QC concept'.

    PubMed

    Jordan, Gregor; Onami, Ichio; Heinrich, Julia; Staack, Roland F

    2017-11-01

    Assessment of active drug exposure of biologics may be crucial for drug development. Typically, ligand-binding assay methods are used to provide free/active drug concentrations. To what extent hybrid LC-MS/MS procedures enable correct 'active' drug quantification is currently under consideration. Experimental & results: The relevance of appropriate extraction conditions was evaluated by a hybrid target capture immuno-affinity LC-MS/MS method using total and free/active quality controls (QCs). The rapid extraction (10 min) provided correct results, whereas overnight incubation resulted in significant overestimation of the free/active drug (monoclonal antibody) concentration. Conventional total QCs were inappropriate for determining optimal method conditions, in contrast to free/active QCs. The 'free/active analyte QC concept' enables development of appropriate extraction conditions for correct active drug quantification by hybrid LC-MS/MS.

  3. Statistical properties of single-mode fiber coupling of satellite-to-ground laser links partially corrected by adaptive optics.

    PubMed

    Canuet, Lucien; Védrenne, Nicolas; Conan, Jean-Marc; Petit, Cyril; Artaud, Geraldine; Rissons, Angelique; Lacan, Jerome

    2018-01-01

    In the framework of satellite-to-ground laser downlinks, an analytical model describing the variations of the instantaneous coupled flux into a single-mode fiber after correction of the incoming wavefront by partial adaptive optics (AO) is presented. Expressions for the probability density function and the cumulative distribution function, as well as for the average fading duration and fading duration distribution of the corrected coupled flux, are given. These results are of prime interest for the computation of metrics related to coded transmissions over correlated channels, and they are compared with end-to-end wave-optics simulations for a geosynchronous satellite (GEO)-to-ground and a low earth orbit satellite (LEO)-to-ground scenario. Finally, the impact of different AO performances on the aforementioned fading duration distribution is analytically investigated for both scenarios.

  4. TU-F-17A-03: An Analytical Respiratory Perturbation Model for Lung Motion Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Yuan, A; Wei, J

    2014-06-15

    Purpose: Breathing irregularity is common, causing unreliable prediction in tumor motion for correlation-based surrogates. Both tidal volume (TV) and breathing pattern (BP=ΔVthorax/TV, where TV=ΔVthorax+ΔVabdomen) affect lung motion in anterior-posterior and superior-inferior directions. We developed a novel respiratory motion perturbation (RMP) model in analytical form to account for changes in TV and BP in motion prediction from simulation to treatment. Methods: The RMP model is an analytical function of patient-specific anatomic and physiologic parameters. It contains a base-motion trajectory d(x,y,z) derived from a 4-dimensional computed tomography (4DCT) at simulation and a perturbation term Δd(ΔTV,ΔBP) accounting for deviation at treatment from simulation. The perturbation is dependent on tumor-specific location and patient-specific anatomy. Eleven patients with simulation and treatment 4DCT images were used to assess the RMP method in motion prediction from 4DCT1 to 4DCT2, and vice versa. For each patient, ten motion trajectories of corresponding points in the lower lobes were measured in both 4DCTs: one served as the base-motion trajectory and the other as the ground truth for comparison. In total, 220 motion trajectory predictions were assessed. The motion discrepancy between two 4DCTs for each patient served as a control. An established 5D motion model was used for comparison. Results: The average absolute error of the RMP model prediction in the superior-inferior direction is 1.6±1.8 mm, similar to 1.7±1.6 mm from the 5D model (p=0.98). Some uncertainty is associated with limited spatial resolution (2.5 mm slice thickness) and temporal resolution (10 phases). Non-corrected motion discrepancy between two 4DCTs is 2.6±2.7 mm, with a maximum of ±20 mm, and correction is necessary (p=0.01). Conclusion: The analytical motion model predicts lung motion with accuracy similar to the 5D model. The analytical model is based on physical relationships, requires no training, and therefore is potentially more resilient to breathing irregularities. On-going investigation introduces airflow into the RMP model for improvement. This research is in part supported by NIH (U54CA137788/132378). AY would like to thank the MSKCC summer medical student research program supported by the National Cancer Institute and hosted by the Department of Medical Physics at MSKCC.

  5. Analytical treatment of self-phase-modulation beyond the slowly varying envelope approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syrchin, M.S.; Zheltikov, A.M.; International Laser Center, M.V. Lomonosov Moscow State University, 119899 Moscow

    Analytical treatment of the self-phase-modulation of an ultrashort light pulse is extended beyond the slowly varying envelope approximation. The resulting wave equation is modified to include corrections to self-phase-modulation due to higher-order spatial and temporal derivatives. Analytical solutions are found in the limiting regimes of high nonlinearities and very short pulses. Our results reveal features that can significantly impact both pulse shape and the evolution of the phase.

  6. Second-order electron self-energy loop-after-loop correction for low- Z hydrogen-like ions

    NASA Astrophysics Data System (ADS)

    Goidenko, Igor; Labzowsky, Leonti; Plunien, Günter; Soff, Gerhard

    2005-07-01

    The second-order electron self-energy loop-after-loop correction is investigated for hydrogen-like ions in the region of low nuclear charge numbers Z. Both irreducible and reducible parts of this correction are evaluated for the 1s1/2-state within the Fried-Yennie gauge. We confirm the result obtained first by Mallampalli and Sapirstein. The reducible part of this correction is evaluated numerically for the first time and it is consistent with the corresponding analytical αZ-expansion.

  7. Analytic Scattering and Refraction Models for Exoplanet Transit Spectra

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Fortney, Jonathan J.; Hubbard, William B.

    2017-12-01

    Observations of exoplanet transit spectra are essential to understanding the physics and chemistry of distant worlds. The effects of opacity sources and many physical processes combine to set the shape of a transit spectrum. Two such key processes—refraction and cloud and/or haze forward-scattering—have seen substantial recent study. However, models of these processes are typically complex, which prevents their incorporation into observational analyses and standard transit spectrum tools. In this work, we develop analytic expressions that allow for the efficient parameterization of forward-scattering and refraction effects in transit spectra. We derive an effective slant optical depth that includes a correction for forward-scattered light, and present an analytic form of this correction. We validate our correction against a full-physics transit spectrum model that includes scattering, and we explore the extent to which the omission of forward-scattering effects may bias models. Also, we verify a common analytic expression for the location of a refractive boundary, which we express in terms of the maximum pressure probed in a transit spectrum. This expression is designed to be easily incorporated into existing tools, and we discuss how the detection of a refractive boundary could help indicate the background atmospheric composition by constraining the bulk refractivity of the atmosphere. Finally, we show that opacity from Rayleigh scattering and collision-induced absorption will outweigh the effects of refraction for Jupiter-like atmospheres whose equilibrium temperatures are above 400-500 K.

  8. Visual analytics for aviation safety: A collaborative approach to sensemaking

    NASA Astrophysics Data System (ADS)

    Wade, Andrew

    Visual analytics, the "science of analytical reasoning facilitated by interactive visual interfaces", is more than just visualization. Understanding the human reasoning process is essential for designing effective visualization tools and providing correct analyses. This thesis describes the evolution, application and evaluation of a new method for studying analytical reasoning that we have labeled paired analysis. Paired analysis combines subject matter experts (SMEs) and tool experts (TE) in an analytic dyad, here used to investigate aircraft maintenance and safety data. The method was developed and evaluated using interviews, pilot studies and analytic sessions during an internship at the Boeing Company. By enabling a collaborative approach to sensemaking that can be captured by researchers, paired analysis yielded rich data on human analytical reasoning that can be used to support analytic tool development and analyst training. Keywords: visual analytics, paired analysis, sensemaking, Boeing, collaborative analysis.

  9. Relationship between Counseling Students' Childhood Memories and Current Negative Self-Evaluations When Receiving Corrective Feedback

    ERIC Educational Resources Information Center

    Stroud, Daniel; Olguin, David; Marley, Scott

    2016-01-01

    This article entails a study focused on the relationship between counseling students' negative childhood memories of receiving corrective feedback and current negative self-evaluations when receiving similar feedback in counselor education programs. Participants (N = 186) completed the Corrective Feedback Instrument-Revised (CFI-R; Hulse-Killacky…

  10. Using water raman intensity to determine the effective excitation and emission path lengths of fluorophotometers for correcting fluorescence inner filter effect

    USDA-ARS?s Scientific Manuscript database

    Fluorescence and Raman inner filter effects (IFE) cause spectral distortion and nonlinearity between spectral signal intensity with increasing analyte concentration. Convenient and effective correction of fluorescence IFE has been an active research goal for decades. Presented herein is the finding ...

  11. 77 FR 45589 - Initiation of Five-Year (“Sunset”) Review and Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-01

    ...) and 70 FR 62061 (October 28, 2005). Guidance on methodological or analytical issues relevant to the... Modification, 77 FR 8101 (February 14, 2012). Correction of Case Number From Previous Sunset Review Initiation... Department case number for the antidumping duty order on steel concrete reinforcing bars from Latvia. The...

  12. Improved multidimensional semiclassical tunneling theory.

    PubMed

    Wagner, Albert F

    2013-12-12

    We show that the analytic multidimensional semiclassical tunneling formula of Miller et al. [Miller, W. H.; Hernandez, R.; Handy, N. C.; Jayatilaka, D.; Willets, A. Chem. Phys. Lett. 1990, 172, 62] is qualitatively incorrect for deep tunneling at energies well below the top of the barrier. The origin of this deficiency is that the formula uses an effective barrier weakly related to the true energetics but correctly adjusted to reproduce the harmonic description and anharmonic corrections of the reaction path at the saddle point as determined by second order vibrational perturbation theory. We present an analytic improved semiclassical formula that correctly includes energetic information and allows a qualitatively correct representation of deep tunneling. This is done by constructing a three segment composite Eckart potential that is continuous everywhere in both value and derivative. This composite potential has an analytic barrier penetration integral from which the semiclassical action can be derived and then used to define the semiclassical tunneling probability. The middle segment of the composite potential by itself is superior to the original formula of Miller et al. because it incorporates the asymmetry of the reaction barrier produced by the known reaction exoergicity. Comparison of the semiclassical and exact quantum tunneling probability for the pure Eckart potential suggests a simple threshold multiplicative factor to the improved formula to account for quantum effects very near threshold not represented by semiclassical theory. The deep tunneling limitations of the original formula are echoed in semiclassical high-energy descriptions of bound vibrational states perpendicular to the reaction path at the saddle point. However, typically ab initio energetic information is not available to correct it. The Supporting Information contains a Fortran code, test input, and test output that implements the improved semiclassical tunneling formula.

  13. GEOS satellite tracking corrections for refraction in the ionosphere

    NASA Technical Reports Server (NTRS)

    Berbert, J. H.; Parker, H. C.

    1970-01-01

    The analytic formulations at different elevation angles and at a frequency of 2-GHz for the ionospheric refraction corrections used on the GEOS satellite tracking data are compared. The formulas and ray trace results for elevations greater than 10 deg, where most satellite tracking is done, differ in elevation, range, and range rate by less than 0.4 millidegrees (1.4 arc-seconds), 12 meters, and 12 cm/sec, respectively. In comparison to most operational requirements, this is insignificant. However, for the GEOS Observation Systems Intercomparison Investigation, these differences are equivalent in size to observed differences in system biases for some of the best electronic geodetic tracking systems and are probably contributing to the observed biases. The ray trace results and most of the more detailed analytic correction formulas show that the ionospheric refraction correction for range rate on an overhead pass is a maximum for elevation angles between 15 deg and 30 deg and falls off rapidly for both higher and lower elevation angles, contrary to the effect of the troposphere and to some reports in the literature.

  14. Uncertainty evaluation of mass values determined by electronic balances in analytical chemistry: a new method to correct for air buoyancy.

    PubMed

    Wunderli, S; Fortunato, G; Reichmuth, A; Richard, Ph

    2003-06-01

    A new method to correct for the largest systematic influence in mass determination, air buoyancy, is outlined. A full description of the most relevant influence parameters is given and the combined measurement uncertainty is evaluated according to the ISO-GUM approach [1]. A new correction method for air buoyancy using an artefact is presented. This method has the advantage that only a mass artefact is used to correct for air buoyancy. The classical approach demands the determination of the air density and therefore suitable equipment to measure at least the air temperature, the air pressure and the relative air humidity within the demanded uncertainties (i.e. three independent measurement tasks have to be performed simultaneously). The calculated uncertainty is lower for the classical method; however, a field laboratory may not always be in possession of fully traceable measurement systems for these room climatic parameters. A comparison of three approaches to the calculation of the combined uncertainty of mass values is presented: the classical determination of air buoyancy, the artefact method, and neglecting this systematic effect, as proposed in the new EURACHEM/CITAC guide [2]. The artefact method is suitable for high-precision measurement in analytical chemistry and especially for the production of certified reference materials, reference values and analytical chemical reference materials. The method could also be used either for volume determination of solids or for air density measurement by an independent method.
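
    For orientation, a minimal sketch of the classical air buoyancy correction mentioned above is given below; it is the textbook correction, not the artefact-based method proposed in the paper, and the example values are placeholders.

        # Classical air buoyancy correction (the "classical method" referred to above),
        # shown as a sketch. rho_air would itself be computed from temperature,
        # pressure and humidity (e.g. via the CIPM air density formula), which is
        # exactly the part the artefact method avoids.

        def buoyancy_corrected_mass(reading_g, rho_air=1.2, rho_sample=1000.0,
                                    rho_calib_weights=8000.0):
            """True mass (g) from a balance reading taken in air.

            Densities in kg/m^3; 8000 kg/m^3 is the conventional density of
            stainless-steel calibration weights.
            """
            return reading_g * (1.0 - rho_air / rho_calib_weights) / \
                   (1.0 - rho_air / rho_sample)

        # Example: 10.0000 g reading for an aqueous solution (density ~1000 kg/m^3)
        m_true = buoyancy_corrected_mass(10.0000)   # ~10.0105 g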

  15. The Effect of CO2 on the Measurement of 220Rn and 222Rn with Instruments Utilising Electrostatic Precipitation

    DOE PAGES

    Lane-Smith, Derek; Sims, Kenneth

    2013-06-09

    In some volcanic systems, thoron and radon activity and CO2 flux, in soil and fumaroles, show a relationship between (220Rn/222Rn) and CO2 efflux. It is theorized that deep, magmatic sources of gas are characterized by high 222Rn activity and high CO2 efflux, whereas shallow sources are indicated by high 220Rn activity and relatively low CO2 efflux. In this paper we evaluate whether the observed inverse relationship is a true geochemical signal, or potentially an analytical artifact of high CO2 concentrations. We report results from a laboratory experiment using the RAD7 radon detector, known 222Rn (radon) and 220Rn (thoron), and a controllable percentage of CO2 in the carrier gas. Our results show that for every percentage of CO2, the 220Rn reading should be multiplied by 1.019, the 222Rn reading should be multiplied by 1.003, and the 220Rn/222Rn ratio should be multiplied by 1.016 to correct for the presence of the CO2.
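
    Applying the stated multipliers is straightforward; the sketch below assumes the per-percent factors compound with CO2 percentage, which is an interpretation on our part rather than something stated in the abstract.

        # Sketch of applying the reported per-percent-CO2 correction multipliers.
        # Compounding with CO2 percentage is an assumed interpretation; for small
        # CO2 fractions the difference from a linear correction is minor.

        def correct_for_co2(rn220, rn222, co2_percent):
            f220, f222 = 1.019, 1.003          # per 1% CO2, from the study
            rn220_corr = rn220 * f220 ** co2_percent
            rn222_corr = rn222 * f222 ** co2_percent
            return rn220_corr, rn222_corr

        # Example: readings of 500 and 1200 Bq/m^3 at 10% CO2 in the carrier gas
        print(correct_for_co2(500.0, 1200.0, 10.0))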

  16. The Effect of CO2 on the Measurement of 220Rn and 222Rn with Instruments Utilising Electrostatic Precipitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lane-Smith, Derek; Sims, Kenneth

    In some volcanic systems, thoron and radon activity and CO2 flux, in soil and fumaroles, show a relationship between (220Rn/222Rn) and CO2 efflux. It is theorized that deep, magmatic sources of gas are characterized by high 222Rn activity and high CO2 efflux, whereas shallow sources are indicated by high 220Rn activity and relatively low CO2 efflux. In this paper we evaluate whether the observed inverse relationship is a true geochemical signal, or potentially an analytical artifact of high CO2 concentrations. We report results from a laboratory experiment using the RAD7 radon detector, known 222Rn (radon) and 220Rn (thoron), and a controllable percentage of CO2 in the carrier gas. Our results show that for every percentage of CO2, the 220Rn reading should be multiplied by 1.019, the 222Rn reading should be multiplied by 1.003, and the 220Rn/222Rn ratio should be multiplied by 1.016 to correct for the presence of the CO2.

  17. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water to graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, kfl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths predominantly due to the contributions from alpha particles and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by kfl = 0.9964 + 0.0024·zw-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by kfl = 0.9947 + 0.0024·zw-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy protons but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms as well as for soft tissues.
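
    The two depth-dependent fits quoted at the end of the abstract can be written as small helper functions; the depth unit (cm of water-equivalent depth seems most plausible for a 60 MeV beam) is an assumption, since the abstract does not state it.

        # The fluence correction factor fits quoted above, as helper functions of
        # water-equivalent depth z. The unit of z (assumed cm here) is not stated
        # in the abstract.

        def kfl_from_fluence(z_w_eq):
            """Fit from fluence distributions differential in energy (~0.2% rel. std. unc.)."""
            return 0.9964 + 0.0024 * z_w_eq

        def kfl_from_dose_ratio(z_w_eq):
            """Fit from the ratio of calculated doses (~0.3% rel. std. unc.)."""
            return 0.9947 + 0.0024 * z_w_eq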

  18. A behavior-analytic critique of Bandura's self-efficacy theory

    PubMed Central

    Biglan, Anthony

    1987-01-01

    A behavior-analytic critique of self-efficacy theory is presented. Self-efficacy theory asserts that efficacy expectations determine approach behavior and physiological arousal of phobics as well as numerous other clinically important behaviors. Evidence which is purported to support this assertion is reviewed. The evidence consists of correlations between self-efficacy ratings and other behaviors. Such response-response relationships do not unequivocally establish that one response causes another. A behavior-analytic alternative to self-efficacy theory explains these relationships in terms of environmental events. Correlations between self-efficacy rating behavior and other behavior may be due to the contingencies of reinforcement that establish a correspondence between such verbal predictions and the behavior to which they refer. Such a behavior-analytic account does not deny any of the empirical relationships presented in support of self-efficacy theory, but it points to environmental variables that could account for those relationships and that could be manipulated in the interest of developing more effective treatment procedures. PMID:22477956

  19. Observational constraints on loop quantum cosmology.

    PubMed

    Bojowald, Martin; Calcagni, Gianluca; Tsujikawa, Shinji

    2011-11-18

    In the inflationary scenario of loop quantum cosmology in the presence of inverse-volume corrections, we give analytic formulas for the power spectra of scalar and tensor perturbations convenient to compare with observations. Since inverse-volume corrections can provide strong contributions to the running spectral indices, inclusion of terms higher than the second-order runnings in the power spectra is crucially important. Using the recent data of cosmic microwave background and other cosmological experiments, we place bounds on the quantum corrections.

  20. On trying something new: effort and practice in psychoanalytic change.

    PubMed

    Power, D G

    2000-07-01

    This paper describes one of the ingredients of successful psychoanalytic change: the necessity for the analysand to actively attempt altered patterns of thinking, behaving, feeling, and relating outside of the analytic relationship. When successful, such self-initiated attempts at change are founded on insight and experience gained in the transference and constitute a crucial step in the consolidation and transfer of therapeutic gains. The analytic literature related to this aspect of therapeutic action is reviewed, including the work of Freud, Bader, Rangell, Renik, Valenstein, and Wheelis. Recent interest in the complex and complementary relationship between action and increased self-understanding as it unfolds in the analytic setting is extended beyond the consulting room to include the analysand's extra-analytic attempts to initiate change. Contemporary views of the relationship between praxis and self-knowledge are discussed and offered as theoretical support for broadening analytic technique to include greater attention to the analysand's efforts at implementing therapeutic gains. Case vignettes are presented.

  1. A theoretical evaluation of rigid baffles in suppression of combustion instability

    NASA Technical Reports Server (NTRS)

    Baer, M. R.; Mitchell, C. E.

    1976-01-01

    An analytical technique for the prediction of the effects of rigid baffles on the stability of liquid propellant combustors is presented. A three dimensional combustor model characterized by a concentrated combustion source at the chamber injector and a constant Mach number nozzle is used. The linearized partial differential equations describing the unsteady flow field are solved by an eigenfunction matching method. Boundary layer corrections to this unsteady flow are used to evaluate viscous and turbulence effects within the flow. An integral stability relationship is then employed to predict the decay rate of the oscillations. Results show that sufficient dissipation exists to indicate that the proper mechanism of baffle damping is a fluid dynamic loss. The response of the dissipation model to varying baffle blade length, mean flow Mach number and oscillation amplitude is examined.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it is shown that (i) the empirical formula for the secondary electron emission is not consistent with a MDF due to the presence of super-thermal particles, (ii) the super-thermal particles can replace a diffusion parameter in the Langmuir probe current formula, and (iii) the entropy can explicitly decrease in the presence of sources only for the introduced INMDF without violating the second law of thermodynamics. Moreover, the first order entropy of an infinite number of super-thermal tails stays the same as the entropy of a MDF. In conclusion, the latter demystifies Maxwell's demon by statistically describing non-isolated systems.
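
    One of the analytic non-Maxwellian forms named above, the kappa distribution, has a standard isotropic textbook expression; the sketch below implements that generic form (parameter names and values are illustrative, not the paper's), and it reduces to a Maxwellian in the limit of large kappa.

        import numpy as np
        from scipy.special import gamma

        # Standard isotropic kappa distribution (generic textbook form), one of the
        # analytic non-Maxwellian representations mentioned above.

        def kappa_distribution(v, n, theta, kappa):
            """f(v) for speed v, density n, thermal speed theta, index kappa > 3/2."""
            norm = n / (np.pi ** 1.5 * kappa ** 1.5 * theta ** 3) \
                   * gamma(kappa + 1.0) / gamma(kappa - 0.5)
            return norm * (1.0 + v ** 2 / (kappa * theta ** 2)) ** (-(kappa + 1.0))

        v = np.linspace(0.0, 5e6, 200)            # m/s (illustrative grid)
        f = kappa_distribution(v, n=1e19, theta=1e6, kappa=3.0)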

  3. Do Cognitive Styles Affect the Performance of System Development Groups?

    DTIC Science & Technology

    1986-03-21

    that a person is classified as one of 16 possible types: ISTJ, ISFJ, INFJ, INTJ, ISTP, INFP, ISFP, INTP, ESTP, ESFP, ENFP, ENTP, ESTJ, ESFJ, ENFJ, or... development groups and the relationship between these differences and system success or failure. Chapter II will discuss some different theories of cognitive... reasoning termed analytic and heuristic. Analytic individuals reduce problems to a set of underlying relationships. These relationships, frequently

  4. Why Do You Believe in God? Relationships between Religious Belief, Analytic Thinking, Mentalizing and Moral Concern

    PubMed Central

    Jack, Anthony Ian; Friedman, Jared Parker; Boyatzis, Richard Eleftherios; Taylor, Scott Nolan

    2016-01-01

    Prior work has established that analytic thinking is associated with disbelief in God, whereas religious and spiritual beliefs have been positively linked to social and emotional cognition. However, social and emotional cognition can be subdivided into a number of distinct dimensions, and some work suggests that analytic thinking is in tension with some aspects of social-emotional cognition. This leaves open two questions. First, is belief linked to social and emotional cognition in general, or a specific dimension in particular? Second, does the negative relationship between belief and analytic thinking still hold after relationships with social and emotional cognition are taken into account? We report eight hypothesis-driven studies which examine these questions. These studies are guided by a theoretical model which focuses on the distinct social and emotional processing deficits associated with autism spectrum disorders (mentalizing) and psychopathy (moral concern). To our knowledge no other study has investigated both of these dimensions of social and emotion cognition alongside analytic thinking. We find that religious belief is robustly positively associated with moral concern (4 measures), and that at least part of the negative association between belief and analytic thinking (2 measures) can be explained by a negative correlation between moral concern and analytic thinking. Using nine different measures of mentalizing, we found no evidence of a relationship between mentalizing and religious or spiritual belief. These findings challenge the theoretical view that religious and spiritual beliefs are linked to the perception of agency, and suggest that gender differences in religious belief can be explained by differences in moral concern. These findings are consistent with the opposing domains hypothesis, according to which brain areas associated with moral concern and analytic thinking are in tension. PMID:27008093

  5. Why Do You Believe in God? Relationships between Religious Belief, Analytic Thinking, Mentalizing and Moral Concern.

    PubMed

    Jack, Anthony Ian; Friedman, Jared Parker; Boyatzis, Richard Eleftherios; Taylor, Scott Nolan

    2016-01-01

    Prior work has established that analytic thinking is associated with disbelief in God, whereas religious and spiritual beliefs have been positively linked to social and emotional cognition. However, social and emotional cognition can be subdivided into a number of distinct dimensions, and some work suggests that analytic thinking is in tension with some aspects of social-emotional cognition. This leaves open two questions. First, is belief linked to social and emotional cognition in general, or a specific dimension in particular? Second, does the negative relationship between belief and analytic thinking still hold after relationships with social and emotional cognition are taken into account? We report eight hypothesis-driven studies which examine these questions. These studies are guided by a theoretical model which focuses on the distinct social and emotional processing deficits associated with autism spectrum disorders (mentalizing) and psychopathy (moral concern). To our knowledge no other study has investigated both of these dimensions of social and emotion cognition alongside analytic thinking. We find that religious belief is robustly positively associated with moral concern (4 measures), and that at least part of the negative association between belief and analytic thinking (2 measures) can be explained by a negative correlation between moral concern and analytic thinking. Using nine different measures of mentalizing, we found no evidence of a relationship between mentalizing and religious or spiritual belief. These findings challenge the theoretical view that religious and spiritual beliefs are linked to the perception of agency, and suggest that gender differences in religious belief can be explained by differences in moral concern. These findings are consistent with the opposing domains hypothesis, according to which brain areas associated with moral concern and analytic thinking are in tension.

  6. School Behind Bars--A Descriptive Overview of Correctional Education in the American Prison System.

    ERIC Educational Resources Information Center

    Syracuse Univ. Research Corp., NY. Policy Inst.

    This report, intended to be a descriptive yet analytical overview of correctional education programs, is organized into six chapters. Chapter one discusses the philosophical aspects (pro and con) of prisoner education. Chapter two traces the history of prisoner education from the roots of its beginning to the present. Chapter three presents the…

  7. National Forum on Building Relationships for Educational Excellence in Corrections Proceedings (Crystal City, Virginia, October 22-23, 1984).

    ERIC Educational Resources Information Center

    Office of Vocational and Adult Education (ED), Washington, DC.

    This document contains the proceedings of an annual conference of corrections officials who gathered in order to build relationships for improving correctional education. Papers in the document include the following: (1) opening general session remarks and conference goals by John K. Wu, Deputy Assistant Secretary, Office of Vocational and Adult…

  8. Quantum corrections to quasi-periodic solution of Sine-Gordon model and periodic solution of phi4 model

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, G.; Leble, S.

    2014-03-01

    The analytical form of quantum corrections to the quasi-periodic solution of the Sine-Gordon model and the periodic solution of the phi^4 model is obtained through zeta-function regularisation, taking into account all the remaining variables of a d-dimensional theory. The qualitative dependence of the quantum corrections on the parameters of the classical systems is also evaluated for a much broader class of potentials u(x) = b^2 f(bx) + C, with b and C arbitrary real constants.

  9. Relationship of forces acting on implant rods and degree of scoliosis correction.

    PubMed

    Salmingo, Remel Alingalan; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2013-02-01

    Adolescent idiopathic scoliosis is a complex spinal pathology characterized as a three-dimensional spine deformity combined with vertebral rotation. Various surgical techniques for correction of severe scoliotic deformity have evolved and become more advanced in applying the corrective forces. The objective of this study was to investigate the relationship between the corrective forces acting on deformed rods and the degree of scoliosis correction. Implant rod geometries of six adolescent idiopathic scoliosis patients were measured before and after surgery. An elasto-plastic finite element model of the implant rod before surgery was reconstructed for each patient. An inverse method based on finite element analysis was used to apply forces to the implant rod model such that it deformed to the same shape as after surgery. The relationship between the magnitude of the corrective forces and the degree of correction, expressed as change of Cobb angle, was evaluated. The effects of screw configuration on the corrective forces were also investigated. Corrective forces acting on rods and degree of correction were not correlated. An increase in the number of implant screws tended to decrease the magnitude of the corrective forces but did not provide a higher degree of correction. Although greater correction was achieved with higher screw density, the forces increased at some levels. The biomechanics of scoliosis correction is not only dependent on the corrective forces acting on implant rods but is also associated with various parameters such as screw placement configuration and spine stiffness. Considering the magnitude of the forces, increasing screw density is not guaranteed to be the safest surgical strategy. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Coherent control of molecular alignment of homonuclear diatomic molecules by analytically designed laser pulses.

    PubMed

    Zou, Shiyang; Sanz, Cristina; Balint-Kurti, Gabriel G

    2008-09-28

    We present an analytic scheme for designing laser pulses to manipulate the field-free molecular alignment of a homonuclear diatomic molecule. The scheme is based on the use of a generalized pulse-area theorem and makes use of pulses constructed around two-photon resonant frequencies. In the proposed scheme, the populations and relative phases of the rovibrational states of the molecule are independently controlled utilizing changes in the laser intensity and in the carrier-envelope phase difference, respectively. This allows us to create the correct coherent superposition of rovibrational states needed to achieve optimal molecular alignment. The validity and efficiency of the scheme are demonstrated by explicit application to the H2 molecule. The analytically designed laser pulses are tested by exact numerical solutions of the time-dependent Schrödinger equation including laser-molecule interactions to all orders of the field strength. The design of a sequence of pulses to further enhance molecular alignment is also discussed and tested. It is found that the rotating wave approximation used in the analytic design of the laser pulses leads to small errors in the prediction of the relative phase of the rotational states. It is further shown how these errors may be easily corrected.

  11. Comparison of procedures for correction of matrix interferences in the analysis of soils by ICP-OES with CCD detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadler, D.A.; Sun, F.; Littlejohn, D.

    1995-12-31

    ICP-OES is a useful technique for multi-element analysis of soils. However, as a number of elements are present in relatively high concentrations, matrix interferences can occur and examples have been widely reported. The availability of CCD detectors has increased the opportunities for rapid multi-element, multi-wavelength determination of elemental concentrations in soils and other environmental samples. As the composition of soils from industrial sites can vary considerably, especially when taken from different pit horizons, procedures are required to assess the extent of interferences and correct the effects, on a simultaneous multi-element basis. In single element analysis, plasma operating conditions can sometimes be varied to minimize or even remove multiplicative interferences. In simultaneous multi-element analysis, the scope for this approach may be limited, depending on the spectrochemical characteristics of the emitting analyte species. Matrix matching, by addition of major sample components to the analyte calibrant solutions, can be used to minimize inaccuracies. However, there are also limitations to this procedure, when the sample composition varies significantly. Multiplicative interference effects can also be assessed by a 'single standard addition' of each analyte to the sample solution and the information obtained may be used to correct the analyte concentrations determined directly. Each of these approaches has been evaluated to ascertain the best procedure for multi-element analysis of industrial soils by ICP-OES with CCD detection at multiple wavelengths. Standard reference materials and field samples have been analyzed to illustrate the efficacy of each procedure.

  12. Analytic theory for the selection of 2-D needle crystal at arbitrary Peclet number

    NASA Technical Reports Server (NTRS)

    Tanveer, Saleh

    1989-01-01

    An accurate analytic theory is presented for the velocity selection of a two-dimensional needle crystal for arbitrary Peclet number for small values of the surface tension parameter. The velocity selection is caused by the effect of transcendentally small terms which are determined by analytic continuation to the complex plane and analysis of nonlinear equations. The work supports the general conclusion of previous small Peclet number analytical results of other investigators, though there are some discrepancies in details. It also addresses questions raised on the validity of selection theory owing to assumptions made on shape corrections at large distances from the tip.

  13. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    PubMed

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results and the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
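
    To make the notion of self-shielding concrete, the sketch below evaluates the textbook self-absorption factor for activity distributed uniformly through a slab; it is a generic illustration, not the specific correction factor derived in this work, and the attenuation coefficient used is approximate.

        import numpy as np

        # Generic self-absorption factor for a gamma-emitting activity distributed
        # uniformly through a slab of thickness t, viewed along t: the unattenuated
        # escaping fraction is (1 - exp(-mu*t)) / (mu*t). Textbook illustration only.

        def slab_self_absorption(mu_cm, t_cm):
            """mu_cm: linear attenuation coefficient (1/cm); t_cm: slab thickness (cm)."""
            x = mu_cm * t_cm
            return (1.0 - np.exp(-x)) / x if x > 0 else 1.0

        # Example: 662 keV gammas (137Cs) in concrete, mu ~ 0.2 /cm (approximate)
        print(slab_self_absorption(0.2, 10.0))   # ~0.43 of photons escape unattenuated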

  14. Effects of linear trends on estimation of noise in GNSS position time-series

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Finally, overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.

  15. Effects of linear trends on estimation of noise in GNSS position time-series

    NASA Astrophysics Data System (ADS)

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    2017-01-01

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this paper, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.

  16. Effects of linear trends on estimation of noise in GNSS position time-series

    DOE PAGES

    Dmitrieva, K.; Segall, P.; Bradley, A. M.

    2016-10-20

    A thorough understanding of time-dependent noise in Global Navigation Satellite System (GNSS) position time-series is necessary for computing uncertainties in any signals found in the data. However, estimation of time-correlated noise is a challenging task and is complicated by the difficulty in separating noise from signal, the features of greatest interest in the time-series. In this study, we investigate how linear trends affect the estimation of noise in daily GNSS position time-series. We use synthetic time-series to study the relationship between linear trends and estimates of time-correlated noise for the six most commonly cited noise models. We find that the effects of added linear trends, or conversely de-trending, vary depending on the noise model. The commonly adopted model of random walk (RW), flicker noise (FN) and white noise (WN) is the most severely affected by de-trending, with estimates of low-amplitude RW most severely biased. FN plus WN is least affected by adding or removing trends. Non-integer power-law noise estimates are also less affected by de-trending, but are very sensitive to the addition of trend when the spectral index is less than one. We derive an analytical relationship between linear trends and the estimated RW variance for the special case of pure RW noise. Finally, overall, we find that to ascertain the correct noise model for GNSS position time-series and to estimate the correct noise parameters, it is important to have independent constraints on the actual trends in the data.

  17. Commentary on "Distributed Revisiting: An Analytic for Retention of Coherent Science Learning"

    ERIC Educational Resources Information Center

    Hewitt, Jim

    2015-01-01

    The article, "Distributed Revisiting: An Analytic for Retention of Coherent Science Learning" is an interesting study that operates at the intersection of learning theory and learning analytics. The authors observe that the relationship between learning theory and research in the learning analytics field is constrained by several…

  18. ANALYTICAL ASSESSMENT OF THE IMPACTS OF PARTIAL MASS DEPLETION IN DNAPL SOURCE ZONES (SAN FRANCISCO, CA)

    EPA Science Inventory

    Analytical solutions describing the time-dependent DNAPL source-zone mass and contaminant discharge rate are used as a flux-boundary condition in a semi-analytical contaminant transport model. These analytical solutions assume a power relationship between the flow-averaged sourc...

  19. An analytical and experimental evaluation of a Fresnel lens solar concentrator

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Allums, S. A.; Cosby, R. M.

    1976-01-01

    Line-focusing Fresnel lenses with application potential in the 200 to 370 C range were evaluated analytically and experimentally. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based on a 56 cm wide, f/1.0 lens. A Sun-tracking heliostat provided a nonmoving solar source. Measured data indicated more spreading at the profile base than analytically predicted, resulting in a peak concentration 18 percent lower than the computed peak of 57. The measured and computed transmittances were 85 and 87 percent, respectively. Preliminary testing with a subsequent lens indicated that modified manufacturing techniques corrected the profile spreading problem and should enable improved analytical-experimental correlation.

  20. Self-Compassion and Relationship Maintenance: The Moderating Roles of Conscientiousness and Gender

    PubMed Central

    Baker, Levi; McNulty, James K.

    2010-01-01

    Should intimates respond to their interpersonal mistakes with self-criticism or with self-compassion? Although it is reasonable to expect self-compassion to benefit relationships by promoting self-esteem, it is also reasonable to expect self-compassion to hurt relationships by removing intimates’ motivation to correct their interpersonal mistakes. Two correlational studies, 1 experiment, and 1 longitudinal study demonstrated that whether self-compassion helps or hurts relationships depends on the presence versus absence of dispositional sources of the motivation to correct interpersonal mistakes. Among men, the implications of self-compassion were moderated by conscientiousness. Among men high in conscientiousness, self-compassion was associated with greater motivation to correct interpersonal mistakes (Studies 1 and 3), observations of more-constructive problem-solving behaviors (Study 2), reports of more accommodation (Study 3), and fewer declines in marital satisfaction that were mediated by decreases in interpersonal problem severity (Study 4); among men low in conscientiousness, self-compassion was associated with these outcomes in the opposite direction. Among women, in contrast, likely because women are inherently more motivated than men to preserve their relationships for cultural and/or biological reasons, self-compassion was never harmful to the relationship. Instead, women’s self-compassion was positively associated with the motivation to correct their interpersonal mistakes (Study 1) and changes in relationship satisfaction (Study 4), regardless of conscientiousness. Accordingly, theoretical descriptions of the implications of self-promoting thoughts for relationships may be most complete to the extent that they consider the presence versus absence of other sources of the motivation to correct interpersonal mistakes. PMID:21280964

  1. Self-compassion and relationship maintenance: the moderating roles of conscientiousness and gender.

    PubMed

    Baker, Levi R; McNulty, James K

    2011-05-01

    Should intimates respond to their interpersonal mistakes with self-criticism or with self-compassion? Although it is reasonable to expect self-compassion to benefit relationships by promoting self-esteem, it is also reasonable to expect self-compassion to hurt relationships by removing intimates' motivation to correct their interpersonal mistakes. Two correlational studies, 1 experiment, and 1 longitudinal study demonstrated that whether self-compassion helps or hurts relationships depends on the presence versus absence of dispositional sources of the motivation to correct interpersonal mistakes. Among men, the implications of self-compassion were moderated by conscientiousness. Among men high in conscientiousness, self-compassion was associated with greater motivation to correct interpersonal mistakes (Studies 1 and 3), observations of more constructive problem-solving behaviors (Study 2), reports of more accommodation (Study 3), and fewer declines in marital satisfaction that were mediated by decreases in interpersonal problem severity (Study 4); among men low in conscientiousness, self-compassion was associated with these outcomes in the opposite direction. Among women, in contrast, likely because women are inherently more motivated than men to preserve their relationships for cultural and/or biological reasons, self-compassion was never harmful to the relationship. Instead, women's self-compassion was positively associated with the motivation to correct their interpersonal mistakes (Study 1) and changes in relationship satisfaction (Study 4), regardless of conscientiousness. Accordingly, theoretical descriptions of the implications of self-promoting thoughts for relationships may be most complete to the extent that they consider the presence versus absence of other sources of the motivation to correct interpersonal mistakes. (c) 2011 APA, all rights reserved.

  2. Theoretical versus experimental results for the rotordynamic coefficients of eccentric, smooth annular gas seals

    NASA Technical Reports Server (NTRS)

    Childs, Dara W.; Alexander, Chis

    1994-01-01

    This viewgraph presentation summarizes the following results: (1) The analytical results overpredict the experimental results for the direct stiffness values and incorrectly predict increasing stiffness with decreasing pressure ratios. (2) Theory correctly predicts increasing cross-coupled stiffness, K(sub YX), with increasing eccentricity and inlet preswirl. (3) Theory underpredicts the experimental direct damping, C(sub XX), but the analytical results do correctly show that damping increases with increasing eccentricity. (4) The whirl frequency values predicted by theory are insensitive to changes in the static eccentricity ratio. Although these values match perfectly with the experimental results at 16,000 rpm, the results at the lower speed do not correspond. (5) Theoretical and experimental mass flow rates match at 5000 rpm, but at 16,000 rpm the theoretical results overpredict the experimental mass flow rates. (6) Theory correctly shows the linear pressure profiles and the associated entrance losses with the specified rotor positions.

  3. Leading non-Gaussian corrections for diffusion orientation distribution function.

    PubMed

    Jensen, Jens H; Helpern, Joseph A; Tabesh, Ali

    2014-02-01

    An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed from the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves on the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. 2013 John Wiley & Sons, Ltd.

  4. Leading Non-Gaussian Corrections for Diffusion Orientation Distribution Function

    PubMed Central

    Jensen, Jens H.; Helpern, Joseph A.; Tabesh, Ali

    2014-01-01

    An analytical representation of the leading non-Gaussian corrections for a class of diffusion orientation distribution functions (dODFs) is presented. This formula is constructed out of the diffusion and diffusional kurtosis tensors, both of which may be estimated with diffusional kurtosis imaging (DKI). By incorporating model-independent non-Gaussian diffusion effects, it improves upon the Gaussian approximation used in diffusion tensor imaging (DTI). This analytical representation therefore provides a natural foundation for DKI-based white matter fiber tractography, which has potential advantages over conventional DTI-based fiber tractography in generating more accurate predictions for the orientations of fiber bundles and in being able to directly resolve intra-voxel fiber crossings. The formula is illustrated with numerical simulations for a two-compartment model of fiber crossings and for human brain data. These results indicate that the inclusion of the leading non-Gaussian corrections can significantly affect fiber tractography in white matter regions, such as the centrum semiovale, where fiber crossings are common. PMID:24738143

  5. First Steps in FAP: Experiences of Beginning Functional Analytic Psychotherapy Therapist with an Obsessive-Compulsive Personality Disorder Client

    ERIC Educational Resources Information Center

    Manduchi, Katia; Schoendorff, Benjamin

    2012-01-01

    Practicing Functional Analytic Psychotherapy (FAP) for the first time can seem daunting to therapists. Establishing a deep and intense therapeutic relationship, identifying FAP's therapeutic targets of clinically relevant behaviors, and using contingent reinforcement to help clients emit more functional behavior in the therapeutic relationship all…

  6. Reporting standards for Bland-Altman agreement analysis in laboratory research: a cross-sectional survey of current practice.

    PubMed

    Chhapola, Viswas; Kanwal, Sandeep Kumar; Brar, Rekha

    2015-05-01

    The aim was to carry out a cross-sectional survey of laboratory research papers published after 2012 and available in common search engines (PubMed, Google Scholar), assessing the quality of statistical reporting of method comparison studies that use Bland-Altman (B-A) analysis. Fifty clinical studies were identified which had undertaken method comparison of laboratory analytes using B-A. The reporting of B-A was evaluated using a predesigned checklist with the following six items: (1) correct representation of the x-axis on the B-A plot, (2) representation and correct definition of the limits of agreement (LOA), (3) reporting of the confidence interval (CI) of the LOA, (4) comparison of the LOA with a priori defined clinical criteria, (5) evaluation of the pattern of the relationship between difference (y-axis) and average (x-axis), and (6) measures of repeatability. The x-axis and LOA were presented correctly in 94%, comparison with a priori clinical criteria in 74%, CI reporting in 6%, evaluation of pattern in 28% and repeatability assessment in 38% of studies. There is incomplete reporting of B-A in published clinical studies. Despite its simplicity, B-A appears not to be completely understood by researchers, reviewers and editors of journals. There appear to be differences in the reporting of B-A between laboratory medicine journals and other clinical journals. Uniform reporting of the B-A method will enhance the generalizability of results. © The Author(s) 2014.
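
    A minimal sketch of the checklist items above (correct x-axis, limits of agreement, and their confidence intervals), using the standard Bland-Altman formulas rather than anything specific to the surveyed papers:

    ```python
    # Sketch of a Bland-Altman (B-A) summary covering the checklist items:
    # correct x-axis (average of methods), limits of agreement (LOA), and LOA CIs.
    import numpy as np
    from scipy import stats

    def bland_altman_summary(method_a, method_b, alpha=0.05):
        a, b = np.asarray(method_a, float), np.asarray(method_b, float)
        diff, mean = a - b, (a + b) / 2.0          # y-axis and correct x-axis
        n = diff.size
        bias, sd = diff.mean(), diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)
        # Approximate SE of each LOA (Bland & Altman 1986): sqrt(3 * sd^2 / n)
        se_loa = np.sqrt(3.0 * sd**2 / n)
        t = stats.t.ppf(1 - alpha / 2, n - 1)
        loa_ci = [(l - t * se_loa, l + t * se_loa) for l in loa]
        return {"bias": bias, "loa": loa, "loa_ci": loa_ci, "mean": mean, "diff": diff}

    # Usage with made-up paired measurements:
    rng = np.random.default_rng(1)
    x = rng.normal(100, 10, 40)
    print(bland_altman_summary(x, x + rng.normal(0.5, 2.0, 40))["loa"])
    ```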

  7. Precise determination of the lutetium isotopic composition in rocks and minerals using multicollector ICPMS.

    PubMed

    Wimpenny, Josh B; Amelin, Yuri; Yin, Qing-Zhu

    2013-12-03

    Evidence of (176)Hf excess in select meteorites older than 4556 Ma was suggested to be caused by excitation of the long-lived natural radionuclide (176)Lu to its short-lived isomer (176m)Lu, due to an irradiation event during accretion in the early solar system. A result of this process would be a deficit in (176)Lu in irradiated samples by between 1‰ and 7‰. Previous measurements of the Lu isotope ratio in rock samples have not been of sufficient precision to resolve such a phenomenon. We present a new analytical technique designed to measure the (176)Lu/(175)Lu isotope ratio in rock samples to a precision of ~0.1‰ using a multicollector inductively coupled plasma mass spectrometer (MC-ICPMS). To account for mass bias we normalized all unknowns to Ames Lu. To correct for any drift and instability associated with mass bias, all standards and samples are doped with W metal and normalized to the nominal W isotopic composition. Any instability in the mass bias is then corrected by characterizing the relationship between the fractionation factor of Lu and W, which is calculated at the start of every analytical session. After correction for isobaric interferences, in particular (176)Yb, we were able to measure (176)Lu/(175)Lu ratios in samples to a precision of ~0.1‰. However, these terrestrial standards were fractionated from Ames Lu by an average of 1.22 ± 0.09‰. This offset in (176)Lu/(175)Lu is probably caused by isotopic fractionation of Lu during industrial processing of the Ames Lu standard. To allow more straightforward data comparison we propose the use of NIST3130a as a bracketing standard in future studies. Relative to NIST3130a, the terrestrial standards have a final weighted mean δ(176)Lu value of 0.11 ± 0.09‰. All samples have uncertainties of better than 0.11‰; hence, our technique is fully capable of resolving any differences in δ(176)Lu of greater than 1‰.
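
    A sketch of the kind of exponential-law mass-bias correction described above, with admixed W used to track drift; the isotope masses and the 186W/184W reference value below are nominal values inserted for illustration, and the session-specific Lu-versus-W fractionation relationship is reduced here to a single assumed slope:

    ```python
    # Sketch: exponential-law mass-bias correction of 176Lu/175Lu using admixed W.
    # The session-specific slope relating f(Lu) to f(W) is assumed known (here 1.0);
    # masses and the 186W/184W reference value are nominal, for illustration only.
    import math

    M_LU176, M_LU175 = 175.942690, 174.940777     # atomic masses (u), nominal
    M_W186,  M_W184  = 185.954364, 183.950933
    R_W_REF = 0.92767                              # assumed 186W/184W reference value

    def mass_bias_corrected_lu(r_lu_meas, r_w_meas, slope_lu_vs_w=1.0):
        """Correct a measured 176Lu/175Lu ratio for instrumental mass bias."""
        # Fractionation factor of W from its measured vs. reference ratio:
        f_w = math.log(R_W_REF / r_w_meas) / math.log(M_W186 / M_W184)
        # Session-calibrated relationship between Lu and W fractionation factors:
        f_lu = slope_lu_vs_w * f_w
        # Exponential-law correction of the Lu ratio:
        return r_lu_meas * (M_LU176 / M_LU175) ** f_lu

    print(mass_bias_corrected_lu(r_lu_meas=0.02680, r_w_meas=0.9340))
    ```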

  8. Design of a One-Dimensional Sextupole Using Semi-Analytic Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, L.; Nagaitsev, S.; Baturin, S. S.

    Sextupole magnets provide position-dependent momentum kicks and are tuned to provide the correct kicks to particles within a small acceptance region in phase space. Sextupoles are useful and even necessary in circular accelerators for chromaticity corrections. They are routinely used in most rings, i.e. CESR. Although sextupole magnets are necessary for particle energy corrections, they also have undesirable effects on dynamic aperture, especially because of their non-linear coupling term in the momentum kick. Studies of integrable systems suggest that there is an analytic way to create transport lattices with specific transfer matrices that limit the momentum kick to one dimension. A one-dimension sextupole is needed for chromaticity corrections: a horizontal sextupole for horizontal bending magnets. We know how to make a "composite" horizontal sextupole using regular 2D sextupoles and linear transfer matrices in an ideal thin-lens approximation. Thus, one could create an accelerator lattice using linear elements, in series with sextupole magnets to create a "1D sextupole". This paper describes progress towards realizing a realistic focusing lattice resulting in a 1D sextupole.

  9. Scale Estimation and Correction of the Monocular Simultaneous Localization and Mapping (SLAM) Based on Fusion of 1D Laser Range Finder and Vision Data.

    PubMed

    Zhang, Zhuang; Zhao, Rujin; Liu, Enhai; Yan, Kun; Ma, Yuebo

    2018-06-15

    This article presents a new sensor fusion method for visual simultaneous localization and mapping (SLAM) through integration of a monocular camera and a 1D laser range finder. Such a fusion method provides scale estimation and drift correction, is not limited by volume (unlike a stereo camera, which is constrained by its baseline), and overcomes the limited depth range associated with SLAM for RGBD cameras. We first present the analytical feasibility of estimating the absolute scale through the fusion of 1D distance information and image information. Next, the analytical derivation of the laser-vision fusion is described in detail, based on local dense reconstruction of the image sequences. We also correct the scale drift of the monocular SLAM using the laser distance information, which is independent of the drift error. Finally, application of this approach to both indoor and outdoor scenes is verified on the Technical University of Munich RGBD dataset and on self-collected data. We compare the scale estimation and drift correction of the proposed method with SLAM for a monocular camera and an RGBD camera.
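
    The scale-estimation step can be illustrated with a minimal least-squares sketch (hypothetical variable names; the paper's actual derivation uses local dense reconstruction rather than this simplified per-point ratio):

    ```python
    # Sketch: estimating the absolute scale of a monocular reconstruction from a
    # 1D laser range finder. Assumes the laser depth d_i and the (up-to-scale)
    # monocular depth z_i of the same scene point are available for each frame.
    import numpy as np

    def estimate_scale(laser_depths, mono_depths):
        """Least-squares scale s minimizing sum_i (d_i - s * z_i)^2."""
        d = np.asarray(laser_depths, float)
        z = np.asarray(mono_depths, float)
        return float(np.dot(d, z) / np.dot(z, z))

    # Made-up example: true scale 2.5 with a little measurement noise.
    rng = np.random.default_rng(2)
    z = rng.uniform(0.5, 4.0, 50)
    d = 2.5 * z + rng.normal(0.0, 0.02, 50)
    s = estimate_scale(d, z)
    print(s)                      # ~2.5
    print(s * z[:3])              # metrically scaled depths usable for drift correction
    ```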

  10. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  11. Changing the Guard: Male Correctional Officers' Attitudes toward Women as Co-Workers.

    ERIC Educational Resources Information Center

    Walters, Stephen

    1993-01-01

    Surveyed male correctional officers at four correctional facilities concerning their attitudes toward their role as correctional officers and corrections in general. Respondents (n=178) gave their attitudes toward working with women as correctional officers. Significantly related to "pro-women" attitudes were quality of working relationship with…

  12. Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution.

    DTIC Science & Technology

    1980-05-27

    Correction of Quenching Errors in Analytical Fluorimetry through Use of Time Resolution, by Gary M. Hieftje and Gilbert R. Haugen. Prepared for publication in Analytical and Clinical Chemistry, vol. 3, D. M. Hercules, G. M. Hieftje, L. R. Snyder, and M. A. Evenson, eds., Plenum Press, N.Y., 1978, ch. 5.

  13. Uncertainty of relative sensitivity factors in glow discharge mass spectrometry

    NASA Astrophysics Data System (ADS)

    Meija, Juris; Methven, Brad; Sturgeon, Ralph E.

    2017-10-01

    The concept of the relative sensitivity factors required for the correction of the measured ion beam ratios in pin-cell glow discharge mass spectrometry is examined in detail. We propose a data-driven model for predicting the relative response factors, which relies on a non-linear least squares adjustment and analyte/matrix interchangeability phenomena. The model provides a self-consistent set of response factors for any analyte/matrix combination of any element that appears as either an analyte or matrix in at least one known response factor.
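
    One plausible reading of such a model (an assumption for illustration, not the authors' published adjustment) is that each element carries a single sensitivity whether it acts as analyte or matrix, so that the logarithm of a relative sensitivity factor separates into an analyte term minus a matrix term, and unmeasured combinations can then be inferred by least squares:

    ```python
    # Sketch of one assumed model form: RSF(analyte, matrix) = s_analyte / s_matrix,
    # with a single sensitivity s per element whether it acts as analyte or matrix.
    # Known RSFs (made-up numbers) determine the s values by least squares in log space.
    import numpy as np

    known = {("Fe", "Ni"): 1.30, ("Cu", "Ni"): 0.85, ("Zn", "Cu"): 1.10}
    elements = sorted({e for pair in known for e in pair})
    idx = {e: i for i, e in enumerate(elements)}

    # Solve log RSF(a, m) = x_a - x_m for x = log s (one element pinned as reference).
    A = np.zeros((len(known) + 1, len(elements)))
    b = np.zeros(len(known) + 1)
    for row, ((a, m), rsf) in enumerate(known.items()):
        A[row, idx[a]], A[row, idx[m]], b[row] = 1.0, -1.0, np.log(rsf)
    A[-1, 0] = 1.0                      # pin the first element: log s = 0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)

    def predicted_rsf(analyte, matrix):
        return float(np.exp(x[idx[analyte]] - x[idx[matrix]]))

    print(predicted_rsf("Zn", "Ni"))    # inferred by chaining through shared elements
    ```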

  14. MO-DE-BRA-02: SIMAC: A Simulation Tool for Teaching Linear Accelerator Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlone, M; Harnett, N; Department of Radiation Oncology, University of Toronto, Toronto, Ontario

    Purpose: The first goal of this work is to develop software that can simulate the physics of linear accelerators (linacs). The second goal is to show that this simulation tool is effective in teaching linac physics to medical physicists and linac service engineers. Methods: Linacs were modeled using analytical expressions that can correctly describe the physical response of a linac to parameter changes in real time. These expressions were programmed with a graphical user interface in order to produce an environment similar to that of linac service mode. The software, “SIMAC”, has been used as a learning aid in a professional development course 3 times (2014 – 2016) as well as in a physics graduate program. Exercises were developed to supplement the didactic components of the courses, consisting of activities designed to reinforce the concepts of beam loading; the effect of steering coil currents on beam symmetry; and the relationship between beam energy and flatness. Results: SIMAC was used to teach 35 professionals (medical physicists; regulators; service engineers; 1 week course) as well as 20 graduate students (1 month project). In the student evaluations, 85% of the students rated the effectiveness of SIMAC as very good or outstanding, and 70% rated the software as the most effective part of the courses. Exercise results were collected showing that 100% of the students were able to use the software correctly. In exercises involving gross changes to linac operating points (i.e. energy changes) the majority of students were able to correctly perform these beam adjustments. Conclusion: Software simulation (SIMAC) can be used to effectively teach linac physics. In short courses, students were able to correctly make gross parameter adjustments that typically require much longer training times using conventional training methods.

  15. Next-to-leading-logarithmic power corrections for N -jettiness subtraction in color-singlet production

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja; Isgrò, Andrea; Petriello, Frank

    2018-04-01

    We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T . Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N -jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.

  16. [Basic research on digital logistic management of hospital].

    PubMed

    Cao, Hui

    2010-05-01

    This paper analyzes and explores the possibilities of digital, information-based management by the equipment department, general services department, supply room and other material-flow departments in different hospitals, in order to optimize the procedures of information-based asset management. Various analytical methods for medical-supply business models provide data to support correct decisions by hospital departments, hospital leaders and the governing authorities.

  17. Charge conservation in electronegativity equalization and its implications for the electrostatic properties of fluctuating-charge models.

    PubMed

    Chen, Jiahao; Martínez, Todd J

    2009-07-28

    An analytical solution of fluctuating-charge models using Gaussian elimination allows us to isolate the contribution of charge conservation effects in determining the charge distribution. We use this analytical solution to calculate dipole moments and polarizabilities and show that charge conservation plays a critical role in maintaining the correct translational invariance of the electrostatic properties predicted by these models.
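
    A minimal sketch of a charge-conserving fluctuating-charge (electronegativity-equalization) model, with the total-charge constraint enforced by a Lagrange multiplier and the whole problem solved as one linear system; the electronegativity and hardness values below are made up for illustration:

    ```python
    # Minimal sketch of a fluctuating-charge model with total-charge conservation:
    # minimize E(q) = chi.q + 0.5 q.J.q  subject to sum(q) = Q_total,
    # enforced with a Lagrange multiplier and solved as one linear system.
    # The electronegativities chi and hardness matrix J are made-up numbers.
    import numpy as np

    chi = np.array([4.8, 3.0, 2.2])              # site electronegativities (eV)
    J = np.array([[10.0, 3.0, 3.0],              # hardness / Coulomb matrix (eV)
                  [ 3.0, 8.0, 2.0],
                  [ 3.0, 2.0, 8.0]])
    Q_total = 0.0                                # neutral molecule

    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = J
    A[:n, n] = A[n, :n] = 1.0                    # constraint row/column: sum(q) = Q
    rhs = np.concatenate([-chi, [Q_total]])

    sol = np.linalg.solve(A, rhs)
    q = sol[:n]
    print("charges:", q, "sum:", q.sum())        # sum reproduces Q_total exactly

    # Translational invariance of the dipole follows from charge conservation:
    positions = np.array([0.0, -1.2, 1.2])       # 1-D site coordinates (arb. units)
    print(np.dot(q, positions), np.dot(q, positions + 5.0))  # identical when Q = 0
    ```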

  18. EFFECTS OF LASER RADIATION ON MATTER. LASER PLASMA: Spatial-temporal distribution of a mechanical load resulting from interaction of laser radiation with a barrier (analytic model)

    NASA Astrophysics Data System (ADS)

    Fedyushin, B. T.

    1992-01-01

    The concepts developed earlier are used to propose a simple analytic model describing the spatial-temporal distribution of a mechanical load (pressure, impulse) resulting from interaction of laser radiation with a planar barrier surrounded by air. The correctness of the model is supported by a comparison with experimental results.

  19. BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.

    PubMed

    Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren

    2016-01-01

    Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We make bundles first-class objects and add a new layer, "in-between", to contain these bundle objects. Based on this, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.

  20. A geovisual analytic approach to understanding geo-social relationships in the international trade network.

    PubMed

    Luo, Wei; Yin, Peifeng; Di, Qian; Hardisty, Frank; MacEachren, Alan M

    2014-01-01

    The world has become a complex set of geo-social systems interconnected by networks, including transportation networks, telecommunications, and the internet. Understanding the interactions between spatial and social relationships within such geo-social systems is a challenge. This research aims to address this challenge through the framework of geovisual analytics. We present the GeoSocialApp which implements traditional network analysis methods in the context of explicitly spatial and social representations. We then apply it to an exploration of international trade networks in terms of the complex interactions between spatial and social relationships. This exploration using the GeoSocialApp helps us develop a two-part hypothesis: international trade network clusters with structural equivalence are strongly 'balkanized' (fragmented) according to the geography of trading partners, and the geographical distance weighted by population within each network cluster has a positive relationship with the development level of countries. In addition to demonstrating the potential of visual analytics to provide insight concerning complex geo-social relationships at a global scale, the research also addresses the challenge of validating insights derived through interactive geovisual analytics. We develop two indicators to quantify the observed patterns, and then use a Monte-Carlo approach to support the hypothesis developed above.

  1. A Geovisual Analytic Approach to Understanding Geo-Social Relationships in the International Trade Network

    PubMed Central

    Luo, Wei; Yin, Peifeng; Di, Qian; Hardisty, Frank; MacEachren, Alan M.

    2014-01-01

    The world has become a complex set of geo-social systems interconnected by networks, including transportation networks, telecommunications, and the internet. Understanding the interactions between spatial and social relationships within such geo-social systems is a challenge. This research aims to address this challenge through the framework of geovisual analytics. We present the GeoSocialApp which implements traditional network analysis methods in the context of explicitly spatial and social representations. We then apply it to an exploration of international trade networks in terms of the complex interactions between spatial and social relationships. This exploration using the GeoSocialApp helps us develop a two-part hypothesis: international trade network clusters with structural equivalence are strongly ‘balkanized’ (fragmented) according to the geography of trading partners, and the geographical distance weighted by population within each network cluster has a positive relationship with the development level of countries. In addition to demonstrating the potential of visual analytics to provide insight concerning complex geo-social relationships at a global scale, the research also addresses the challenge of validating insights derived through interactive geovisual analytics. We develop two indicators to quantify the observed patterns, and then use a Monte-Carlo approach to support the hypothesis developed above. PMID:24558409

  2. Eliciting the Functional Processes of Apologizing for Errors in Health Care

    PubMed Central

    Prothero, Marie M.; Morse, Janice M.

    2017-01-01

    The purpose of this article was to analyze the concept development of apology in the context of errors in health care, the administrative response, policy and format/process of the subsequent apology. Using pragmatic utility and a systematic review of the literature, 29 articles and one book provided attributes involved in apologizing. Analytic questions were developed to guide the data synthesis, and types of apologies used in different circumstances were identified. The antecedents of apologizing, and the attributes and outcomes, were identified. A model was constructed illustrating the components of a complete apology, other types of apologies, and ramifications/outcomes of each. Clinical implications of developing formal policies for correcting medical errors through apologies are recommended. Defining the essential elements of apology is the first step in establishing a just culture in health care. Respect for patient-centered care reduces retaliatory consequences following an error, and may even restore the physician-patient relationship. PMID:28540337

  3. Optical pseudomotors for soft x-ray beamlines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedreira, P., E-mail: ppedreira@cells.es; Sics, I.; Sorrentino, A.

    2016-05-15

    Optical elements of soft x-ray beamlines usually have motorized translations and rotations that allow for the fine alignment of the beamline. This is to steer the photon beam at some positions and to correct the focus on slits or on sample. Generally, each degree of freedom of a mirror induces a change of several parameters of the beam. Inversely, several motions are required to actuate on a single optical parameter, keeping the others unchanged. We define optical pseudomotors as combinations of physical motions of the optical elements of a beamline, which allow modifying one optical parameter without affecting the others. We describe a method to obtain analytic relationships between physical motions of mirrors and the corresponding variations of the beam parameters. This method has been implemented and tested at two beamlines at ALBA, where it is used to control the focus of the photon beam and its position independently.
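
    The idea can be sketched with a linearized response matrix and a pseudoinverse (hypothetical numbers; a real beamline would measure or derive the matrix analytically as described above):

    ```python
    # Sketch: building "pseudomotors" from a linearized response matrix R that maps
    # physical mirror motions (pitch, translation, ...) to beam parameters
    # (position at a slit, focus position). Values are made up for illustration.
    import numpy as np

    # Rows: beam parameters; columns: physical motions.
    R = np.array([[2.0, 0.5, 0.0],
                  [0.3, 1.5, 1.0]])

    # Pseudomotor matrix: minimum-norm motion combinations producing a unit change
    # in exactly one beam parameter and zero change in the others.
    P = np.linalg.pinv(R)          # shape (n_motions, n_parameters)

    move_focus_only = P[:, 1]      # combined motion that changes only the focus
    print("motor increments:", move_focus_only)
    print("resulting beam-parameter change:", R @ move_focus_only)  # ~[0, 1]
    ```

    Because R has more motions than parameters, the pseudoinverse picks the smallest combined motion that achieves the requested decoupled change; other weightings could be chosen if some motors are slower or less reliable.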

  4. Exact moments of the Sachdev-Ye-Kitaev model up to order 1/N²

    NASA Astrophysics Data System (ADS)

    García-García, Antonio M.; Jia, Yiyang; Verbaarschot, Jacobus J. M.

    2018-04-01

    We analytically evaluate the moments of the spectral density of the q-body Sachdev-Ye-Kitaev (SYK) model, and obtain order 1/N² corrections for all moments, where N is the total number of Majorana fermions. To order 1/N, moments are given by those of the weight function of the Q-Hermite polynomials. Representing Wick contractions by rooted chord diagrams, we show that the 1/N² correction for each chord diagram is proportional to the number of triangular loops of the corresponding intersection graph, with an extra grading factor when q is odd. Therefore the problem of finding 1/N² corrections is mapped to a triangle counting problem. Since the total number of triangles is a purely graph-theoretic property, we can compute them for the q = 1 and q = 2 SYK models, where the exact moments can be obtained analytically using other methods, and therefore we have solved the moment problem for any q to 1/N² accuracy. The moments are then used to obtain the spectral density of the SYK model to order 1/N². We also obtain an exact analytical result for all contraction diagrams contributing to the moments, which can be evaluated up to eighth order. This shows that the Q-Hermite approximation is accurate even for small values of N.

  5. Some Comments on Mapping from Disease-Specific to Generic Health-Related Quality-of-Life Scales

    PubMed Central

    Palta, Mari

    2013-01-01

    An article by Lu et al. in this issue of Value in Health addresses the mapping of treatment or group differences in disease-specific measures (DSMs) of health-related quality of life onto differences in generic health-related quality-of-life scores, with special emphasis on how the mapping is affected by the reliability of the DSM. In the proposed mapping, a factor analytic model defines a conversion factor between the scores as the ratio of factor loadings. Hence, the mapping applies to convert true underlying scales and has desirable properties facilitating the alignment of instruments and understanding their relationship in a coherent manner. It is important to note, however, that when DSM means or differences in mean DSMs are estimated, their mapping is still of a measurement error–prone predictor, and the correct conversion coefficient is the true mapping multiplied by the reliability of the DSM in the relevant sample. In addition, the proposed strategy for estimating the factor analytic mapping in practice requires assumptions that may not hold. We discuss these assumptions and how they may be the reason we obtain disparate estimates of the mapping factor in an application of the proposed methods to groups of patients. PMID:23337233
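
    The attenuation point can be made concrete with a small numeric sketch (illustrative values only): the true-score conversion factor is the ratio of factor loadings, but mapping an observed, error-prone DSM difference requires multiplying that ratio by the DSM's reliability in the relevant sample.

    ```python
    # Sketch of the attenuation point above. All numbers are illustrative.
    lambda_generic = 0.60      # loading of the generic HRQoL score on the common factor
    lambda_dsm = 0.80          # loading of the disease-specific measure (DSM)
    reliability_dsm = 0.75     # reliability of the DSM in the relevant sample

    true_conversion = lambda_generic / lambda_dsm            # maps true-score units
    observed_conversion = true_conversion * reliability_dsm  # maps observed scores

    observed_dsm_difference = 10.0                           # e.g. an estimated group difference
    print("naive mapping:    ", true_conversion * observed_dsm_difference)
    print("attenuation-aware:", observed_conversion * observed_dsm_difference)
    ```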

  6. Implementation of standardization in clinical practice: not always an easy task.

    PubMed

    Panteghini, Mauro

    2012-02-29

    As soon as a new reference measurement system is adopted, clinical validation of correctly calibrated commercial methods should take place. Tracing back the calibration of routine assays to a reference system can actually modify the relation of analyte results to existing reference intervals and decision limits and this may invalidate some of the clinical decision-making criteria currently used. To maintain the accumulated clinical experience, the quantitative relationship to the previous calibration system should be established and, if necessary, the clinical decision-making criteria should be adjusted accordingly. The implementation of standardization should take place in a concerted action of laboratorians, manufacturers, external quality assessment scheme organizers and clinicians. Dedicated meetings with manufacturers should be organized to discuss the process of assay recalibration and studies should be performed to obtain convincing evidence that the standardization works, improving result comparability. Another important issue relates to the surveillance of the performance of standardized assays through the organization of appropriate analytical internal and external quality controls. Last but not least, uncertainty of measurement that fits for this purpose must be defined across the entire traceability chain, starting with the available reference materials, extending through the manufacturers and their processes for assignment of calibrator values and ultimately to the final result reported to clinicians by laboratories.

  7. The color of complexes and UV-vis spectroscopy as an analytical tool of Alfred Werner's group at the University of Zurich.

    PubMed

    Fox, Thomas; Berke, Heinz

    2014-01-01

    Two PhD theses (Alexander Gordienko, 1912; Johannes Angerstein, 1914) and a dissertation in partial fulfillment of a PhD thesis (H. S. French, Zurich, 1914) are reviewed that deal with hitherto unpublished UV-vis spectroscopy work of coordination compounds in the group of Alfred Werner. The method of measurement of UV-vis spectra at Alfred Werner's time is described in detail. Examples of spectra of complexes are given, which were partly interpreted in terms of structure (cis ↔ trans configuration, counting number of bands for structural relationships, and shift of general spectral features by consecutive replacement of ligands). A more complete interpretation of spectra was hampered at Alfred Werner's time by the lack of a light absorption theory and a correct theory of electron excitation, and the lack of a ligand field theory for coordination compounds. The experimentally difficult data acquisitions and the difficult spectral interpretations might have been reasons why this method did not experience a breakthrough in Alfred Werner's group to play a more prominent role as an important analytical method. Nevertheless the application of UV-vis spectroscopy on coordination compounds was unique and novel, and witnesses Alfred Werner's great aptitude and keenness to always try and go beyond conventional practice.

  8. The Influence of Non-spectral Matrix Effects on the Accuracy of Isotope Ratio Measurement by MC-ICP-MS

    NASA Astrophysics Data System (ADS)

    Barling, J.; Shiel, A.; Weis, D.

    2006-12-01

    Non-spectral interferences in ICP-MS are caused by matrix elements affecting the ionisation and transmission of analyte elements. They are difficult to identify in MC-ICP-MS isotopic data because affected analyses exhibit normal mass dependent isotope fractionation. We have therefore investigated a wide range of matrix elements for both stable and radiogenic isotope systems using a Nu Plasma MC-ICP-MS. Matrix elements commonly enhance analyte sensitivity and change the instrumental mass bias experienced by analyte elements. These responses vary with element and therefore have important ramifications for the correction of data for instrumental mass bias by use of an external element (e.g. Pb and many non-traditional stable isotope systems). For Pb isotope measurements (Tl as mass bias element), Mg, Al, Ca, and Fe were investigated as matrix elements. All produced signal enhancement in Pb and Tl. Signal enhancement varied from session to session, but for Ca and Al enhancement in Pb was less than for Tl, while for Mg and Fe enhancement levels for Pb and Tl were similar. After correction for instrumental mass fractionation using Tl, Mg-affected Pb isotope ratios were heavy (e.g. ^{208}Pb/^{204}Pb_matrix > ^{208}Pb/^{204}Pb_true) for both moderate and high [Mg], while Ca-affected Pb ratios showed little change at moderate [Ca] but were light at high [Ca]. ^{208}Pb/^{204}Pb_matrix - ^{208}Pb/^{204}Pb_true for all elements ranged from +0.0122 to -0.0177. Isotopic shifts of similar magnitude are observed between Pb analyses of samples that have seen either one or two passes through chemistry (Nobre Silva et al, 2005). The double-pass purified aliquots always show better reproducibility. These studies show that the presence of matrix can have a significant effect on the accuracy and reproducibility of replicate Pb isotope analyses. For non-traditional stable isotope systems (e.g. Mo(Zr), Cd(Ag)), the different responses of analyte and mass bias elements to the presence of matrix can result in del/amu for measured and mass-bias-corrected data that disagree outside of error. Either or both values can be incorrect. For samples, unlike experiments, the correct del/amu is not known in advance. Therefore, for sample analyses to be considered accurate, both measured and exponentially corrected del/amu should agree.

  9. Providing solid angle formalism for skyshine calculations.

    PubMed

    Gossman, Michael S; Pahikkala, A Jussi; Rising, Mary B; McGinley, Patton H

    2010-08-17

    We detail, derive and correct the technical use of the solid angle variable identified in formal guidance that relates skyshine calculations to dose-equivalent rate. We further recommend it for use with all National Council on Radiation Protection and Measurements (NCRP), Institute of Physics and Engineering in Medicine (IPEM) and similar reports. In general, for beams of identical width (and hence different areas), the analytical pyramidal solution is 1.27 times greater than a misapplied analytical conical solution, to within ±1.0% maximum deviation, for all field sizes up to 40 × 40 cm². Therefore, we recommend determining exact results with the analytical pyramidal solution for square beams and the analytical conical solution for circular beams.
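
    The 1.27 factor can be checked numerically: for a square field of side a and a circular field of the same width (diameter a), the ratio of the exact on-axis pyramidal to conical solid angles tends to 4/π ≈ 1.273. The sketch below uses the standard textbook solid-angle expressions and an assumed 100 cm distance, not values taken from the report itself.

    ```python
    # Quick numerical check of the ~1.27 factor: solid angle of a square field of
    # side a (pyramidal) versus a circular field of the same width a (conical),
    # both centered on-axis at distance d. Standard textbook formulas.
    import math

    def pyramidal_solid_angle(a, b, d):
        """Rectangle a x b viewed on-axis from distance d."""
        return 4.0 * math.atan(a * b / (2.0 * d * math.sqrt(4 * d**2 + a**2 + b**2)))

    def conical_solid_angle(diameter, d):
        """Circle of given diameter viewed on-axis from distance d."""
        half_angle = math.atan((diameter / 2.0) / d)
        return 2.0 * math.pi * (1.0 - math.cos(half_angle))

    d = 100.0                                   # assumed 100 cm to the field plane
    for a in (5.0, 10.0, 20.0, 40.0):           # square field sizes in cm
        ratio = pyramidal_solid_angle(a, a, d) / conical_solid_angle(a, d)
        print(f"{a:5.1f} cm field: pyramid/cone = {ratio:.3f}")
    ```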

  10. Strain gage measurement errors in the transient heating of structural components

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1993-01-01

    Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating conditions greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique compared much better to analysis than the same data corrected with the conventional technique.

  11. Insight solutions are correct more often than analytic solutions

    PubMed Central

    Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark

    2016-01-01

    How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960

  12. The Opiliones tree of life: shedding light on harvestmen relationships through transcriptomics.

    PubMed

    Fernández, Rosa; Sharma, Prashant P; Tourinho, Ana Lúcia; Giribet, Gonzalo

    2017-02-22

    Opiliones are iconic arachnids with a Palaeozoic origin and a diversity that reflects ancient biogeographic patterns dating back at least to the times of Pangea. Owing to interest in harvestman diversity, evolution and biogeography, their relationships have been thoroughly studied using morphology and PCR-based Sanger approaches to infer their systematic relationships. More recently, two studies utilized transcriptomics-based phylogenomics to explore their basal relationships and diversification, but sampling was limiting for understanding deep evolutionary patterns, as they lacked good taxon representation at the family level. Here, we analysed a set of the 14 existing transcriptomes with 40 additional ones generated for this study, representing approximately 80% of the extant familial diversity in Opiliones. Our phylogenetic analyses, including a set of data matrices with different gene occupancy and evolutionary rates, and using a multitude of methods correcting for a diversity of factors affecting phylogenomic data matrices, provide a robust and stable Opiliones tree of life, where most families and higher taxa are precisely placed. Our dating analyses using alternative calibration points, methods and analytical parameters provide well-resolved old divergences, consistent with ancient regionalization in Pangea in some groups, and Pangean vicariance in others. The integration of state-of-the-art molecular techniques and analyses, together with the broadest taxonomic sampling to date presented in a phylogenomic study of harvestmen, provide new insights into harvestmen interrelationships, as well as an overview of the general biogeographic patterns of this ancient arthropod group. © 2017 The Author(s).

  13. The Opiliones tree of life: shedding light on harvestmen relationships through transcriptomics

    PubMed Central

    Sharma, Prashant P.; Tourinho, Ana Lúcia

    2017-01-01

    Opiliones are iconic arachnids with a Palaeozoic origin and a diversity that reflects ancient biogeographic patterns dating back at least to the times of Pangea. Owing to interest in harvestman diversity, evolution and biogeography, their relationships have been thoroughly studied using morphology and PCR-based Sanger approaches to infer their systematic relationships. More recently, two studies utilized transcriptomics-based phylogenomics to explore their basal relationships and diversification, but sampling was limiting for understanding deep evolutionary patterns, as they lacked good taxon representation at the family level. Here, we analysed a set of the 14 existing transcriptomes with 40 additional ones generated for this study, representing approximately 80% of the extant familial diversity in Opiliones. Our phylogenetic analyses, including a set of data matrices with different gene occupancy and evolutionary rates, and using a multitude of methods correcting for a diversity of factors affecting phylogenomic data matrices, provide a robust and stable Opiliones tree of life, where most families and higher taxa are precisely placed. Our dating analyses using alternative calibration points, methods and analytical parameters provide well-resolved old divergences, consistent with ancient regionalization in Pangea in some groups, and Pangean vicariance in others. The integration of state-of-the-art molecular techniques and analyses, together with the broadest taxonomic sampling to date presented in a phylogenomic study of harvestmen, provide new insights into harvestmen interrelationships, as well as an overview of the general biogeographic patterns of this ancient arthropod group. PMID:28228511

  14. A numerical study of the steady scalar convective diffusion equation for small viscosity

    NASA Technical Reports Server (NTRS)

    Giles, M. B.; Rose, M. E.

    1983-01-01

    A time-independent convection diffusion equation is studied by means of a compact finite difference scheme and numerical solutions are compared to the analytic inviscid solutions. The correct internal and external boundary layer behavior is observed, due to an inherent feature of the scheme which automatically produces upwind differencing in inviscid regions and the correct viscous behavior in viscous regions.
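
    A minimal 1D illustration of the behavior described (not the compact scheme of the paper): for eps·u″ − u′ = 0 with small eps, the solution follows the inviscid limit except in a thin boundary layer at the outflow, and an upwind-biased difference reproduces this without oscillation even on a grid much coarser than the layer.

    ```python
    # Minimal 1D illustration: eps * u'' - u' = 0 on [0,1], u(0)=0, u(1)=1.
    # The exact solution is u = (exp(x/eps) - 1) / (exp(1/eps) - 1), i.e. ~0 away
    # from x = 1 (the inviscid solution) with an O(eps) boundary layer at x = 1.
    import numpy as np

    eps, n = 1e-3, 200
    h = 1.0 / n

    # Interior equations with upwind convection:
    # eps*(u[i-1] - 2u[i] + u[i+1])/h^2 - (u[i] - u[i-1])/h = 0
    lower = eps / h**2 + 1.0 / h
    diag = -2.0 * eps / h**2 - 1.0 / h
    upper = eps / h**2

    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(n - 1):
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = lower
        if i < n - 2:
            A[i, i + 1] = upper
    rhs[-1] = -upper * 1.0            # boundary condition u(1) = 1; u(0) = 0 adds nothing

    u = np.concatenate([[0.0], np.linalg.solve(A, rhs), [1.0]])
    print(np.max(np.abs(u[: n // 2])))   # ~0: matches the inviscid solution away from the layer
    print(u[-5:])                        # monotone, oscillation-free rise to 1 inside the layer
    ```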

  15. Quantitative Electron Probe Microanalysis: State of the Art

    NASA Technical Reports Server (NTRS)

    Carpernter, P. K.

    2005-01-01

    Quantitative electron-probe microanalysis (EPMA) has improved due to better instrument design and X-ray correction methods. Design improvements of the electron column and X-ray spectrometers have resulted in measurement precision that exceeds analytical accuracy. Wavelength-dispersive spectrometers (WDS) have layered-dispersive diffraction crystals with improved light-element sensitivity. Newer energy-dispersive spectrometers (EDS) have Si-drift detector elements, thin window designs, and digital processing electronics with X-ray throughput approaching that of WDS systems. Using these systems, digital X-ray mapping coupled with spectrum imaging is a powerful compositional mapping tool. Improvements in analytical accuracy are due to better X-ray correction algorithms, mass absorption coefficient data sets, and analysis methods for complex geometries. ZAF algorithms have been superseded by Phi(pz) algorithms that better model the depth distribution of primary X-ray production. Complex thin film and particle geometries are treated using Phi(pz) algorithms, and results agree well with Monte Carlo simulations. For geological materials, X-ray absorption dominates the corrections and depends on the accuracy of mass absorption coefficient (MAC) data sets. However, few MACs have been experimentally measured, and the use of fitted coefficients continues due to the general success of the analytical technique. A polynomial formulation of the Bence-Albee alpha-factor technique, calibrated using Phi(pz) algorithms, is used to critically evaluate accuracy issues. Accuracy can approach 2% relative and is limited by measurement precision for ideal cases, but for many elements the analytical accuracy is unproven. The EPMA technique has improved to the point where it is frequently used instead of the petrographic microscope for reconnaissance work. Examples of stagnant research areas are: WDS detector design, characterization of calibration standards, and the need for more complete treatment of the continuum X-ray fluorescence correction.

  16. Neoclassical transport including collisional nonlinearity.

    PubMed

    Candy, J; Belli, E A

    2011-06-10

    In the standard δf theory of neoclassical transport, the zeroth-order (Maxwellian) solution is obtained analytically via the solution of a nonlinear equation. The first-order correction δf is subsequently computed as the solution of a linear, inhomogeneous equation that includes the linearized Fokker-Planck collision operator. This equation admits analytic solutions only in extreme asymptotic limits (banana, plateau, Pfirsch-Schlüter), and so must be solved numerically for realistic plasma parameters. Recently, numerical codes have appeared which attempt to compute the total distribution f more accurately than in the standard ordering by retaining some nonlinear terms related to finite-orbit width, while simultaneously reusing some form of the linearized collision operator. In this work we show that higher-order corrections to the distribution function may be unphysical if collisional nonlinearities are ignored.

  17. Second derivatives for approximate spin projection methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Lee M.; Hratchian, Hrant P., E-mail: hhratchian@ucmerced.edu

    2015-02-07

    The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach often has issues with spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To fully make use of these methods and to carry out exploration of the potential energy surface, it is desirable to develop an efficient second energy derivative theory. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.
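
    For reference, the Yamaguchi-type approximate projection referred to above is commonly written in terms of the broken-symmetry (BS) and high-spin (HS) determinant energies and their ⟨S²⟩ expectation values; the form below is quoted from the general literature rather than from this paper.

    ```latex
    % Yamaguchi-type approximate spin projection (general-literature form):
    % the spin-projected low-spin energy from the BS and HS determinants.
    % For a diradical with <S^2>_BS = 1 and <S^2>_HS = 2 this reduces to 2 E_BS - E_HS.
    \begin{equation}
      E_{\mathrm{AP}}
      = E_{\mathrm{BS}}
      + \frac{\langle S^{2}\rangle_{\mathrm{BS}}}
             {\langle S^{2}\rangle_{\mathrm{HS}} - \langle S^{2}\rangle_{\mathrm{BS}}}
        \bigl(E_{\mathrm{BS}} - E_{\mathrm{HS}}\bigr).
    \end{equation}
    ```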

  18. Analytic forms for cross sections of di-lepton production from e+e- collisions around the J/Ψ resonance

    NASA Astrophysics Data System (ADS)

    Zhou, Xing-Yu; Wang, Ya-Di; Xia, Li-Gang

    2017-08-01

    A detailed theoretical derivation of the cross sections of e+e- → e+e- and e+e- → μ + μ - around the J/ψ resonance is reported. The resonance and interference parts of the cross sections, related to J/ψ resonance parameters, are calculated. Higher-order corrections for vacuum polarization and initial-state radiation are considered. An arbitrary upper limit of radiative correction integration is involved. Full and simplified versions of analytic formulae are given with precision at the level of 0.1% and 0.2%, respectively. Moreover, the results obtained in the paper can be applied to the case of the ψ(3686) resonance. Supported by National Natural Science Foundation of China (11275211) and Istituto Nazionale di Fisica Nucleare, Italy
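    As a minimal illustration of the structure of such cross sections (a generic textbook form, not the paper's 0.1%-precision formulae), the resonant contribution is built on a relativistic Breit-Wigner amplitude to which the QED continuum is added coherently, producing the interference term; vacuum polarization and initial-state radiation then dress this expression.

        \[
          \sigma_{\mathrm{res}}(s) \;=\; \frac{12\pi\,\Gamma_{ee}\,\Gamma_{f}}{(s - M^2)^2 + M^2\Gamma^2},
        \]

    where \(M\) and \(\Gamma\) are the resonance mass and total width and \(\Gamma_{ee}\), \(\Gamma_f\) are the partial widths to the initial and final states.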

  19. Development of Internal Controls for the Luminex Instrument as Part of a Multiplex Seven-Analyte Viral Respiratory Antibody Profile

    PubMed Central

    Martins, Thomas B.

    2002-01-01

    The ability of the Luminex system to simultaneously quantitate multiple analytes from a single sample source has proven to be a feasible and cost-effective technology for assay development. In previous studies, my colleagues and I introduced two multiplex profiles consisting of 20 individual assays into the clinical laboratory. With the Luminex instrument’s ability to classify up to 100 distinct microspheres, however, we have only begun to realize the enormous potential of this technology. By utilizing additional microspheres, it is now possible to add true internal controls to each individual sample. During the development of a seven-analyte serologic viral respiratory antibody profile, internal controls for detecting sample addition and interfering rheumatoid factor (RF) were investigated. To determine if the correct sample was added, distinct microspheres were developed for measuring the presence of sufficient quantities of immunoglobulin G (IgG) or IgM in the diluted patient sample. In a multiplex assay of 82 samples, the IgM verification control correctly identified 23 out of 23 samples with low levels (<20 mg/dl) of this antibody isotype. An internal control microsphere for RF detected 30 out of 30 samples with significant levels (>10 IU/ml) of IgM RF. Additionally, RF-positive samples causing false-positive adenovirus and influenza A virus IgM results were correctly identified. By exploiting the Luminex instrument’s multiplexing capabilities, I have developed true internal controls to ensure correct sample addition and identify interfering RF as part of a respiratory viral serologic profile that includes influenza A and B viruses, adenovirus, parainfluenza viruses 1, 2, and 3, and respiratory syncytial virus. Since these controls are not assay specific, they can be incorporated into any serologic multiplex assay. PMID:11777827

  1. The correspondence between Erich Neumann and C.G. Jung on the occasion of the November 1938 pogroms [corrected].

    PubMed

    Löwe, Angelica

    2015-06-01

    In the light of recently-published correspondence between Jung and Neumann, this paper considers and connects two aspects of their relationship: Jung's theory of an ethno-specific differentiation of the unconscious as formulated in 1934, and the relationship between Jung and Neumann at the beginning of the Holocaust in 1938-with Jung as the wise old man and a father figure on one hand, and Neumann as the apprentice and dependent son on the other. In examining these two issues, a detailed interpretation of four letters, two by Neumann and two by Jung, written in 1938 and 1939, is given. Neumann's reflections on the collective Jewish determination in the face of the November pogroms in 1938 led Jung to modify his view, with relativization and secularization of his former position. This shift precipitated a deep crisis with feelings of disorientation and desertion in Neumann; the paper discusses how a negative father complex was then constellated and imaged in a dream. After years of silence, the two men were able to renew the deep bonds that characterized their lifelong friendship. © 2015, The Society of Analytical Psychology.

  2. Full-wave acoustic and thermal modeling of transcranial ultrasound propagation and investigation of skull-induced aberration correction techniques: a feasibility study.

    PubMed

    Kyriakou, Adamos; Neufeld, Esra; Werner, Beat; Székely, Gábor; Kuster, Niels

    2015-01-01

    Transcranial focused ultrasound (tcFUS) is an attractive noninvasive modality for neurosurgical interventions. The presence of the skull, however, compromises the efficiency of tcFUS therapy, as its heterogeneous nature and acoustic characteristics induce significant distortion of the acoustic energy deposition, focal shifts, and thermal gain decrease. Phased-array transducers allow for partial compensation of skull-induced aberrations by application of precalculated phase and amplitude corrections. An integrated numerical framework allowing for 3D full-wave, nonlinear acoustic and thermal simulations has been developed and applied to tcFUS. Simulations were performed to investigate the impact of skull aberrations, the possibility of extending the treatment envelope, and adverse secondary effects. The simulated setup comprised an idealized model of the ExAblate Neuro and a detailed MR-based anatomical head model. Four different approaches were employed to calculate aberration corrections (analytical calculation of the aberration corrections disregarding tissue heterogeneities; a semi-analytical ray-tracing approach compensating for the presence of the skull; two simulation-based time-reversal approaches, with and without pressure amplitude corrections, which account for the entire anatomy). The impact of these approaches on the pressure and temperature distributions was evaluated for 22 brain targets. While the (semi-)analytical approaches failed to induce high pressures or ablative temperatures in any but the targets in the close vicinity of the geometric focus, the simulation-based approaches indicate the possibility of considerably extending the treatment envelope (including targets below the transducer level and locations several centimeters off the geometric focus), generation of sharper foci, and increased targeting accuracy. While the prediction of achievable aberration correction appears to be unaffected by the detailed bone structure, proper consideration of inhomogeneity is required to predict the pressure distribution for given steering parameters. Simulation-based approaches to calculate aberration corrections may aid in the extension of the tcFUS treatment envelope as well as predict and avoid secondary effects (standing waves, skull heating). Due to their superior performance, simulation-based techniques may prove invaluable in the amelioration of skull-induced aberration effects in tcFUS therapy. The next steps are to investigate shear-wave-induced effects in order to reliably exclude secondary hot-spots, and to develop comprehensive uncertainty assessment and validation procedures.
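    A minimal sketch of the time-reversal idea referred to above, assuming idealized monochromatic propagation, hypothetical element positions, and a homogeneous effective sound speed (an illustration of phase conjugation, not the authors' full-wave solver): the complex pressure that a virtual source at the target produces at each array element is recorded, and each element is then driven with the conjugate phase so that the contributions add in phase at the target. In the real workflow the forward step is the full-wave simulation through the heterogeneous skull.

        import numpy as np

        # Hypothetical geometry: elements on a 15 cm shell around the head, target 2 cm off-centre.
        rng = np.random.default_rng(0)
        n_elem = 1024
        elem_xyz = rng.normal(size=(n_elem, 3))
        elem_xyz *= 0.15 / np.linalg.norm(elem_xyz, axis=1, keepdims=True)
        target = np.array([0.0, 0.0, 0.02])

        f0 = 650e3        # drive frequency (Hz), assumed
        c_eff = 1550.0    # effective sound speed (m/s), assumed homogeneous here

        # Forward step: complex pressure at each element from a virtual point source at the target.
        r = np.linalg.norm(elem_xyz - target, axis=1)
        k = 2.0 * np.pi * f0 / c_eff
        p_recorded = np.exp(1j * k * r) / r

        # Time reversal / phase conjugation: drive each element with the conjugate phase.
        drive_phase = -np.angle(p_recorded)

        # Check: re-propagated contributions now add coherently at the target.
        refocus = np.sum(np.exp(1j * (k * r + drive_phase)) / r)
        print("coherent sum:", abs(refocus), "incoherent bound:", np.sum(1.0 / r))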

  3. Corrective Action Decision Document for Corrective Action Unit 563: Septic Systems, Nevada Test Site, Nevada, Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant Evenson

    2008-02-01

    This Corrective Action Decision Document has been prepared for Corrective Action Unit (CAU) 563, Septic Systems, in accordance with the Federal Facility Agreement and Consent Order (FFACO, 1996; as amended January 2007). The corrective action sites (CASs) for CAU 563 are located in Areas 3 and 12 of the Nevada Test Site, Nevada, and are comprised of the following four sites: •03-04-02, Area 3 Subdock Septic Tank •03-59-05, Area 3 Subdock Cesspool •12-59-01, Drilling/Welding Shop Septic Tanks •12-60-01, Drilling/Welding Shop Outfalls The purpose of this Corrective Action Decision Document is to identify and provide the rationale for the recommendation of a corrective action alternative (CAA) for the four CASs within CAU 563. Corrective action investigation (CAI) activities were performed from July 17 through November 19, 2007, as set forth in the CAU 563 Corrective Action Investigation Plan (NNSA/NSO, 2007). Analytes detected during the CAI were evaluated against appropriate final action levels (FALs) to identify the contaminants of concern (COCs) for each CAS. The results of the CAI identified COCs at one of the four CASs in CAU 563 and required the evaluation of CAAs. Assessment of the data generated from investigation activities conducted at CAU 563 revealed the following: •CASs 03-04-02, 03-59-05, and 12-60-01 do not contain contamination at concentrations exceeding the FALs. •CAS 12-59-01 contains arsenic and chromium contamination above FALs in surface and near-surface soils surrounding a stained location within the site. Based on the evaluation of analytical data from the CAI, review of future and current operations at CAS 12-59-01, and the detailed and comparative analysis of the potential CAAs, the following corrective actions are recommended for CAU 563.

  4. "Cancer-Related Fatigue: A Systematic and Meta-Analytic Review of Nonpharmacological Therapies for Cancer Patients:" Correction to Kangas, Bovbjerg, and Montgomery (2008)

    ERIC Educational Resources Information Center

    Kangas, Maria; Bovbjerg, Dana H.; Montgomery, Guy H.

    2009-01-01

    Reports an error in "Cancer-related fatigue: A systematic and meta-analytic review of non-pharmacological therapies for cancer patients" by Maria Kangas, Dana H. Bovbjerg and Guy H. Montgomery (Psychological Bulletin, 2008[Sep], Vol 134[5], 700-741). The URL to the Supplemental Materials for the article is listed incorrectly in two places in the…

  5. Affective Teacher-Student Relationships and Students' Engagement and Achievement: A Meta-Analytic Update and Test of the Mediating Role of Engagement

    ERIC Educational Resources Information Center

    Roorda, Debora L.; Jak, Suzanne; Zee, Marjolein; Oort, Frans J.; Koomen, Helma M. Y.

    2017-01-01

    The present study took a meta-analytic approach to investigate whether students' engagement acts as a mediator in the association between affective teacher-student relationships and students' achievement. Furthermore, we examined whether results differed for primary and secondary school and whether similar results were found in a longitudinal…

  6. Validating Semi-analytic Models of High-redshift Galaxy Formation Using Radiation Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Côté, Benoit; Silvia, Devin W.; O’Shea, Brian W.; Smith, Britton; Wise, John H.

    2018-05-01

    We use a cosmological hydrodynamic simulation calculated with Enzo and the semi-analytic galaxy formation model (SAM) GAMMA to address the chemical evolution of dwarf galaxies in the early universe. The long-term goal of the project is to better understand the origin of metal-poor stars and the formation of dwarf galaxies and the Milky Way halo by cross-validating these theoretical approaches. We combine GAMMA with the merger tree of the most massive galaxy found in the hydrodynamic simulation and compare the star formation rate, the metallicity distribution function (MDF), and the age–metallicity relationship predicted by the two approaches. We found that the SAM can reproduce the global trends of the hydrodynamic simulation. However, there are degeneracies between the model parameters, and more constraints (e.g., star formation efficiency, gas flows) need to be extracted from the simulation to isolate the correct semi-analytic solution. Stochastic processes such as bursty star formation histories and star formation triggered by supernova explosions cannot be reproduced by the current version of GAMMA. Non-uniform mixing in the galaxy’s interstellar medium, coming primarily from self-enrichment by local supernovae, causes a broadening in the MDF that can be emulated in the SAM by convolving its predicted MDF with a Gaussian function having a standard deviation of ∼0.2 dex. We found that the most massive galaxy in the simulation retains nearly 100% of its baryonic mass within its virial radius, which is in agreement with what is needed in GAMMA to reproduce the global trends of the simulation.
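    A minimal sketch of the broadening step mentioned above (the bin width and the stand-in MDF are assumptions; only the 0.2 dex smoothing scale comes from the record): the semi-analytic MDF is convolved with a Gaussian kernel before comparison with the hydrodynamic simulation.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        bin_width = 0.05                                   # dex per bin, assumed
        feh = np.arange(-4.0, 0.5, bin_width)
        mdf_sam = np.exp(-0.5 * ((feh + 1.5) / 0.3) ** 2)  # stand-in for the GAMMA MDF

        sigma_dex = 0.2                                    # broadening quoted in the record
        mdf_broadened = gaussian_filter1d(mdf_sam, sigma=sigma_dex / bin_width, mode="constant")

        # Normalize both to unit area before comparing with the simulation's MDF.
        mdf_sam /= mdf_sam.sum() * bin_width
        mdf_broadened /= mdf_broadened.sum() * bin_width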

  7. (U) An Analytic Study of Piezoelectric Ejecta Mass Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tregillis, Ian Lee

    2017-02-16

    We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.
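    In the commonly used instantaneous-creation analysis (notation assumed here, not copied from the report), a particle launched from the free surface at time \(t_0\) and arriving at a sensor a distance \(D\) away at time \(t\) has velocity \(v = D/(t - t_0)\); if \(p(t)\) is the measured momentum flux, the inferred areal mass accumulation is

        \[
          \frac{dm}{dt} = \frac{p(t)}{v(t)} = \frac{p(t)\,(t - t_0)}{D},
          \qquad
          m(t) = \frac{1}{D}\int_{t_0}^{t} p(t')\,(t' - t_0)\,dt'.
        \]

    The record's result is that when creation is not instantaneous this inference systematically overestimates the true accumulated mass, though usually by only a few percent.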

  8. [Restoration of speech function in oncological patients with maxillary defects].

    PubMed

    Matiakin, E G; Chuchkov, V M; Akhundov, A A; Azizian, R I; Romanov, I S; Chuchkov, M V; Agapov, V V

    2009-01-01

    Speech quality was evaluated in 188 patients with acquired maxillary defects. Prosthetic treatment of 29 patients was preceded by pharmacopsychotherapy. Sixty-three patients had lessons with a logopedist and 66 practiced self-tuition based on a specially developed test. Thirty patients were examined for quality of speech without preliminary preparation. Speech quality was assessed by auditory and spectral analysis. The main forms of impaired speech quality in the patients with maxillary defects were marked rhinophonia and impaired articulation. The proposed analytical tests were based on a combination of "difficult" vowels and consonants. The use of a removable prosthesis with an obturator failed to correct the affected speech function but created prerequisites for the formation of a correct speech stereotype. Results of the study suggest a relationship between the quality of speech in subjects with maxillary defects and their intellectual faculties as well as their desire to overcome this drawback. The proposed tests are designed to activate the neuromuscular apparatus responsible for the generation of speech. Lessons with a speech therapist give a powerful emotional incentive to the patients and promote their efforts toward restoration of speaking ability. Pharmacopsychotherapy and self-control are other efficacious tools for the improvement of speech quality in patients with maxillary defects.

  9. Optical distortion correction of a liquid-gas interface and contact angle in cylindrical tubes

    NASA Astrophysics Data System (ADS)

    Darzi, Milad; Park, Chanwoo

    2017-05-01

    Objects inside cylindrical tubes appear distorted as seen outside the tube due to the refraction of the light passing through different media. Such an optical distortion may cause significant errors in geometrical measurements using optical observations of objects (e.g., liquid-gas interfaces, solid particles, gas bubbles) inside the tubes. In this study, an analytical method using a point-by-point correction of the optical distortion was developed. For an experimental validation, the method was used to correct the apparent profiles of the water-air interfaces (menisci) in cylindrical glass tubes with different tube diameters and wall thicknesses. Then, the corrected meniscus profiles were used to calculate the corrected static contact angles. The corrected contact angle shows an excellent agreement with the reference contact angles as compared to the conventional contact angle measurement using apparent meniscus profiles.
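    A minimal 2D sketch of such a point-by-point correction, assuming parallel viewing rays, a tube cross-section centred at the origin, and illustrative refractive indices and radii (a simplified illustration, not the authors' full method): each apparent height is traced through the outer and inner tube walls with Snell's law, and the corrected height is read off where the refracted ray crosses the tube's meridional plane.

        import numpy as np

        N_AIR, N_GLASS, N_LIQ = 1.0, 1.47, 1.33   # illustrative refractive indices
        R_OUT, R_IN = 2.0e-3, 1.5e-3              # illustrative tube radii (m)

        def refract(d, n, n1, n2):
            """Vector Snell refraction; d is the unit ray direction, n the unit normal against the ray."""
            eta = n1 / n2
            cos_i = -np.dot(d, n)
            sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
            return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

        def corrected_height(y_apparent):
            """Map an apparent height (parallel-ray view) to the true height at the mid-plane x = 0."""
            # Entry point on the outer wall for a ray travelling in the -x direction.
            p = np.array([np.sqrt(R_OUT ** 2 - y_apparent ** 2), y_apparent])
            d = refract(np.array([-1.0, 0.0]), p / R_OUT, N_AIR, N_GLASS)   # air -> glass

            # March to the inner wall (first crossing of |p + s d| = R_IN).
            b = np.dot(p, d)
            s = -b - np.sqrt(b ** 2 - (R_OUT ** 2 - R_IN ** 2))
            p = p + s * d
            d = refract(d, p / R_IN, N_GLASS, N_LIQ)                        # glass -> liquid

            # Continue to the meridional plane x = 0 and report the height there.
            return p[1] + d[1] * (-p[0] / d[0])

        for y_app in (0.2e-3, 0.6e-3, 1.0e-3):
            print(f"apparent {y_app * 1e3:.2f} mm -> corrected {corrected_height(y_app) * 1e3:.3f} mm")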

  10. Sierra/Aria 4.48 Verification Manual.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sierra Thermal Fluid Development Team

    Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite and the results of the test checked under mesh refinement against the correct analytic result. For each of the tests presented in this document the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution is provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
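    A minimal sketch of the kind of mesh-refinement check described above (the error values and refinement ratio are made up for illustration; this is not Sierra's own test harness): the observed order of accuracy is estimated from errors measured against the analytic solution on successively refined meshes.

        import math

        def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
            """Observed convergence order from errors on two meshes whose spacings differ by the given ratio."""
            return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

        # Illustrative L2 errors against an analytic solution on meshes h, h/2, h/4.
        errors = [4.0e-3, 1.02e-3, 2.6e-4]
        for e_coarse, e_fine in zip(errors, errors[1:]):
            print(f"observed order ~ {observed_order(e_coarse, e_fine):.2f}")  # expect ~2 for a 2nd-order scheme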

  11. Improvement of analytical dynamic models using modal test data

    NASA Technical Reports Server (NTRS)

    Berman, A.; Wei, F. S.; Rao, K. V.

    1980-01-01

    A method developed to determine minimum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved base for studies of physical changes, boundary condition changes, and for prediction of forced responses. The method features efficient procedures not requiring solutions of the eigenvalue problem, and the ability to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.
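    The kind of constrained, minimum-change update described above can be stated as follows (the notation is assumed here, not copied from the report): find the corrected stiffness matrix K closest to the analytical one K_a, subject to reproducing the measured modes and frequencies.

        \[
          \min_{K}\ \bigl\lVert M^{-1/2} (K - K_{a}) M^{-1/2} \bigr\rVert_{F}
          \quad \text{subject to} \quad
          K\Phi = M\Phi\Lambda, \qquad K = K^{\mathsf T},
        \]

    where \(\Phi\) holds the measured mode shapes and \(\Lambda\) the squared natural frequencies; an analogous weighted minimum-change problem is posed for the mass matrix. The cited method solves such problems directly, without eigenvalue extraction, even when the analytical model has more degrees of freedom than the test data.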

  12. Chemical Sensor Array Response Modeling Using Quantitative Structure-Activity Relationships Technique

    NASA Astrophysics Data System (ADS)

    Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.

    We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.
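    A minimal sketch of the modeling workflow described above, with made-up descriptor and response arrays standing in for the real sensing-film/analyte descriptors, and ordinary least squares plus cross-validation standing in for the Genetic Function Approximation actually used in the record:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_analytes, n_descriptors = 40, 6
        X = rng.normal(size=(n_analytes, n_descriptors))       # stand-in film-analyte descriptors
        true_w = np.array([1.5, -0.8, 0.0, 0.4, 0.0, 0.2])
        y = X @ true_w + 0.1 * rng.normal(size=n_analytes)      # stand-in sensor response (dR/R)

        model = LinearRegression().fit(X, y)
        q2 = cross_val_score(model, X, y, cv=5, scoring="r2")   # cross-validated predictive ability
        print("coefficients:", np.round(model.coef_, 2))
        print("mean cross-validated R^2:", round(q2.mean(), 3))

        # Predicting responses for analytes not in the training set mirrors the external test in the record.
        X_new = rng.normal(size=(3, n_descriptors))
        print("predicted responses for unseen analytes:", np.round(model.predict(X_new), 2))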

  13. Why Consumers Misattribute Sponsorships to Non-Sponsor Brands: Differential Roles of Item and Relational Communications.

    PubMed

    Weeks, Clinton S; Humphreys, Michael S; Cornwell, T Bettina

    2018-02-01

    Brands engaged in sponsorship of events commonly have objectives that depend on consumer memory for the sponsor-event relationship (e.g., sponsorship awareness). Consumers, however, often misattribute sponsorships to nonsponsor competitor brands, indicating erroneous memory for these relationships. The current research uses an item and relational memory framework to reveal that sponsor brands may inadvertently foster this misattribution when they communicate relational linkages to events. Effects can be explained via differential roles of communicating item information (information that supports processing item distinctiveness) versus relational information (information that supports processing relationships among items) in contributing to memory outcomes. Experiment 1 uses event-cued brand recall to show that correct memory retrieval is best supported by communicating relational information when sponsorship relationships are not obvious (low congruence). In contrast, correct retrieval is best supported by communicating item information when relationships are obvious (high congruence). Experiment 2 uses brand-cued event recall to show that, against conventional marketing recommendations, relational information increases misattribution, whereas item information guards against misattribution. Results suggest sponsor brands must distinguish between item and relational communications to enhance correct retrieval and limit misattribution. Methodologically, the work shows that choice of cueing direction is critical in differentially revealing patterns of correct and incorrect retrieval with pair relationships. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  14. Selection and authentication of botanical materials for the development of analytical methods.

    PubMed

    Applequist, Wendy L; Miller, James S

    2013-05-01

    Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.

  15. Pet Therapy in Correctional Institutions: A Perspective From Relational-Cultural Theory.

    PubMed

    Thomas, Rita; Matusitz, Jonathan

    2016-01-01

    In this article the authors apply Relational-Cultural Theory to pet therapy in correctional institutions. An important premise is that when pet therapy is used in prisons, a symbiotic relationship develops between pets and prison inmates which, at the same time, improves the inmates' relationships with other people. Relational-Cultural Theory posits that relationships with individuals are not just a means to an end. Rather, good relationships promote growth and healthy development; they also cultivate reciprocal empathy. Hence, a major source of suffering for most people is their experience of isolation; healing can occur in growth-fostering relationships.

  16. Assessment of HIV/AIDS comprehensive correct knowledge among Sudanese university: a cross-sectional analytic study 2014.

    PubMed

    Elbadawi, Abdulateef; Mirghani, Hyder

    2016-01-01

    Comprehensive correct HIV/AIDS knowledge (CCAK) is defined as correctly identifying the two major ways of preventing the sexual transmission of HIV and rejecting the most common misconceptions about HIV transmission. There are limited studies on this topic in Sudan. In this study we investigated comprehensive correct HIV/AIDS knowledge among university students. A cross-sectional analytic study was conducted among 556 students from two universities in 2014. Data were collected using a self-administered, pre-tested, structured questionnaire. Chi-square was used for testing significance, and a P value of ≤ 0.05 was considered statistically significant. The majority (97.1%) of study subjects had heard about a disease called HIV/AIDS, while only 28.6% of them knew anyone infected with AIDS in the local community. Only a minority (13.8%) of students had CCAK; however, males showed a better level of CCAK than females (OR = 2.77), a highly significant statistical difference (P value = 0.001). A poor rate of CCAK among university students was noted, especially among females. Almost half of the students did not know the preventive measures for HIV, nearly two thirds held misconceptions, and about one third did not know the mode of transmission of HIV.

  17. Simulations of Dissipative Circular Restricted Three-body Problems Using the Velocity-scaling Correction Method

    NASA Astrophysics Data System (ADS)

    Wang, Shoucheng; Huang, Guoqing; Wu, Xin

    2018-02-01

    In this paper, we survey the effect of dissipative forces including radiation pressure, Poynting–Robertson drag, and solar wind drag on the motion of dust grains with negligible mass, which are subjected to the gravities of the Sun and Jupiter moving in circular orbits. The effect of the dissipative parameter on the locations of the five Lagrangian equilibrium points is estimated analytically. The instability of the triangular equilibrium point L4 caused by the drag forces is also shown analytically. In this case, the Jacobi constant varies with time, whereas its integral invariant relation still provides a possibility for the applicability of the conventional fourth-order Runge–Kutta algorithm combined with the velocity scaling manifold correction scheme. Consequently, the velocity-only correction method significantly suppresses the artificial dissipation and the rapid increase in trajectory errors produced by the uncorrected integration. The stability time of an orbit, regardless of whether it is chaotic or not in the conservative problem, is apparently longer in the corrected case than in the uncorrected case when the dissipative forces are included. Although the artificial dissipation is ruled out, the drag dissipation leads to an escape of grains. Numerical evidence also demonstrates that more orbits near the triangular equilibrium point L4 escape as the integration time increases.
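    A minimal sketch of the velocity-scaling correction in the conservative planar circular restricted three-body problem (the mass parameter and states are illustrative; in the dissipative case studied in the record the reference value follows the integral invariant relation instead of staying constant): after each integration step the velocity magnitude is rescaled so that the Jacobi constant is restored.

        import numpy as np

        MU = 9.537e-4   # illustrative Sun-Jupiter mass parameter

        def effective_potential(x, y):
            r1 = np.hypot(x + MU, y)            # distance to the primary at (-mu, 0)
            r2 = np.hypot(x - 1.0 + MU, y)      # distance to the secondary at (1 - mu, 0)
            return 0.5 * (x ** 2 + y ** 2) + (1.0 - MU) / r1 + MU / r2

        def jacobi_constant(state):
            x, y, vx, vy = state
            return 2.0 * effective_potential(x, y) - (vx ** 2 + vy ** 2)

        def velocity_scaling_correction(state, c_ref):
            """Rescale (vx, vy) so that the Jacobi constant equals its reference value c_ref."""
            x, y, vx, vy = state
            v2 = vx ** 2 + vy ** 2
            target_v2 = 2.0 * effective_potential(x, y) - c_ref
            if target_v2 <= 0.0 or v2 == 0.0:
                return state                    # correction not applicable at this point
            s = np.sqrt(target_v2 / v2)
            return np.array([x, y, s * vx, s * vy])

        # Usage: an integration step has let the constant drift slightly; scaling restores it.
        state = np.array([0.45, 0.30, 0.20, -0.55])
        c_ref = jacobi_constant(state)
        drifted = state + np.array([0.0, 0.0, 1e-6, -1e-6])     # stand-in for accumulated error
        corrected = velocity_scaling_correction(drifted, c_ref)
        print(jacobi_constant(drifted) - c_ref, jacobi_constant(corrected) - c_ref)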

  18. Data Analytics and Visualization for Large Army Testing Data

    DTIC Science & Technology

    2013-09-01

    and relationships in the data that would otherwise remain hidden. Bibliography (excerpt): 1. Goodall, J. R.; Tesone, D. R. Visual Analytics for Network… Software Visualization, 2003, pp 143–149. 3. Goodall, J. R.; Sowul, M. VIAssist: Visual Analytics for Cyber Defense, IEEE Conference on Technologies…

  19. NHEXAS PHASE I REGION 5 STUDY--METALS IN DUST ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 1,906 dust samples. Dust samples were collected to assess potential residential sources of dermal and inhalation exposures and to examine relationships between analyte levels in dust and in personal and bioma...

  20. A new frequency approach for light flicker evaluation in electric power systems

    NASA Astrophysics Data System (ADS)

    Feola, Luigi; Langella, Roberto; Testa, Alfredo

    2015-12-01

    In this paper, a new analytical estimator for light flicker in the frequency domain is proposed; it is able to take into account the frequency components neglected by the classical methods proposed in the literature. The analytical solutions proposed apply to any generic stationary signal affected by interharmonic distortion. The proposed light flicker analytical estimator is applied to numerous numerical case studies with the goal of showing (i) the correctness and the improvements of the proposed analytical approach with respect to the other methods proposed in the literature and (ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.

  1. Multipollutant measurement error in air pollution epidemiology studies arising from predicting exposures with penalized regression splines

    PubMed Central

    Bergen, Silas; Sheppard, Lianne; Kaufman, Joel D.; Szpiro, Adam A.

    2016-01-01

    Air pollution epidemiology studies are trending towards a multi-pollutant approach. In these studies, exposures at subject locations are unobserved and must be predicted using observed exposures at misaligned monitoring locations. This induces measurement error, which can bias the estimated health effects and affect standard error estimates. We characterize this measurement error and develop an analytic bias correction when using penalized regression splines to predict exposure. Our simulations show bias from multi-pollutant measurement error can be severe, and in opposite directions or simultaneously positive or negative. Our analytic bias correction combined with a non-parametric bootstrap yields accurate coverage of 95% confidence intervals. We apply our methodology to analyze the association of systolic blood pressure with PM2.5 and NO2 in the NIEHS Sister Study. We find that NO2 confounds the association of systolic blood pressure with PM2.5 and vice versa. Elevated systolic blood pressure was significantly associated with increased PM2.5 and decreased NO2. Correcting for measurement error bias strengthened these associations and widened 95% confidence intervals. PMID:27789915

  2. A curious metaphor. Engaging with trauma: an analytical perspective.

    PubMed

    Waldron, Sharn

    2010-02-01

    This paper explores the metaphor of in vitro fertilization in respect to those parts of the individual 'frozen' through early trauma. It describes the conditions necessary for the reintroduction of the frozen 'embryo' in the therapeutic relationship through use of an extended example. The role of the analytic relationship and the analyst's countertransference are highlighted within the context of implications for the process of therapy.

  3. An analytical and experimental evaluation of the plano-cylindrical Fresnel lens solar concentrator

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Allums, S. L.; Cosby, R. M.

    1976-01-01

    Plastic Fresnel lenses for solar concentration are attractive because of potential for low-cost mass production. An analytical and experimental evaluation of line-focusing Fresnel lenses with application potential in the 200 to 370 C range is reported. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based primarily on a 56 cm-wide lens with f-number 1.0. A sun-tracking heliostat provided a non-moving solar source. Measured data indicated more spreading at the profile base than analytically predicted. The measured and computed transmittances were 85 and 87% respectively. Preliminary testing with a second lens (1.85 m) indicated that modified manufacturing techniques corrected the profile spreading problem.

  4. Analytical approach for collective diffusion: One-dimensional lattice with the nearest neighbor and the next nearest neighbor lateral interactions

    NASA Astrophysics Data System (ADS)

    Tarasenko, Alexander

    2018-01-01

    Diffusion of particles adsorbed on a homogeneous one-dimensional lattice is investigated using a theoretical approach and MC simulations. The analytical dependencies calculated in the framework of this approach are tested against the numerical data. The perfect coincidence of the data obtained by these different methods demonstrates the correctness of the approach, which is based on the theory of the non-equilibrium statistical operator.

  5. Recalibration of blood analytes over 25 years in the Atherosclerosis Risk in Communities Study: The impact of recalibration on chronic kidney disease prevalence and incidence

    PubMed Central

    Parrinello, Christina M.; Grams, Morgan E.; Couper, David; Ballantyne, Christie M.; Hoogeveen, Ron C.; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef

    2016-01-01

    Background: Equivalence of laboratory tests over time is important for longitudinal studies. Even a small systematic difference (bias) can result in substantial misclassification. Methods: We selected 200 Atherosclerosis Risk in Communities Study participants attending all 5 study visits over 25 years. Eight analytes were re-measured in 2011–13 from stored blood samples from multiple visits: creatinine, uric acid, glucose, total cholesterol, HDL-cholesterol, LDL-cholesterol, triglycerides, and high-sensitivity C-reactive protein. Original values were recalibrated to re-measured values using Deming regression. Differences >10% were considered to reflect substantial bias, and correction equations were applied to affected analytes in the total study population. We examined trends in chronic kidney disease (CKD) pre- and post-recalibration. Results: Repeat measures were highly correlated with original values (Pearson’s r>0.85 after removing outliers [median 4.5% of paired measurements]), but 2 of 8 analytes (creatinine and uric acid) had differences >10%. Original values of creatinine and uric acid were recalibrated to current values using correction equations. CKD prevalence differed substantially after recalibration of creatinine (visits 1, 2, 4 and 5 pre-recalibration: 21.7%, 36.1%, 3.5%, 29.4%; post-recalibration: 1.3%, 2.2%, 6.4%, 29.4%). For HDL-cholesterol, the current direct enzymatic method differed substantially from the magnesium dextran precipitation used during visits 1–4. Conclusions: Analytes re-measured in samples stored for ~25 years were highly correlated with original values, but two of the 8 analytes showed substantial bias at multiple visits. Laboratory recalibration improved reproducibility of test results across visits and resulted in substantial differences in CKD prevalence. We demonstrate the importance of consistent recalibration of laboratory assays in a cohort study. PMID:25952043
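    A minimal sketch of the recalibration step (the paired measurements are simulated, the error-variance ratio is assumed to be 1, and the bias threshold is only illustrative): original values are regressed against re-measured values with Deming regression, and a correction equation is applied when the implied bias is substantial.

        import numpy as np

        def deming(x, y, delta=1.0):
            """Deming regression of y on x with error-variance ratio delta = var(err_y) / var(err_x)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
            sxy = np.cov(x, y, ddof=1)[0, 1]
            slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
            return slope, y.mean() - slope * x.mean()

        # Simulated creatinine pairs (mg/dL): the original assay reads ~12% high with a small offset.
        rng = np.random.default_rng(3)
        true = rng.uniform(0.6, 2.5, 200)
        remeasured = true + rng.normal(0.0, 0.03, true.size)
        original = 1.12 * true + 0.05 + rng.normal(0.0, 0.03, true.size)

        slope, intercept = deming(original, remeasured)   # map original values onto the current scale
        recalibrated = slope * original + intercept
        mean_bias = np.mean((original - remeasured) / remeasured)
        print(f"mean relative bias before correction: {mean_bias:+.1%}, slope {slope:.3f}, intercept {intercept:+.3f}")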

  6. Detection limits of quantitative and digital PCR assays and their influence in presence-absence surveys of environmental DNA

    USGS Publications Warehouse

    Hunter, Margaret; Dorazio, Robert M.; Butterfield, John S.; Meigs-Friend, Gaia; Nico, Leo; Ferrante, Jason A.

    2017-01-01

    A set of universal guidelines is needed to determine the limit of detection (LOD) in PCR-based analyses of low concentration DNA. In particular, environmental DNA (eDNA) studies require sensitive and reliable methods to detect rare and cryptic species through shed genetic material in environmental samples. Current strategies for assessing detection limits of eDNA are either too stringent or subjective, possibly resulting in biased estimates of species’ presence. Here, a conservative LOD analysis grounded in analytical chemistry is proposed to correct for overestimated DNA concentrations predominantly caused by the concentration plateau, a nonlinear relationship between expected and measured DNA concentrations. We have used statistical criteria to establish formal mathematical models for both quantitative and droplet digital PCR. To assess the method, a new Grass Carp (Ctenopharyngodon idella) TaqMan assay was developed and tested on both PCR platforms using eDNA in water samples. The LOD adjustment reduced Grass Carp occupancy and detection estimates while increasing uncertainty – indicating that caution needs to be applied to eDNA data without LOD correction. Compared to quantitative PCR, digital PCR had higher occurrence estimates due to increased sensitivity and dilution of inhibitors at low concentrations. Without accurate LOD correction, species occurrence and detection probabilities based on eDNA estimates are prone to a source of bias that cannot be reduced by an increase in sample size or PCR replicates. Other applications also could benefit from a standardized LOD such as GMO food analysis, and forensic and clinical diagnostics.

  7. Comprehensive combinatory standard correction: a calibration method for handling instrumental drifts of gas chromatography-mass spectrometry systems.

    PubMed

    Deport, Coralie; Ratel, Jérémy; Berdagué, Jean-Louis; Engel, Erwan

    2006-05-26

    The current work describes a new method, the comprehensive combinatory standard correction (CCSC), for the correction of instrumental signal drifts in GC-MS systems. The method consists in analyzing, together with the products of interest, a mixture of n selected internal standards, normalizing the peak area of each analyte by the sum of standard areas, and then selecting, among the sum over p = 1..n of C(n,p) possible sums (i.e., the 2^n - 1 candidate combinations), the sum that enables the best product discrimination. The CCSC method was compared with classical techniques of data pre-processing like internal normalization (IN) or single standard correction (SSC) on their ability to correct raw data from the main drifts occurring in a dynamic headspace-gas chromatography-mass spectrometry system. Three edible oils with closely similar compositions in volatile compounds were analysed using a device whose performance was modulated by using new or used dynamic headspace traps and GC columns, and by modifying the tuning of the mass spectrometer. According to one-way ANOVA, the CCSC method increased the number of analytes discriminating the products (31 after CCSC versus 25 with raw data or after IN, and 26 after SSC). Moreover, CCSC enabled a satisfactory discrimination of the products irrespective of the drifts. In a factorial discriminant analysis, 100% of the samples (n = 121) were well-classified after CCSC, versus 45% for raw data and 90 and 93%, respectively, after IN and SSC.
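    A minimal sketch of the combinatorial selection step, with simulated peak areas and a one-way ANOVA F statistic used as the discrimination score (a reasonable stand-in for the criterion, not necessarily the exact one used in the paper): every subset of internal standards is tried as the normalizing sum, and the subset giving the best product discrimination is kept.

        import itertools
        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(7)
        n_std = 4                                        # internal standards in the spiked mixture
        products = np.repeat([0, 1, 2], 10)              # three oils, ten runs each
        drift = rng.uniform(0.7, 1.3, products.size)     # per-run instrumental signal drift

        analyte = drift * (1.0 + 0.05 * products) * rng.normal(1.0, 0.02, products.size)
        standards = drift[:, None] * rng.normal(1.0, 0.02, (products.size, n_std))

        best = None
        for p in range(1, n_std + 1):
            for combo in itertools.combinations(range(n_std), p):   # the 2**n - 1 candidate sums
                normalized = analyte / standards[:, combo].sum(axis=1)
                f_stat, _ = f_oneway(*(normalized[products == g] for g in np.unique(products)))
                if best is None or f_stat > best[0]:
                    best = (f_stat, combo)

        print(f"best standard subset {best[1]} with F = {best[0]:.1f}")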

  8. Affect, Reason, and Persuasion: Advertising Strategies That Predict Affective and Analytic-Cognitive Responses.

    ERIC Educational Resources Information Center

    Chaudhuri, Arjun; Buck, Ross

    1995-01-01

    Develops and tests hypotheses concerning the relationship of specific advertising strategies to affective and analytic cognitive responses of the audience. Analyses undergraduate students' responses to 240 advertisements. Demonstrates that advertising strategy variables accounted substantially for the variance in affective and analytic cognition.…

  9. Optimization of analytical parameters for inferring relationships among Escherichia coli isolates from repetitive-element PCR by maximizing correspondence with multilocus sequence typing data.

    PubMed

    Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S

    2006-09-01

    Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
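    A minimal sketch of the two families of similarity measures compared above, using simulated densitometric profiles (the band-calling threshold and profile shapes are made up): a curve-based coefficient correlates whole profiles, while a band-sharing (Dice) coefficient compares discrete band calls.

        import numpy as np

        def curve_similarity(profile_a, profile_b):
            """Curve-based similarity: Pearson correlation of whole densitometric profiles."""
            return np.corrcoef(profile_a, profile_b)[0, 1]

        def dice_band_similarity(profile_a, profile_b, threshold=0.5):
            """Band-sharing (Dice) coefficient computed from thresholded band calls."""
            a, b = profile_a > threshold, profile_b > threshold
            return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b))

        # Two simulated rep-PCR lane profiles: same bands, slightly shifted and rescaled.
        x = np.linspace(0.0, 1.0, 500)
        bands = [0.2, 0.45, 0.7, 0.85]
        lane1 = sum(np.exp(-0.5 * ((x - b) / 0.01) ** 2) for b in bands)
        lane2 = 0.8 * sum(np.exp(-0.5 * ((x - b - 0.004) / 0.01) ** 2) for b in bands)

        print("curve-based similarity:", round(curve_similarity(lane1, lane2), 3))
        print("band-sharing (Dice) similarity:", round(dice_band_similarity(lane1, lane2), 3))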

  10. Corrections beyond the leading order in the π0 → e+e- process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Husek, T.; Kampf, K.; Novotný, J.

    2016-01-22

    We briefly summarize experimental and theoretical results on the rare decay π0 → e+e-. Two-loop QED corrections are reviewed and the bremsstrahlung contribution beyond the soft-photon approximation is analytically calculated. Using the leading logarithm approximation, the possible contribution of QCD corrections is estimated. The complete result can be used to fit the value of the contact interaction coupling χ^(r) to the recent KTeV experiment, with the result χ^(r)(M_ρ) = 4.5±1.0.

  11. Study of the atmospheric effects on the radiation detected by the sensor aboard orbiting platforms (ERTS/LANDSAT). M.S. Thesis - October 1978; [Ribeirao Preto and Brasilia, Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Morimoto, T.

    1980-01-01

    The author has identified the following significant results. Multispectral scanner data for Brasilia were corrected for atmospheric interference using the LOWTRAN-3 computer program and the analytical solution of the radiative transfer equation. This improved the contrast between two natural targets, and the corrected images from two different dates were more similar than the original ones. Corrected images of MSS data for Ribeirao Preto gave a classification accuracy for sugar cane about 10% higher than that obtained with the original images.

  12. Active vibration control with model correction on a flexible laboratory grid structure

    NASA Technical Reports Server (NTRS)

    Schamel, George C., II; Haftka, Raphael T.

    1991-01-01

    This paper presents experimental and computational comparisons of three active damping control laws applied to a complex laboratory structure. Two reduced structural models were used with one model being corrected on the basis of measured mode shapes and frequencies. Three control laws were investigated, a time-invariant linear quadratic regulator with state estimation and two direct rate feedback control laws. Experimental results for all designs were obtained with digital implementation. It was found that model correction improved the agreement between analytical and experimental results. The best agreement was obtained with the simplest direct rate feedback control.

  13. A microfluidic paper-based analytical device for the assay of albumin-corrected fructosamine values from whole blood samples.

    PubMed

    Boonyasit, Yuwadee; Laiwattanapaisal, Wanida

    2015-01-01

    A method for acquiring albumin-corrected fructosamine values from whole blood using a microfluidic paper-based analytical system that offers substantial improvement over previous methods is proposed. The time required to quantify both serum albumin and fructosamine is shortened to 10 min with detection limits of 0.50 g dl(-1) and 0.58 mM, respectively (S/N = 3). The proposed system also exhibited good within-run and run-to-run reproducibility. The results of the interference study revealed that the acceptable recoveries ranged from 95.1 to 106.2%. The system was compared with currently used large-scale methods (n = 15), and the results demonstrated good agreement among the techniques. The microfluidic paper-based system has the potential to continuously monitor glycemic levels in low resource settings.

  14. Wind tunnel-sidewall-boundary-layer effects in transonic airfoil testing-some correctable, but some not

    NASA Technical Reports Server (NTRS)

    Lynch, F. T.; Johnson, C. B.

    1988-01-01

    The need to correct transonic airfoil wind tunnel test data for the influence of the tunnel sidewall boundary layers, in addition to the well-accepted wall corrections, is addressed. An analytical and experimental investigation was carried out in order to evaluate sidewall boundary-layer effects on transonic airfoil characteristics and to validate proposed corrections and the limits to their application. This investigation involved testing of modern airfoil configurations in two different transonic airfoil test facilities: the 15 x 60 inch two-dimensional insert of the National Aeronautical Establishment (NAE) 5 foot tunnel in Ottawa, Canada, and the two-dimensional test section of the NASA Langley 0.3 m Transonic Cryogenic Tunnel (TCT). Results presented include the effects of variations in sidewall boundary-layer bleed in both facilities, different sidewall boundary-layer correction procedures, tunnel-to-tunnel comparisons of corrected results, and flow conditions with and without separation.

  15. Reactive correction of a maxillary incisor in single-tooth crossbite following periodontal therapy.

    PubMed

    Huang, Chih-Hao; Brunsvold, Michael A

    2005-05-01

    The reactive correction of a single-tooth anterior crossbite following periodontal therapy is described. This case report provides new information regarding correction of a crossbite relationship and confirms existing reports of tooth movement following periodontal therapy. A 39-year-old woman in good general health presented with a history of recurrent periodontal abscesses of a maxillary incisor. Probing depths of the abscessed tooth ranged from 5 to 12 mm, and class 1 mobility was noted. Radiographs revealed that the tooth had previously been treated endodontically. The patient's periodontal diagnosis was generalized chronic moderate to severe periodontitis. Treatment considerations were complicated by a single-tooth crossbite relationship of the involved incisor and clinical evidence that the periodontal abscess communicated with an apical infection. Treatment of the abscess consisted of cause-related therapy, bone grafting, and occlusal adjustment. Five months after surgical treatment, an edge-to-edge incisal relationship was observed, the first indicator of tooth movement. Further correction to a normal incisal relationship resulted 1 year after modification of the proximal contact. At this time, there was normal probing depth with only slight recession and mobility. Bone fill was radiographically noted. It appears that some cases of maxillary incisor crossbite that are complicated by periodontal disease may be corrected, without orthodontic appliances, following periodontal treatment.

  16. Third-rank chromatic aberrations of electron lenses.

    PubMed

    Liu, Zhixiong

    2018-02-01

    In this paper the third-rank chromatic aberration coefficients of round electron lenses are analytically derived and numerically calculated by Mathematica. Furthermore, the numerical results are cross-checked by the differential algebraic (DA) method, which verifies that all the formulas for the third-rank chromatic aberration coefficients are completely correct. It is hoped that this work would be helpful for further chromatic aberration correction in electron microscopy. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Synergistic relationships between Analytical Chemistry and written standards.

    PubMed

    Valcárcel, Miguel; Lucena, Rafael

    2013-07-25

    This paper describes the mutual impact of Analytical Chemistry and several international written standards (norms and guides) related to knowledge management (CEN-CWA 14924:2004), social responsibility (ISO 26000:2010), management of occupational health and safety (OHSAS 18001/2), environmental management (ISO 14001:2004), quality management systems (ISO 9001:2008) and requirements of the competence of testing and calibration laboratories (ISO 17025:2004). The intensity of this impact, based on a two-way influence, is quite different depending on the standard considered. In any case, a new and fruitful approach to Analytical Chemistry based on these relationships can be derived. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
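    For reference, the linear-quadratic state-feedback problem that the dissertation's approximate relationships describe is the standard continuous-time, infinite-horizon one:

        \[
          \min_{u}\; J = \int_{0}^{\infty} \bigl( x^{\mathsf T} Q x + u^{\mathsf T} R u \bigr)\, dt,
          \qquad \dot x = A x + B u,
        \]
        \[
          u = -K x, \qquad K = R^{-1} B^{\mathsf T} P, \qquad
          A^{\mathsf T} P + P A - P B R^{-1} B^{\mathsf T} P + Q = 0,
        \]

    so the derived expressions relate the closed-loop eigenvalues and zeros of this design to the entries of Q and R and to the aeroelastic model parameters (stability and control derivatives, structural damping and natural frequency).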

  19. Change in Minimum Orbit Intersection Distance due to General Relativistic Precession in Small Solar System Bodies

    NASA Astrophysics Data System (ADS)

    Sekhar, Aswin; Valsecchi, Giovanni B.; Asher, David; Werner, Stephanie; Vaubaillon, Jeremie; Li, Gongjie

    2017-06-01

    One of the greatest successes of Einstein's General Theory of Relativity (GR) was the correct prediction of the perihelion precession of Mercury. The closed form expression to compute this precession tells us that substantial GR precession would occur only if the bodies have a combination of both moderately small perihelion distance and semi-major axis. Minimum Orbit Intersection Distance (MOID) is a quantity which helps us to understand the closest proximity of two orbits in space. Hence evaluating MOID is crucial to understand close encounters and collision scenarios better. In this work, we look at the possible scenarios where a small GR precession in argument of pericentre can create substantial changes in MOID for small bodies ranging from meteoroids to comets and asteroids.Previous works have looked into neat analytical techniques to understand different collision scenarios and we use those standard expressions to compute MOID analytically. We find the nature of this mathematical function is such that a relatively small GR precession can lead to drastic changes in MOID values depending on the initial value of argument of pericentre. Numerical integrations were done with the MERCURY package incorporating GR code to test the same effects. A numerical approach showed the same interesting relationship (as shown by analytical theory) between values of argument of pericentre and the peaks or dips in MOID values. There is an overall agreement between both analytical and numerical methods.We find that GR precession could play an important role in the calculations pertaining to MOID and close encounter scenarios in the case of certain small solar system bodies (depending on their initial orbital elements) when long term impact risk possibilities are considered. Previous works have looked into impact probabilities and collision scenarios on planets from different small body populations. This work aims to find certain sub-sets of small bodies where GR could play an interesting role. Certain parallels are drawn between the cases of asteroids, comets and small perihelion distance meteoroid streams.
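    The closed-form expression referred to above is the standard general-relativistic apsidal precession per orbital revolution, written here in its common textbook form:

        \[
          \Delta\varpi \simeq \frac{6\pi G M_{\odot}}{c^{2}\, a\, (1 - e^{2})},
        \]

    which is largest for bodies combining a small semi-major axis \(a\) with a small perihelion distance \(q = a(1 - e)\); the MOID between two orbits is then re-evaluated as the argument of pericentre accumulates this shift.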

  20. A Development Strategy for Creating a Suite of Reference Materials for the in-situ Microanalysis of Non-conventional Raw Materials

    NASA Astrophysics Data System (ADS)

    Renno, A. D.; Merchel, S.; Michalak, P. P.; Munnik, F.; Wiedenbeck, M.

    2010-12-01

    Recent economic trends regarding the supply of rare metals readily justify scientific research into non-conventional raw materials, where a particular need is a better understanding of the relationship between mineralogy, microstructure and the distribution of key metals within ore deposits (geometallurgy). Achieving these goals will require an extensive usage of in-situ microanalytical techniques capable of spatially resolving material heterogeneities which can be key for understanding better resource utilization. The availability of certified reference materials (CRMs) is an essential prerequisite for (1) validating new analytical methods, (2) demonstrating data quality to the contracting authorities, (3) supporting method development and instrument calibration, and (4) establishing traceability between new analytical approaches and existing data sets. This need has led to the granting of funding by the European Union and the German Free State of Saxony for a program to develop such reference materials. This effort will apply the following strategies during the selection of the phases: (1) use exclusively synthetic minerals, thereby providing large volumes of homogeneous starting material; (2) focus on matrices which are capable of incorporating many ‘important’ elements while avoiding exotic compositions which would not be optimal matrix matches; (3) emphasise those phases which remain stable during the various microanalytical procedures. This initiative will assess the homogeneity of the reference materials at sampling sizes ranging between 50 and 1 µm; it is also intended to document crystal-structural homogeneity, as this may potentially impact specific analytical methods. As far as possible, both definitive methods and methods involving matrix corrections will be used for determining the compositions of the individual materials. A critical challenge will be the validation of the determination of analyte concentrations at sub-µg sampling masses. It is planned to cooperate with those who are interested in the development of such reference materials and we invite them to take part in round-robin exercises.

  1. Analyte species and concentration identification using differentially functionalized microcantilever arrays and artificial neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senesac, Larry R; Datskos, Panos G; Sepaniak, Michael J

    2006-01-01

    In the present work, we have performed analyte species and concentration identification using an array of ten differentially functionalized microcantilevers coupled with a back-propagation artificial neural network pattern recognition algorithm. The array consists of ten nanostructured silicon microcantilevers functionalized by polymeric and gas chromatography phases and macrocyclic receptors as spatially dense, differentially responding sensing layers for identification and quantitation of individual analyte(s) and their binary mixtures. The array response (i.e. cantilever bending) to analyte vapor was measured by an optical readout scheme and the responses were recorded for a selection of individual analytes as well as several binary mixtures. An artificial neural network (ANN) was designed and trained to recognize not only the individual analytes and binary mixtures, but also to determine the concentration of individual components in a mixture. To the best of our knowledge, ANNs have not been applied to microcantilever array responses previously to determine concentrations of individual analytes. The trained ANN correctly identified the eleven test analyte(s) as individual components, most with probabilities greater than 97%, whereas it did not misidentify an unknown (untrained) analyte. Demonstrated unique aspects of this work include an ability to measure binary mixtures and provide both qualitative (identification) and quantitative (concentration) information with array-ANN-based sensor methodologies.
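    The record above does not give the network architecture, training data or measurement units; purely as an illustrative sketch of the general approach (a back-propagation network mapping a ten-cantilever response pattern to component concentrations), the following Python snippet trains a small multilayer perceptron on simulated responses. The analyte signatures, noise level and network size are all invented placeholders.

        # Minimal sketch: concentration recovery from a 10-cantilever array (hypothetical data).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Hypothetical deflection per unit concentration of each analyte on each of
        # the 10 differentially functionalized cantilevers.
        signatures = rng.uniform(0.2, 1.0, size=(2, 10))      # 2 analytes x 10 cantilevers

        # Simulated training set: random binary mixtures plus measurement noise.
        conc = rng.uniform(0.0, 5.0, size=(500, 2))           # concentrations (arbitrary units)
        responses = conc @ signatures + rng.normal(0, 0.02, size=(500, 10))

        # Back-propagation ANN mapping the array response to component concentrations.
        ann = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=5000, random_state=0)
        ann.fit(responses, conc)

        # Predict the composition of an unseen binary mixture.
        true_mix = np.array([[1.5, 3.0]])
        measured = true_mix @ signatures + rng.normal(0, 0.02, size=(1, 10))
        print(ann.predict(measured))                          # expected to be roughly [1.5, 3.0]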

  2. Quantitative assessment of prevalence of pre-analytical variables and their effect on coagulation assay. Can intervention improve patient safety?

    PubMed

    Bhushan, Ravi; Sen, Arijit

    2017-04-01

    Very few Indian studies exist on the evaluation of pre-analytical variables affecting "Prothrombin Time", the commonest coagulation assay performed. The study was performed in an Indian tertiary care setting with the aim of quantitatively assessing the prevalence of pre-analytical variables and their effects on the results (patient safety) for the Prothrombin Time test. The study also evaluated whether intervention corrected the results. The study first evaluated the prevalence of the various pre-analytical variables detected in samples sent for Prothrombin Time testing. Wherever possible, samples with the detected variables were tested and the results noted. Samples from the same patients were then recollected and retested, ensuring that no pre-analytical variable was present. The results were again noted to check for the difference the intervention produced. The study evaluated 9989 samples received for PT/INR over a period of 18 months. The prevalence of different pre-analytical variables was found to be 862 (8.63%). The proportions of the various pre-analytical variables detected were haemolysed samples 515 (5.16%), over filled vacutainers 62 (0.62%), under filled vacutainers 39 (0.39%), low values 205 (2.05%), clotted samples 11 (0.11%), wrong labeling 4 (0.04%), wrong vacutainer use 2 (0.02%), chylous samples 7 (0.07%) and samples with more than one variable 17 (0.17%). The percentage of samples showing errors was compared for four of the variables, since these could be tested with and without the variable in place. The reduction in error percentage was 91.5%, 69.2%, 81.5% and 95.4% post intervention for haemolysed, overfilled, under filled and samples collected with excess pressure at phlebotomy, respectively. Correcting the variables reduced the error percentage to a great extent for these four variables; hence these variables are found to affect "Prothrombin Time" testing and can hamper patient safety.

  3. Importance of implementing an analytical quality control system in a core laboratory.

    PubMed

    Marques-Garcia, F; Garcia-Codesal, M F; Caro-Narros, M R; Contreras-SanFeliciano, T

    2015-01-01

    The aim of the clinical laboratory is to provide useful information for screening, diagnosis and monitoring of disease. The laboratory should ensure the quality of the extra-analytical and analytical processes, based on set criteria. To do this, it develops and implements a system of internal quality control designed to detect errors, and compares its data with those of other laboratories through external quality control. In this way it has a tool to check whether the objectives set are being fulfilled and, in case of errors, to take corrective actions and ensure the reliability of the results. This article sets out to describe the design and implementation of an internal quality control protocol, as well as its periodic assessment at six-month intervals to determine compliance with pre-determined specifications (Stockholm Consensus(1)). A total of 40 biochemical and 15 immunochemical methods were evaluated using three different control materials. Next, a standard operating procedure was planned to develop a system of internal quality control that included calculating the error of the analytical process, setting quality specifications, and verifying compliance. The quality control data were then statistically summarised as means, standard deviations, and coefficients of variation, as well as systematic, random, and total errors. The quality specifications were then fixed and the operational rules to apply in the analytical process were determined. Finally, our data were compared with those of other laboratories through an external quality assurance program. The development of an analytical quality control system is a highly structured process. It should be designed to detect errors that compromise the stability of the analytical process. The laboratory should review its quality indicators (systematic, random and total error) at regular intervals, in order to ensure that they are meeting pre-determined specifications and, if not, apply the appropriate corrective actions. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
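    As a hedged illustration of the statistics mentioned above (means, standard deviations, coefficients of variation, and systematic, random and total error), the sketch below computes them for a hypothetical internal quality control series; the target value, the data and the 1.65 coverage factor in the total-error estimate are assumptions, not values from the article.

        # Hypothetical internal QC series for one analyte (arbitrary units).
        import numpy as np

        qc = np.array([5.02, 4.97, 5.10, 5.05, 4.93, 5.08, 5.01, 4.99])
        target = 5.00                           # assigned value of the control material

        mean = qc.mean()
        sd = qc.std(ddof=1)                     # estimate of random error
        cv = 100 * sd / mean                    # coefficient of variation, %
        bias = 100 * (mean - target) / target   # systematic error, %
        total_error = abs(bias) + 1.65 * cv     # a common total-error estimate, %

        print(f"mean={mean:.3f}  SD={sd:.3f}  CV={cv:.2f}%  bias={bias:.2f}%  TE={total_error:.2f}%")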

  4. [Pre-analytical quality in fluid samples cytopathology: Results of a survey from the French Society of Clinical Cytology].

    PubMed

    Courtade-Saïdi, Monique; Fleury Feith, Jocelyne

    2015-10-01

    The pre-analytical step includes sample collection, preparation, transportation and storage in the pathology unit where the diagnosis is performed. The pathologist ensures that pre-analytical conditions are in line with expectations. The lack of standardization for handling cytological samples makes this pre-analytical step difficult to harmonize. Moreover, this step depends on the nature of the sample: fresh liquid or fixed material, air-dried smears, liquid-based cytology. The aim of the study was to review the practices of French pathology structures during the pre-analytical phase for cytological fluids such as broncho-alveolar lavage fluid (BALF), serous fluids and urine. A survey based on the pre-analytical chapter of the ISO 15189 standard was sent to 191 French pathology structures (105 public and 86 private). Fifty-six laboratories replied to the survey. Ninety-five per cent have a computerized management system and 70% a manual on sample handling. The general instructions requested for patient and sample identification were largely correctly completed, with short routing times and prescription of additional tests. By contrast, practices vary concerning the clinical information requested, the type of tubes used for collecting fluids, the volumes required, and the actions taken in case of non-conformity. For the specific items concerning BALF, serous fluids and urine, the survey showed great heterogeneity in sample collection, fixation and clinical information. This survey demonstrates that the pre-analytical quality for BALF, serous fluids and urine is not optimal and that some corrections of current practices are recommended, with standardization of numerous steps, in order to increase the reproducibility of additional tests such as immunocytochemistry, cytogenetics and molecular biology. Some recommendations have been written. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  5. Do impression management and self-deception distort self-report measures with content of dynamic risk factors in offender samples? A meta-analytic review.

    PubMed

    Hildebrand, Martin; Wibbelink, Carlijn J M; Verschuere, Bruno

    Self-report measures provide an important source of information in correctional/forensic settings, yet at the same time the validity of that information is often questioned because self-reports are thought to be highly vulnerable to self-presentation biases. Primary studies in offender samples have provided mixed results with regard to the impact of socially desirable responding on self-reports. The main aim of the current study was therefore to investigate-via a meta-analytic review of published studies-the association between the two dimensions of socially desirable responding, impression management and self-deceptive enhancement, and self-report measures with content of dynamic risk factors using the Balanced Inventory of Desirable Responding (BIDR) in offender samples. These self-report measures were significantly and negatively related to self-deception (r = -0.120, p < 0.001; k = 170 effect sizes) and impression management (r = -0.158, p < 0.001; k = 157 effect sizes), yet there was evidence of publication bias for the impression management effect, with the trim and fill method indicating that the relation is probably even smaller (r = -0.07). The magnitude of the effect sizes was small. Moderation analyses suggested that type of dynamic risk factor (e.g., antisocial cognition versus antisocial personality), incentives, and publication year affected the relationship between impression management and self-report measures with content of dynamic risk factors, whereas sample size, setting (e.g., incarcerated, community), and publication year influenced the relation between self-deception and these self-report measures. The results indicate that the use of self-report measures to assess dynamic risk factors in correctional/forensic settings is not inevitably compromised by socially desirable responding, yet caution is warranted for some risk factors (antisocial personality traits), particularly when incentives are at play. Copyright © 2018 Elsevier Ltd. All rights reserved.
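    As a generic illustration of how a pooled correlation of this kind is computed (not the authors' actual procedure for dependent effect sizes), the sketch below pools invented study correlations with a fixed-effect Fisher-z average; the r values and sample sizes are placeholders.

        # Fixed-effect pooling of correlations via Fisher's z (hypothetical studies).
        import numpy as np

        r = np.array([-0.10, -0.18, -0.05, -0.22, -0.12])   # study correlations (invented)
        n = np.array([120, 85, 200, 60, 150])                # study sample sizes (invented)

        z = np.arctanh(r)                 # Fisher z transform
        w = n - 3                         # inverse-variance weights for z
        z_bar = np.sum(w * z) / np.sum(w)
        se = 1 / np.sqrt(np.sum(w))

        pooled_r = np.tanh(z_bar)
        ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
        print(pooled_r, ci)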

  6. Analytical electron microscopy in mineralogy; exsolved phases in pyroxenes

    USGS Publications Warehouse

    Nord, G.L.

    1982-01-01

    Analytical scanning transmission electron microscopy has been successfully used to characterize the structure and composition of lamellar exsolution products in pyroxenes. At operating voltages of 100 and 200 keV, microanalytical techniques of x-ray energy analysis, convergent-beam electron diffraction, and lattice imaging have been used to chemically and structurally characterize exsolution lamellae only a few unit cells wide. Quantitative X-ray energy analysis using ratios of peak intensities has been adopted for the U.S. Geological Survey AEM in order to study the compositions of exsolved phases and changes in compositional profiles as a function of time and temperature. The quantitative analysis procedure involves 1) removal of instrument-induced background, 2) reduction of contamination, and 3) measurement of correction factors obtained from a wide range of standard compositions. The peak-ratio technique requires that the specimen thickness at the point of analysis be thin enough to make absorption corrections unnecessary (i.e., to satisfy the "thin-foil criteria"). In pyroxenes, the calculated "maximum thicknesses" range from 130 to 1400 nm for the ratios Mg/Si, Fe/Si, and Ca/Si; these "maximum thicknesses" have been contoured in pyroxene composition space as a guide during analysis. Analytical spatial resolutions of 50-100 nm have been achieved in AEM at 200 keV from the composition-profile studies, and analytical reproducibility in AEM from homogeneous pyroxene standards is ±1.5 mol% endmember. © 1982.
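    The peak-ratio quantification described above is commonly written in Cliff-Lorimer form, C_A/C_B = k_AB (I_A/I_B), valid under the thin-foil criterion; the snippet below is only a generic sketch of that relation with invented k-factors and peak intensities, not the USGS correction factors mentioned in the record.

        # Generic thin-foil peak-ratio quantification (Cliff-Lorimer form), hypothetical values.
        # Background-subtracted characteristic X-ray peak intensities (counts).
        I = {"Mg": 1200.0, "Fe": 800.0, "Ca": 650.0, "Si": 3000.0}

        # k-factors relating intensity ratios to concentration ratios, relative to Si (invented).
        k_si = {"Mg": 1.1, "Fe": 1.4, "Ca": 1.2, "Si": 1.0}

        # Concentration ratios relative to Si, then normalised so the fractions sum to 1.
        ratio = {el: k_si[el] * I[el] / I["Si"] for el in I}
        total = sum(ratio.values())
        mass_fraction = {el: ratio[el] / total for el in ratio}
        print(mass_fraction)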

  7. An analytical SMASH procedure (ASP) for sensitivity-encoded MRI.

    PubMed

    Lee, R F; Westgate, C R; Weiss, R G; Bottomley, P A

    2000-05-01

    The simultaneous acquisition of spatial harmonics (SMASH) method of imaging with detector arrays can reduce the number of phase-encoding steps, and hence the MRI scan time, several-fold. The original approach utilized numerical gradient-descent fitting with the coil sensitivity profiles to create a set of composite spatial harmonics to replace the phase-encoding steps. Here, an analytical approach for generating the harmonics is presented. A transform is derived to project the harmonics onto a set of sensitivity profiles. A sequence of Fourier, Hilbert, and inverse Fourier transform is then applied to analytically eliminate spatially dependent phase errors from the different coils while fully preserving the spatial-encoding. By combining the transform and phase correction, the original numerical image reconstruction method can be replaced by an analytical SMASH procedure (ASP). The approach also allows simulation of SMASH imaging, revealing a criterion for the ratio of the detector sensitivity profile width to the detector spacing that produces optimal harmonic generation. When detector geometry is suboptimal, a group of quasi-harmonics arises, which can be corrected and restored to pure harmonics. The simulation also reveals high-order harmonic modulation effects, and a demodulation procedure is presented that enables application of ASP to a large number of detectors. The method is demonstrated on a phantom and humans using a standard 4-channel phased-array MRI system. Copyright 2000 Wiley-Liss, Inc.
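    The record does not spell out the transform sequence in detail; as a loose, generic illustration of the kind of tool involved (an analytic signal built with FFT/Hilbert operations, separating a slowly varying phase from an envelope so the phase can be removed while the envelope is preserved), the sketch below uses a synthetic 1-D profile. It is not the ASP algorithm itself, and all signal parameters are invented.

        # Generic analytic-signal demonstration (not the ASP reconstruction).
        import numpy as np
        from scipy.signal import hilbert

        x = np.linspace(0, 1, 512, endpoint=False)
        envelope = np.exp(-((x - 0.5) / 0.15) ** 2)          # sensitivity-profile-like envelope
        phase_err = 0.8 * np.pi * x                          # slowly varying phase error
        signal = envelope * np.cos(2 * np.pi * 8 * x + phase_err)

        analytic = hilbert(signal)                           # computed internally via FFT
        recovered_envelope = np.abs(analytic)                # ~ envelope (narrowband assumption)
        instantaneous_phase = np.unwrap(np.angle(analytic))  # carrier plus phase error, removable

        print(np.max(np.abs(recovered_envelope - envelope))) # small residual expected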

  8. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    NASA Technical Reports Server (NTRS)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).

  9. Towards Secure and Trustworthy Cyberspace: Social Media Analytics on Hacker Communities

    ERIC Educational Resources Information Center

    Li, Weifeng

    2017-01-01

    Social media analytics is a critical research area spawned by the increasing availability of rich and abundant online user-generated content. So far, social media analytics has had a profound impact on organizational decision making in many aspects, including product and service design, market segmentation, customer relationship management, and…

  10. Spherical aberration correction with an in-lens N-fold symmetric line currents model.

    PubMed

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji

    2018-04-01

    In our previous works, we have proposed N-SYLC (N-fold symmetric line currents) models for aberration correction. In this paper, we propose "in-lens N-SYLC" model, where N-SYLC overlaps rotationally symmetric lens. Such overlap is possible because N-SYLC is free of magnetic materials. We analytically prove that, if certain parameters of the model are optimized, an in-lens 3-SYLC (N = 3) doublet can correct 3rd order spherical aberration. By computer simulation, we show that the required excitation current for correction is less than 0.25 AT for beam energy 5 keV, and the beam size after correction is smaller than 1 nm at the corrector image plane for initial slope less than 4 mrad. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. General Relativistic Precession in Small Solar System Bodies

    NASA Astrophysics Data System (ADS)

    Sekhar, Aswin; Werner, Stephanie; Hoffmann, Volker; Asher, David; Vaubaillon, Jeremie; Hajdukova, Maria; Li, Gongjie

    2016-10-01

    Introduction: One of the greatest successes of Einstein's General Theory of Relativity (GR) was the correct prediction of the perihelion precession of Mercury. The closed form expression to compute this precession tells us that substantial GR precession would occur only if the bodies have a combination of both moderately small perihelion distance and semi-major axis. Minimum Orbit Intersection Distance (MOID) is a quantity which helps us to understand the closest proximity of two orbits in space. Hence evaluating MOID is crucial to understand close encounters and collision scenarios better. In this work, we look at the possible scenarios where a small GR precession in argument of pericentre (ω) can create substantial changes in MOID for small bodies ranging from meteoroids to comets and asteroids. Analytical Approach and Numerical Integrations: Previous works have looked into neat analytical techniques to understand different collision scenarios and we use those standard expressions to compute MOID analytically. We find the nature of this mathematical function is such that a relatively small GR precession can lead to drastic changes in MOID values depending on the initial value of ω. Numerical integrations were done with the MERCURY package incorporating the GR code to test the same effects. The numerical approach showed the same interesting relationship (as shown by analytical theory) between values of ω and the peaks/dips in MOID values. Previous works have shown that GR precession suppresses Kozai oscillations and this aspect was verified using our integrations. There is an overall agreement between both analytical and numerical methods. Summary and Discussion: We find that GR precession could play an important role in the calculations pertaining to MOID and close encounter scenarios in the case of certain small solar system bodies (depending on their initial orbital elements). Previous works have looked into impact probabilities and collision scenarios on planets from different small body populations. This work aims to find certain sub-sets of orbits where GR could play an interesting role. Certain parallels are drawn between the cases of asteroids, comets and small perihelion distance meteoroid streams.

  12. Characterizing Uncertainty In Electrical Resistivity Tomography Images Due To Subzero Temperature Variability

    NASA Astrophysics Data System (ADS)

    Herring, T.; Cey, E. E.; Pidlisecky, A.

    2017-12-01

    Time-lapse electrical resistivity tomography (ERT) is used to image changes in subsurface electrical conductivity (EC), e.g. due to a saline contaminant plume. Temperature variation also produces an EC response, which interferes with the signal of interest. Temperature compensation requires the temperature distribution and the relationship between EC and temperature, but this relationship at subzero temperatures is not well defined. The goal of this study is to examine how uncertainty in the subzero EC/temperature relationship manifests in temperature corrected ERT images, especially with respect to relevant plume parameters (location, contaminant mass, etc.). First, a lab experiment was performed to determine the EC of fine-grained glass beads over a range of temperatures (-20° to 20° C) and saturations. The measured EC/temperature relationship was then used to add temperature effects to a hypothetical EC model of a conductive plume. Forward simulations yielded synthetic field data to which temperature corrections were applied. Varying the temperature/EC relationship used in the temperature correction and comparing the temperature corrected ERT results to the synthetic model enabled a quantitative analysis of the error of plume parameters associated with temperature variability. Modeling possible scenarios in this way helps to establish the feasibility of different time-lapse ERT applications by quantifying the uncertainty associated with parameter(s) of interest.
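    For context, the conventional above-freezing correction is a linear ratio model that references EC to 25 degrees C; the sketch below shows that standard form with an assumed coefficient of about 2% per degree C. It deliberately does not capture the subzero behaviour, which is exactly the uncertainty this study quantifies.

        # Standard linear-ratio temperature compensation of electrical conductivity,
        # referenced to 25 degC. The coefficient alpha ~ 0.02 /degC is a common
        # assumption and is not valid for the subzero range studied here.
        def ec_at_25(ec_measured, temp_c, alpha=0.02):
            """Convert a measured EC value to its equivalent at 25 degC."""
            return ec_measured / (1.0 + alpha * (temp_c - 25.0))

        print(ec_at_25(0.85, 10.0))   # 0.85 S/m measured at 10 degC -> ~1.21 S/m at 25 degC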

  13. The relationship between urban forests and income: A meta-analysis.

    PubMed

    Gerrish, Ed; Watkins, Shannon Lea

    2018-02-01

    Urban trees provide substantial public health and public environmental benefits. However, scholarly works suggest that urban trees may be unequally distributed among poor and minority urban communities, meaning that these communities are potentially being deprived of public environmental benefits, a form of environmental injustice. The evidence of this problem is not uniform however, and evidence of inequity varies in size and significance across studies. This variation in results suggests the need for a research synthesis and meta-analysis. We employed a systematic literature search to identify original studies which examined the relationship between urban forest cover and income (n=61) and coded each effect size (n=332). We used meta-analytic techniques to estimate the average (unconditional) relationship between urban forest cover and income and to estimate the impact that methodological choices, measurement, publication characteristics, and study site characteristics had on the magnitude of that relationship. We leveraged variation in study methodology to evaluate the extent to which results were sensitive to methodological choices often debated in the geographic and environmental justice literature but not yet evaluated in environmental amenities research. We found evidence of income-based inequity in urban forest cover (unconditional mean effect size = 0.098; s.e. = .017) that was robust across most measurement and methodological strategies in original studies and results did not differ systematically with study site characteristics. Studies that controlled for spatial autocorrelation, a violation of independent errors, found evidence of substantially less urban forest inequity; future research in this area should test and correct for spatial autocorrelation.

  14. Investigation of the relationship between illogical thoughts and dependence on others and marriage compatibility in the Iranian Veterans exposed to chemicals in Iran-Iraq War.

    PubMed

    Afkar, A H; Mahbobubi, M; Neyakan Shahri, M; Mohammadi, M; Jalilian, F; Moradi, F

    2014-05-08

    Marital satisfaction is one of the main determinants of a family's proper functioning. A large number of veterans have been reported to suffer from depression, anxiety, mood disorders, post-traumatic stress disorder, and physical disorders. The objective of this study is to examine the association between illogical thoughts, dependence on others, and marriage compatibility in Iranian veterans exposed to chemicals in the Iran-Iraq War. The present cross-sectional, analytical study was conducted on 200 veterans exposed to chemicals who were covered by the Foundation of Martyrs and Veterans Affairs, Gilangharb, Kermanshah, Iran. The study sample size was determined according to the Krejcie and Morgan formula and the subjects were selected through random sampling. The study data were collected using a marriage compatibility questionnaire, an illogical thoughts questionnaire, and a dependence on others questionnaire. The study data were analyzed using the SPSS statistical software (version 18). Pearson correlation coefficient, multiple regression, and t-test were used in order to determine the relationships among the variables and compare the means. The findings of the current study revealed no significant relationship between dependence on others, anxious attention, helplessness, avoiding problems, perfectionism, or autonomy and marriage compatibility. However, a significant relationship was found between failure and marriage compatibility. Overall, the findings of the present study showed that the veterans of Gilangharb did not have disorders, but depended on others, particularly their spouses, due to their abnormal physical status. Sometimes they cannot even perform their personal tasks, which results in their dependence on others, eventually putting the veterans under pressure and stress.

  15. Emergency Department Use in the US-Mexico Border Region and Violence in Mexico: Is There a Relationship?

    PubMed

    Geissler, Kimberley H; Holmes, George M

    2015-01-01

    This study assessed the association between homicide rates in northern Mexico and potentially avoidable use of emergency departments (ED) in the US-Mexico border region. The border region is largely rural and underserved, making the identification and correction of potential barriers to access crucial. We used secondary data from state inpatient and ED discharge databases for California and Arizona for 2005-2010. A retrospective observational analysis using generalized linear models was used to determine whether the probability that an ED encounter was potentially avoidable was associated with homicide rates in the nearest Mexican municipality. To conduct the analysis, the locations of ED encounters were identified and matched with homicide rates in the nearest Mexican municipality and regional characteristics. The probability that an ED encounter was potentially avoidable was calculated using the Billings ED algorithm. We found that 77% of ED encounters were potentially avoidable, with a higher percentage in border counties. There was no statistically significant relationship between homicide rates and the probability that an ED encounter was for a potentially avoidable condition for the full analytic sample (n = 24,859,273) and the uninsured and underinsured in the sample (n = 11,700,123). A substantial majority of ED encounters in the US-Mexico border region were potentially avoidable. However, there was not a strong relationship between homicide rates in northern Mexico and the distribution of ED discharges in Arizona and California. Given the large percentage of potentially avoidable ED encounters and the ongoing violence in Mexico, continuing to monitor this relationship is important. © 2015 National Rural Health Association.

  16. Visuomotor Map Determines How Visually Guided Reaching Movements are Corrected Within and Across Trials

    PubMed Central

    Hirashima, Masaya

    2016-01-01

    When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation. PMID:27275006

  17. Visuomotor Map Determines How Visually Guided Reaching Movements are Corrected Within and Across Trials.

    PubMed

    Hayashi, Takuji; Yokoi, Atsushi; Hirashima, Masaya; Nozaki, Daichi

    2016-01-01

    When a visually guided reaching movement is unexpectedly perturbed, it is implicitly corrected in two ways: immediately after the perturbation by feedback control (online correction) and in the next movement by adjusting feedforward motor commands (offline correction or motor adaptation). Although recent studies have revealed a close relationship between feedback and feedforward controls, the nature of this relationship is not yet fully understood. Here, we show that both implicit online and offline movement corrections utilize the same visuomotor map for feedforward movement control that transforms the spatial location of visual objects into appropriate motor commands. First, we artificially distorted the visuomotor map by applying opposite visual rotations to the cursor representing the hand position while human participants reached for two different targets. This procedure implicitly altered the visuomotor map so that changes in the movement direction to the target location were more insensitive or more sensitive. Then, we examined how such visuomotor map distortion influenced online movement correction by suddenly changing the target location. The magnitude of online movement correction was altered according to the shape of the visuomotor map. We also examined offline movement correction; the aftereffect induced by visual rotation in the previous trial was modulated according to the shape of the visuomotor map. These results highlighted the importance of the visuomotor map as a foundation for implicit motor control mechanisms and the intimate relationship between feedforward control, feedback control, and motor adaptation.

  18. "Ser" and "Estar": Corrective Input to Children's Errors of the Spanish Copula Verbs

    ERIC Educational Resources Information Center

    Holtheuer, Carolina; Rendle-Short, Johanna

    2013-01-01

    Evidence for the role of corrective input as a facilitator of language acquisition is inconclusive. Studies show links between corrective input and grammatical use of some, but not other, language structures. The present study examined relationships between corrective parental input and children's errors in the acquisition of the Spanish copula…

  19. Enhancing Correctional Education through Community Theatre: The Benin Prison Experience

    ERIC Educational Resources Information Center

    Okhakhu, Marcel; Evawoma-Enuku, Usiwoma

    2011-01-01

    This paper seeks to establish the relationship between Popular Theatre and Correctional Education. The Benin Prison experiment is the springboard for this laudable and valuable link. The paper strives stridently to show the value of Popular Theatre as a vehicle for achieving correctional values in a Correction centre. More than anything else, it…

  20. Some comments on mapping from disease-specific to generic health-related quality-of-life scales.

    PubMed

    Palta, Mari

    2013-01-01

    An article by Lu et al. in this issue of Value in Health addresses the mapping of treatment or group differences in disease-specific measures (DSMs) of health-related quality of life onto differences in generic health-related quality-of-life scores, with special emphasis on how the mapping is affected by the reliability of the DSM. In the proposed mapping, a factor analytic model defines a conversion factor between the scores as the ratio of factor loadings. Hence, the mapping applies to convert true underlying scales and has desirable properties facilitating the alignment of instruments and understanding their relationship in a coherent manner. It is important to note, however, that when DSM means or differences in mean DSMs are estimated, their mapping is still of a measurement error-prone predictor, and the correct conversion coefficient is the true mapping multiplied by the reliability of the DSM in the relevant sample. In addition, the proposed strategy for estimating the factor analytic mapping in practice requires assumptions that may not hold. We discuss these assumptions and how they may be the reason we obtain disparate estimates of the mapping factor in an application of the proposed methods to groups of patients. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
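    Written out, the two relations summarised above (the conversion factor as a ratio of factor loadings, and its attenuation by the reliability of the DSM) take roughly the following form; the symbols are generic placeholders rather than the article's notation:

        \[
        \beta_{\text{true}} = \frac{\lambda_{\text{generic}}}{\lambda_{\text{DSM}}},
        \qquad
        \beta_{\text{observed}} = \beta_{\text{true}} \times \rho_{\text{DSM}}
        \]

    where the \lambda are factor loadings on the common latent trait and \rho_{\text{DSM}} is the reliability of the disease-specific measure in the relevant sample.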

  1. Estimating state of charge and health of lithium-ion batteries with guided waves using built-in piezoelectric sensors/actuators

    NASA Astrophysics Data System (ADS)

    Ladpli, Purim; Kopsaftopoulos, Fotis; Chang, Fu-Kuo

    2018-04-01

    This work presents the feasibility of monitoring state of charge (SoC) and state of health (SoH) of lithium-ion pouch batteries with acousto-ultrasonic guided waves. The guided waves are propagated and sensed using low-profile, built-in piezoelectric disc transducers that can be retrofitted onto off-the-shelf batteries. Both experimental and analytical studies are performed to understand the relationship between guided waves generated in a pitch-catch mode and battery SoC/SoH. The preliminary experiments on representative pouch cells show that the changes in time of flight (ToF) and signal amplitude (SA) resulting from shifts in the guided wave signals correlate strongly with the electrochemical charge-discharge cycling and aging. An analytical acoustic model is developed to simulate the variations in electrode moduli and densities during cycling, which correctly validates the absolute values and range of experimental ToF. It is further illustrated via a statistical study that ToF and SA can be used in a prediction model to accurately estimate SoC/SoH. Additionally, by using multiple sensors in a network configuration on the same battery, a significantly more reliable and accurate SoC/SoH prediction is achieved. The indicative results from this study can be extended to develop a unified guided-wave-based framework for SoC/SoH monitoring of many lithium-ion battery applications.
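    The record does not give the signal-processing details; as a hedged sketch of one generic way to obtain a time-of-flight shift between a baseline and a current guided-wave signal, the snippet below uses cross-correlation on synthetic tone bursts. The sampling rate, pulse shape and 3-microsecond shift are all invented.

        # Estimate the ToF shift between two guided-wave signals by cross-correlation.
        import numpy as np

        fs = 10e6                                    # 10 MHz sampling rate (assumed)
        t = np.arange(0, 200e-6, 1 / fs)             # 200 us record

        def tone_burst(t0, f0=200e3, cycles=5):
            """Hann-windowed tone burst arriving at time t0."""
            tau = t - t0
            T = cycles / f0
            win = np.where((tau >= 0) & (tau <= T), 0.5 - 0.5 * np.cos(2 * np.pi * tau / T), 0.0)
            return win * np.sin(2 * np.pi * f0 * tau)

        baseline = tone_burst(50e-6)
        current = 0.8 * tone_burst(53e-6)            # later and weaker after cycling/aging

        xcorr = np.correlate(current, baseline, mode="full")
        lag = np.argmax(xcorr) - (len(baseline) - 1)
        tof_shift = lag / fs                          # ~3e-6 s
        amp_ratio = current.max() / baseline.max()    # ~0.8
        print(tof_shift, amp_ratio)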

  2. Measuring myokines with cardiovascular functions: pre-analytical variables affecting the analytical output.

    PubMed

    Lombardi, Giovanni; Sansoni, Veronica; Banfi, Giuseppe

    2017-08-01

    In the last few years, a growing number of molecules have been associated with an endocrine function of the skeletal muscle. Circulating myokine levels, in turn, have been associated with several pathophysiological conditions, including cardiovascular ones. However, data from different studies are often not completely comparable or even discordant. This would be due, at least in part, to the whole set of situations related to the preparation of the patient prior to blood sampling, the blood sampling procedure, and sample processing and/or storage. This entire process constitutes the pre-analytical phase. The importance of the pre-analytical phase is often not considered; however, in routine diagnostics, 70% of the errors occur in this phase. Moreover, errors made during the pre-analytical phase are carried over into the analytical phase and affect the final output. In research, for example, when samples are collected over a long time and by different laboratories, a standardized procedure for sample collection and a correct procedure for sample storage are acknowledged requirements. In this review, we discuss the pre-analytical variables potentially affecting the measurement of myokines with cardiovascular functions.

  3. On μe-scattering at NNLO in QED

    NASA Astrophysics Data System (ADS)

    Mastrolia, P.; Passera, M.; Primo, A.; Schubert, U.; Torres Bobadilla, W. J.

    2018-05-01

    We report on the current status of the analytic evaluation of the two-loop corrections to μe scattering in Quantum Electrodynamics, presenting the state-of-the-art techniques which have been developed to address this challenging task.

  4. Evaluation of Analytical Modeling Functions for the Phonation Onset Process.

    PubMed

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
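    The following minimal sketch illustrates the kind of computation selected above (a fourth-order polynomial approximation of the glottal area waveform envelope, with the onset interval read off between 32.2% and 67.8% of the saturation amplitude). The synthetic envelope, frame rate and threshold handling are assumptions, not the authors' implementation.

        # Sketch: 4th-order polynomial fit of a synthetic GAW envelope and VOT readout.
        import numpy as np

        fps = 8000.0                                       # high-speed camera frame rate
        t = np.arange(0, 0.2, 1 / fps)                     # 200 ms around phonation onset
        envelope = 1.0 / (1.0 + np.exp(-60 * (t - 0.08)))  # synthetic rising GAW envelope

        coeff = np.polyfit(t, envelope, 4)                 # fourth-order polynomial approximation
        fit = np.polyval(coeff, t)

        saturation = fit.max()
        t_low = t[np.argmax(fit >= 0.322 * saturation)]    # first frame above 32.2%
        t_high = t[np.argmax(fit >= 0.678 * saturation)]   # first frame above 67.8%
        print(f"VOT ~ {(t_high - t_low) * 1000:.1f} ms")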

  5. Advances in data processing for open-path Fourier transform infrared spectrometry of greenhouse gases.

    PubMed

    Shao, Limin; Griffiths, Peter R; Leytem, April B

    2010-10-01

    The automated quantification of three greenhouse gases, ammonia, methane, and nitrous oxide, in the vicinity of a large dairy farm by open-path Fourier transform infrared (OP/FT-IR) spectrometry at intervals of 5 min is demonstrated. Spectral pretreatment, including the automated detection and correction of the effect of interruption of the infrared beam by a moving object, and the automated correction for the nonlinear detector response, is applied to the measured interferograms. Two ways of obtaining quantitative data from OP/FT-IR data are described. The first, which is installed in a recently acquired commercial OP/FT-IR spectrometer, is based on classical least-squares (CLS) regression, and the second is based on partial least-squares (PLS) regression. It is shown that CLS regression only gives accurate results if the absorption features of the analytes are located in very short spectral intervals where lines due to atmospheric water vapor are absent or very weak; of the three analytes examined, only ammonia fell into this category. On the other hand, PLS regression allowed what appeared to be accurate results to be obtained for all three analytes.
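    In generic form, the classical least-squares step mentioned above is a linear solve of the measured absorbance spectrum against unit-concentration reference spectra of the analytes in a chosen spectral window; the sketch below uses invented reference bands and is not the instrument's actual calibration.

        # Generic classical least-squares (CLS) quantification:  A = K c  =>  c = lstsq(K, A)
        import numpy as np

        rng = np.random.default_rng(1)
        wavenumbers = np.linspace(900, 1200, 300)          # analysis window (cm-1), arbitrary

        def band(center, width):
            return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

        # Invented unit-concentration reference spectra for three analytes.
        K = np.column_stack([band(965, 8), band(1030, 10), band(1100, 12)])

        true_c = np.array([2.0, 0.5, 1.2])                 # path-averaged concentrations (a.u.)
        measured = K @ true_c + rng.normal(0, 0.005, size=wavenumbers.size)

        c_hat, *_ = np.linalg.lstsq(K, measured, rcond=None)
        print(c_hat)                                       # approximately [2.0, 0.5, 1.2]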

  6. QCD corrections to decay-lepton polar and azimuthal angular distributions in e+e- --> tt(bar) in the soft-gluon approximation

    NASA Astrophysics Data System (ADS)

    Rindani, Saurabh D.

    2002-04-01

    QCD corrections to order α_s in the soft-gluon approximation to angular distributions of decay charged leptons in the process e+e- --> t t(bar), followed by semileptonic decay of t or t(bar), are obtained in the e+e- centre-of-mass frame. As compared to distributions in the top rest frame, these have the advantage that they would allow direct comparison with experiment without the need to reconstruct the top rest frame. The results also do not depend on the choice of a spin quantization axis for t or t(bar). An analytic expression for the triple distribution in the polar angle of t and the polar and azimuthal angles of the lepton is obtained. An analytic expression is also derived for the distribution in the charged-lepton polar angle. Numerical values are discussed for √s = 400, 800 and 1500 GeV.

  7. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

    In order to take full advantage of SAR images, it is necessary to obtain their high precision geolocation. To ensure the accuracy of image geometric correction and to extract effective mapping information from the images, precise image geolocation is important. This paper presents an improved analytical geolocation method (IAGM) that determines the high precision geolocation of each pixel in a digital SAR image. This method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims at solving the Range-Doppler (RD) model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high precision orthophoto, the results indicate that an accuracy of 50 m is attainable with this method. Error sources are analyzed and some recommendations for improving image location accuracy in future spaceborne SARs are given.

  8. Simultaneous quantitative analysis of olmesartan, amlodipine and hydrochlorothiazide in their combined dosage form utilizing classical and alternating least squares based chemometric methods.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S

    2016-03-01

    Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.

  9. Comparison of a two-dimensional adaptive-wall technique with analytical wall interference correction techniques

    NASA Technical Reports Server (NTRS)

    Mineck, Raymond E.

    1992-01-01

    A two dimensional airfoil model was tested in the adaptive wall test section of the NASA Langley 0.3 meter Transonic Cryogenic Tunnel (TCT) and in the ventilated test section of the National Aeronautical Establishment Two Dimensional High Reynolds Number Facility (HRNF). The primary goal of the tests was to compare different techniques (adaptive test section walls and classical, analytical corrections) to account for wall interference. Tests were conducted over a Mach number range from 0.3 to 0.8 at chord Reynolds numbers of 10 x 10^6, 15 x 10^6, and 20 x 10^6. The angle of attack was varied from about 12 degrees up to stall. Movement of the top and bottom test section walls was used to account for the wall interference in the HRNF tests. The test results are in good agreement.

  10. A Method for Calculating Viscosity and Thermal Conductivity of a Helium-Xenon Gas Mixture

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2006-01-01

    A method for calculating viscosity and thermal conductivity of a helium-xenon (He-Xe) gas mixture was employed, and results were compared to AiResearch (part of Honeywell) analytical data. The method of choice was that presented by Hirschfelder with Singh's third-order correction factor applied to thermal conductivity. Values for viscosity and thermal conductivity were calculated over a temperature range of 400 to 1200 K for He-Xe gas mixture molecular weights of 20.183, 39.94, and 83.8 kg/kmol. First-order values for both transport properties were in good agreement with AiResearch analytical data. Third-order-corrected thermal conductivity values were all greater than AiResearch data, but were considered to be a better approximation of thermal conductivity because higher-order effects of mass and temperature were taken into consideration. Viscosity, conductivity, and Prandtl number were then compared to experimental data presented by Taylor.

  11. Analysis of modal behavior at frequency cross-over

    NASA Astrophysics Data System (ADS)

    Costa, Robert N., Jr.

    1994-11-01

    The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.

  12. Fully synchronous solutions and the synchronization phase transition for the finite-N Kuramoto model

    NASA Astrophysics Data System (ADS)

    Bronski, Jared C.; DeVille, Lee; Jip Park, Moon

    2012-09-01

    We present a detailed analysis of the stability of phase-locked solutions to the Kuramoto system of oscillators. We derive an analytical expression counting the dimension of the unstable manifold associated to a given stationary solution. From this we are able to derive a number of consequences, including analytic expressions for the first and last frequency vectors to phase-lock, upper and lower bounds on the probability that a randomly chosen frequency vector will phase-lock, and very sharp results on the large N limit of this model. One of the surprises in this calculation is that for frequencies that are Gaussian distributed, the correct scaling for full synchrony is not the one commonly studied in the literature; rather, there is a logarithmic correction to the scaling which is related to the extremal value statistics of the random frequency vector.
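    As a small numerical companion to the analysis described above (not the authors' derivation), the sketch below integrates a finite-N Kuramoto system and monitors the order parameter to check whether the chosen frequency vector phase-locks; the frequencies, coupling strength and integration scheme are arbitrary choices.

        # Finite-N Kuramoto model: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
        import numpy as np

        rng = np.random.default_rng(2)
        N, K, dt, steps = 50, 2.0, 0.01, 20000

        omega = rng.normal(0.0, 0.5, N)                 # Gaussian natural frequencies
        theta = rng.uniform(0, 2 * np.pi, N)

        for _ in range(steps):                          # explicit Euler integration
            coupling = (K / N) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
            theta += dt * (omega + coupling)

        r = np.abs(np.mean(np.exp(1j * theta)))         # order parameter: near 1 when phase-locked
        print(f"order parameter r = {r:.3f}")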

  13. Using Maps in Web Analytics to Evaluate the Impact of Web-Based Extension Programs

    ERIC Educational Resources Information Center

    Veregin, Howard

    2015-01-01

    Maps can be a valuable addition to the Web analytics toolbox for Extension programs that use the Web to disseminate information. Extension professionals use Web analytics tools to evaluate program impacts. Maps add a unique perspective through visualization and analysis of geographic patterns and their relationships to other variables. Maps can…

  14. Slurry sampling high-resolution continuum source electrothermal atomic absorption spectrometry for direct beryllium determination in soil and sediment samples after elimination of SiO interference by least-squares background correction.

    PubMed

    Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza

    2017-12-01

    In this work a simple, efficient, and environmentally-friendly method is proposed for determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. Fractional factorial design has been employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and optimization of measuring conditions. The effects of seven analytical variables including particle size, concentration of glycerol and HNO3 for stabilization and analyte extraction, respectively, the effect of ultrasonic agitation for slurry homogenization, concentration of chemical modifier, pyrolysis and atomization temperature were investigated by a 2^(7-3) replicate (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit being 0.016 mg/kg and characteristic mass 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 μg Rh and 50 μg citric acid was found satisfactory for the analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time of flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%. Copyright © 2017 Elsevier B.V. All rights reserved.
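    The mathematical correction referred to above is, in generic terms, a least-squares fit of a stored reference spectrum of the molecular interferent (plus a baseline) to the measured pixel absorbances, followed by subtraction of the fitted interferent contribution. The sketch below shows that generic idea with entirely synthetic spectra; it is not the instrument software or the authors' reference data.

        # Generic least-squares background correction around an analyte line.
        import numpy as np

        pixels = np.arange(200)                               # detector pixels around the line
        line = np.exp(-0.5 * ((pixels - 100) / 2.0) ** 2)     # analyte line profile (invented)
        interferent = 0.4 * np.sin(pixels / 9.0) ** 2         # stored interferent structure (invented)

        measured = 0.05 * line + 0.8 * interferent + 0.01     # analyte + interferent + offset

        # Design matrix: interferent reference plus constant and slope baseline terms.
        X = np.column_stack([interferent, np.ones(pixels.size), pixels.astype(float)])
        mask = np.abs(pixels - 100) > 6                       # fit only outside the analyte line
        coef, *_ = np.linalg.lstsq(X[mask], measured[mask], rcond=None)

        corrected = measured - coef[0] * interferent          # remove the fitted interferent only
        print(coef[0])                                        # ~0.8, the fitted interferent scaling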

  15. Analytical techniques for retrieval of atmospheric composition with the quadrupole mass spectrometer of the Sample Analysis at Mars instrument suite on Mars Science Laboratory

    NASA Astrophysics Data System (ADS)

    B. Franz, Heather; G. Trainer, Melissa; H. Wong, Michael; L. K. Manning, Heidi; C. Stern, Jennifer; R. Mahaffy, Paul; K. Atreya, Sushil; Benna, Mehdi; G. Conrad, Pamela; N. Harpold, Dan; A. Leshin, Laurie; A. Malespin, Charles; P. McKay, Christopher; Thomas Nolan, J.; Raaen, Eric

    2014-06-01

    The Sample Analysis at Mars (SAM) instrument suite is the largest scientific payload on the Mars Science Laboratory (MSL) Curiosity rover, which landed in Mars' Gale Crater in August 2012. As a miniature geochemical laboratory, SAM is well-equipped to address multiple aspects of MSL's primary science goal, characterizing the potential past or present habitability of Gale Crater. Atmospheric measurements support this goal through compositional investigations relevant to martian climate evolution. SAM instruments include a quadrupole mass spectrometer, a tunable laser spectrometer, and a gas chromatograph that are used to analyze martian atmospheric gases as well as volatiles released by pyrolysis of solid surface materials (Mahaffy et al., 2012). This report presents analytical methods for retrieving the chemical and isotopic composition of Mars' atmosphere from measurements obtained with SAM's quadrupole mass spectrometer. It provides empirical calibration constants for computing volume mixing ratios of the most abundant atmospheric species and analytical functions to correct for instrument artifacts and to characterize measurement uncertainties. Finally, we discuss differences in volume mixing ratios of the martian atmosphere as determined by SAM (Mahaffy et al., 2013) and Viking (Owen et al., 1977; Oyama and Berdahl, 1977) from an analytical perspective. Although the focus of this paper is atmospheric observations, much of the material concerning corrections for instrumental effects also applies to reduction of data acquired with SAM from analysis of solid samples. The Sample Analysis at Mars (SAM) instrument measures the composition of the martian atmosphere. Rigorous calibration of SAM's mass spectrometer was performed with relevant gas mixtures. Calibration included derivation of a new model to correct for electron multiplier effects. Volume mixing ratios for Ar and N2 obtained with SAM differ from those obtained with Viking. Differences between SAM and Viking volume mixing ratios are under investigation.

  16. Growing geometric reasoning in solving problems of analytical geometry through the mathematical communication problems to state Islamic university students

    NASA Astrophysics Data System (ADS)

    Mujiasih; Waluya, S. B.; Kartono; Mariani

    2018-03-01

    Skills in working on geometry problems greatly require the competence of Geometric Reasoning. As teacher candidates, State Islamic University (UIN) students need to have this Geometric Reasoning competence. When geometric reasoning in solving geometry problems has developed well, it is expected that students are able to write their ideas communicatively for the reader. A student's mathematical communication ability is therefore supposed to be usable as a marker of the growth of their Geometric Reasoning. Thus, the search for the growth of geometric reasoning in solving analytic geometry problems will be characterized by the growth of mathematical communication abilities, reflected in written work that is complete, correct and sequential. Based on qualitative research, this article is the result of a study that explores the problem: can the growth of geometric reasoning in solving analytic geometry problems be characterized by the growth of mathematical communication abilities? The main activities in this research were done through a series of steps: (1) The lecturer trained the students to work on analytic geometry problems that were not routine or algorithmic, but instead required high-level reasoning and were divergent/open ended. (2) Students were asked to work on the problems independently, in detail, completely, in order, and correctly. (3) Student answers were then corrected at each stage. (4) Then 6 students were taken as the subjects of this research. (5) Research subjects were interviewed and the researchers conducted triangulation. The results of this research: (1) Mathematics Education students of UIN Semarang had adequate mathematical communication ability; (2) this mathematical communication ability could be a marker of geometric reasoning in solving problems; and (3) the geometric reasoning of the UIN students had grown into a category that tends to be good.

  17. Accurate anharmonic zero-point energies for some combustion-related species from diffusion Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.

    Full dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic, zero point energies. Here, the resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower level electronic structure methods (B3LYP and MP2).

  18. Accurate Anharmonic Zero-Point Energies for Some Combustion-Related Species from Diffusion Monte Carlo.

    PubMed

    Harding, Lawrence B; Georgievskii, Yuri; Klippenstein, Stephen J

    2017-06-08

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. The resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).
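    The following toy sketch conveys the flavour of a diffusion Monte Carlo zero-point-energy estimate for a single one-dimensional harmonic oscillator in reduced units (exact ZPE = 0.5); it is not the 48-molecule, CCSD(T)-surface calculation of the paper, and the walker count, time step and simple branching scheme are arbitrary choices.

        # Toy diffusion Monte Carlo: ZPE of a 1-D harmonic oscillator (hbar = m = omega = 1).
        import numpy as np

        rng = np.random.default_rng(3)

        def potential(x):
            return 0.5 * x ** 2

        n_target, dt, n_steps = 2000, 0.01, 5000
        walkers = rng.normal(0.0, 1.0, n_target)
        e_ref = np.mean(potential(walkers))
        e_trace = []

        for step in range(n_steps):
            # Diffusion: Gaussian displacement with variance dt.
            walkers = walkers + rng.normal(0.0, np.sqrt(dt), walkers.size)

            # Branching: replicate or kill walkers according to exp(-dt * (V - E_ref)).
            weights = np.exp(-dt * (potential(walkers) - e_ref))
            copies = (weights + rng.uniform(0, 1, walkers.size)).astype(int)
            walkers = np.repeat(walkers, copies)

            # Population control: nudge E_ref to keep the walker count near the target.
            e_ref = np.mean(potential(walkers)) + 0.1 * (1.0 - walkers.size / n_target) / dt
            if step > n_steps // 2:
                e_trace.append(e_ref)

        print(f"DMC zero-point energy estimate: {np.mean(e_trace):.3f} (exact 0.5)")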

  19. Elegant Ince-Gaussian breathers in strongly nonlocal nonlinear media

    NASA Astrophysics Data System (ADS)

    Bai, Zhi-Yong; Deng, Dong-Mei; Guo, Qi

    2012-06-01

    A novel class of optical breathers, called elegant Ince-Gaussian breathers, is presented in this paper. They are exact analytical solutions to Snyder and Mitchell's model in an elliptic coordinate system, and their transverse structures are described by Ince polynomials with complex arguments and a Gaussian function. We provide convincing evidence for the correctness of the solutions and the existence of the breathers by comparing the analytical solutions with numerical simulations of the nonlocal nonlinear Schrödinger equation.

  20. Accurate anharmonic zero-point energies for some combustion-related species from diffusion Monte Carlo

    DOE PAGES

    Harding, Lawrence B.; Georgievskii, Yuri; Klippenstein, Stephen J.

    2017-05-17

    Full-dimensional analytic potential energy surfaces based on CCSD(T)/cc-pVTZ calculations have been determined for 48 small combustion-related molecules. The analytic surfaces have been used in Diffusion Monte Carlo calculations of the anharmonic zero-point energies. Here, the resulting anharmonicity corrections are compared to vibrational perturbation theory results based both on the same level of electronic structure theory and on lower-level electronic structure methods (B3LYP and MP2).

  1. Generalized analytical solutions to multispecies transport equations with scale-dependent dispersion coefficients subject to time-dependent boundary conditions

    NASA Astrophysics Data System (ADS)

    Chen, J. S.; Chiang, S. Y.; Liang, C. P.

    2017-12-01

    It is essential to develop multispecies transport analytical models based on a set of advection-dispersion equations (ADEs) coupled with sequential first-order decay reactions for the synchronous prediction of plume migrations of both parent and its daughter species of decaying contaminants such as radionuclides, dissolved chlorinated organic compounds, pesticides and nitrogen. Although several analytical models for multispecies transport have already been reported, those currently available in the literature have primarily been derived based on ADEs with constant dispersion coefficients. However, there have been a number of studies demonstrating that the dispersion coefficients increase with the solute travel distance as a consequence of variation in the hydraulic properties of the porous media. This study presents novel analytical models for multispecies transport with distance-dependent dispersion coefficients. The correctness of the derived analytical models is confirmed by comparing them against the numerical models. Results show perfect agreement between the analytical and numerical models. Comparison of our new analytical model for multispecies transport with scale-dependent dispersion to an analytical model with constant dispersion is made to illustrate the effects of the dispersion coefficients on the multispecies transport of decaying contaminants.
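
    As a minimal sketch of the reaction coupling only, leaving transport and the scale-dependent dispersion of the paper aside: the sequential first-order decay chain that links parent and daughter species has the classical Bateman solution, which can be checked against a numerical ODE integration. The three-species chain, rate constants, and initial concentration below are hypothetical, not values from the paper.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Hypothetical three-species decay chain A -> B -> C (rate constants k1, k2, k3),
      # illustrating only the sequential first-order coupling shared by multispecies models.
      k1, k2, k3 = 0.05, 0.02, 0.01   # 1/day, assumed values
      c0 = 1.0                        # initial concentration of the parent species A

      def bateman(t):
          """Closed-form (Bateman) solution for the three-species chain."""
          cA = c0 * np.exp(-k1 * t)
          cB = c0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
          cC = c0 * k1 * k2 * (
              np.exp(-k1 * t) / ((k2 - k1) * (k3 - k1))
              + np.exp(-k2 * t) / ((k1 - k2) * (k3 - k2))
              + np.exp(-k3 * t) / ((k1 - k3) * (k2 - k3)))
          return np.array([cA, cB, cC])

      def rhs(t, c):
          """Coupled ODEs: each daughter is produced by decay of its parent."""
          return [-k1 * c[0],
                  k1 * c[0] - k2 * c[1],
                  k2 * c[1] - k3 * c[2]]

      t = np.linspace(0.0, 200.0, 201)
      numerical = solve_ivp(rhs, (t[0], t[-1]), [c0, 0.0, 0.0], t_eval=t, rtol=1e-8)
      print(np.max(np.abs(numerical.y - bateman(t))))   # analytical vs. numerical check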

  2. [The subject matters concerned with use of simplified analytical systems from the perspective of the Japanese Association of Medical Technologists].

    PubMed

    Morishita, Y

    2001-05-01

    The issues involved in using so-called simplified analytical systems effectively are discussed from the perspective of a laboratory technician. 1. Data from simplified analytical systems should agree with those of designated reference methods so that discrepancies do not arise between data from different laboratories. 2. The accuracy of results measured with simplified analytical systems is difficult to scrutinize thoroughly and correctly using quality-control surveillance procedures based on stored pooled serum or partly processed blood. 3. It is necessary to present guidelines on the content of the evaluations needed to guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized jointly by laboratory technicians and vendor technicians. 5. Attention is also drawn to the fact that simplified analytical systems cost considerably more than routine methods using liquid reagents. 6. It is hoped that various substances in human serum, such as cytokines, hormones, tumor markers, and vitamins, can also be measured by simplified analytical systems.

  3. SU-F-BRE-14: Uncertainty Analysis for Dose Measurements Using OSLD NanoDots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Alvarez, P; Stingo, F

    2014-06-15

    Purpose: Optically stimulated luminescent dosimeters (OSLDs) are increasingly popular dosimeters for research and clinical applications. They are also used by the Radiological Physics Center (RPC) for remote auditing of machine output. In this work we robustly calculated the reproducibility and uncertainty of the OSLD nanoDot. Methods: For the RPC dose calculation, raw readings are corrected for depletion, element sensitivity, fading, linearity, and energy. System calibration is determined for the experimental OSLDs irradiated at different institutions by using OSLDs irradiated by the RPC under reference conditions (i.e., standards): 1 Gy in a Cobalt beam. The intra-dot and inter-dot reproducibilities (coefficient of variation) were determined from the history of RPC readings of these standards. The standard deviation of the corrected OSLD signal was then calculated analytically using a recursive formalism that did not rely on the normality assumption of the underlying uncertainties, or on any type of mathematical approximation. This analytical uncertainty was compared to that empirically estimated from >45,000 RPC beam audits. Results: The intra-dot variability was found to be 0.59%, with only a small variation between readers. Inter-dot variability was found to be 0.85%. The uncertainty in each of the individual correction factors was empirically determined. When the raw counts from each OSLD were adjusted for the appropriate correction factors, the analytically determined coefficient of variation was 1.8% over a range of institutional irradiation conditions that are seen at the RPC. This is reasonably consistent with the empirical observations of the RPC, where the coefficient of variation of the measured beam outputs is 1.6% (photons) and 1.9% (electrons). Conclusion: OSLD nanoDots provide sufficiently good precision for a wide range of applications, including the RPC remote monitoring program for megavoltage beams. This work was supported by PHS grant CA10953 awarded by the NIH (DHHS).
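
    The recursive, distribution-free formalism described above is not reproduced here; as a simpler point of reference, a first-order propagation for a chain of independent multiplicative correction factors adds relative variances in quadrature. The intra-dot and inter-dot entries below are the 0.59% and 0.85% quoted in the abstract; the remaining coefficients of variation are assumed for illustration.

      import numpy as np

      # Coefficients of variation for a multiplicative OSLD correction chain:
      # dose is proportional to raw signal x f_depletion x f_sensitivity x f_fading
      # x f_linearity x f_energy.  Intra-dot and inter-dot values are from the abstract;
      # the rest are assumed.
      cvs = {
          "reading (intra-dot)": 0.0059,
          "element sensitivity (inter-dot)": 0.0085,
          "fading": 0.010,
          "linearity": 0.005,
          "energy": 0.010,
      }

      # First-order propagation for independent multiplicative factors:
      # relative variances add, so CV_total is roughly sqrt(sum of CV_i^2).
      cv_total = np.sqrt(sum(cv ** 2 for cv in cvs.values()))
      print(f"approximate combined coefficient of variation: {cv_total:.2%}")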

  4. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE PAGES

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.; ...

    2017-09-07

    Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.

  5. An efficient approach for treating composition-dependent diffusion within organic particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Meara, Simon; Topping, David O.; Zaveri, Rahul A.

    Mounting evidence demonstrates that under certain conditions the rate of component partitioning between the gas and particle phase in atmospheric organic aerosol is limited by particle-phase diffusion. To date, however, particle-phase diffusion has not been incorporated into regional atmospheric models. An analytical rather than numerical solution to diffusion through organic particulate matter is desirable because of its comparatively small computational expense in regional models. Current analytical models assume diffusion to be independent of composition and therefore use a constant diffusion coefficient. To realistically model diffusion, however, it should be composition-dependent (e.g. due to the partitioning of components that plasticise, vitrify or solidify). This study assesses the modelling capability of an analytical solution to diffusion corrected to account for composition dependence against a numerical solution. Results show reasonable agreement when the gas-phase saturation ratio of a partitioning component is constant and particle-phase diffusion limits partitioning rate (<10% discrepancy in estimated radius change). However, when the saturation ratio of the partitioning component varies, a generally applicable correction cannot be found, indicating that existing methodologies are incapable of deriving a general solution. Until such time as a general solution is found, caution should be given to sensitivity studies that assume constant diffusivity. Furthermore, the correction was implemented in the polydisperse, multi-process Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) and is used to illustrate how the evolution of number size distribution may be accelerated by condensation of a plasticising component onto viscous organic particles.

  6. An empirical, graphical, and analytical study of the relationship between vegetation indices. [derived from LANDSAT data

    NASA Technical Reports Server (NTRS)

    Lautenschlager, L.; Perry, C. R., Jr. (Principal Investigator)

    1981-01-01

    The development of formulae for the reduction of multispectral scanner measurements to a single value (vegetation index) for predicting and assessing vegetative characteristics is addressed. The origin, motivation, and derivation of some four dozen vegetation indices are summarized. Empirical, graphical, and analytical techniques are used to investigate the relationships among the various indices. It is concluded that many vegetative indices are very similar, some being simple algebraic transforms of others.
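
    As a concrete illustration of how some indices are simple algebraic transforms of others, the normalized difference vegetation index (NDVI) and the simple ratio index (RVI) computed from the same near-infrared and red bands satisfy NDVI = (RVI - 1)/(RVI + 1), so the two are monotonically related. The reflectance values in this sketch are hypothetical.

      import numpy as np

      # Hypothetical near-infrared and red reflectances for four pixels.
      nir = np.array([0.45, 0.50, 0.30, 0.60])
      red = np.array([0.10, 0.08, 0.20, 0.05])

      rvi = nir / red                    # simple ratio vegetation index
      ndvi = (nir - red) / (nir + red)   # normalized difference vegetation index

      # NDVI is an algebraic transform of RVI: NDVI = (RVI - 1) / (RVI + 1).
      assert np.allclose(ndvi, (rvi - 1.0) / (rvi + 1.0))
      print(np.round(ndvi, 3), np.round(rvi, 2))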

  7. A Relationship Between the 2-body Energy of Kaxiras Pandey and Pearson Takai Halicioglu Tiller Potential Functions

    NASA Astrophysics Data System (ADS)

    Lim, Teik-Cheng

    2004-01-01

    A parametric relationship between the Pearson Takai Halicioglu Tiller (PTHT) and the Kaxiras Pandey (KP) empirical potential energy functions is developed for the case of 2-body interaction. The need for such relationship arises when preferred parametric data and adopted software correspond to different potential functions. The analytical relationship was obtained by equating the potential functions' derivatives at zeroth, first and second order with respect to the interatomic distance at the equilibrium bond length, followed by comparison of coefficients in the repulsive and attractive terms. Plots of non-dimensional 2-body energy versus the nondimensional interatomic distance verified the analytical relationships developed herein. The discrepancy revealed in theoretical plots suggests that the 2-body PTHT and KP potentials are more suitable for curve-fitting "softer" and "harder" bonds respectively.
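
    The PTHT and KP functional forms are not reproduced in the abstract, so the sketch below applies the same matching procedure (equating the potential and its first and second derivatives at the equilibrium bond length) to two stand-in 2-body potentials, Lennard-Jones and Morse; the resulting parameter relations are illustrative only.

      import sympy as sp

      r, eps, sigma, De, a = sp.symbols("r epsilon sigma D_e a", positive=True)
      r_e = 2 ** sp.Rational(1, 6) * sigma   # equilibrium separation of the LJ minimum

      # Stand-in 2-body potentials (the PTHT and KP forms are not given in the abstract).
      V_lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
      V_morse = De * ((1 - sp.exp(-a * (r - r_e))) ** 2 - 1)   # minimum placed at r_e

      # The first-derivative condition holds by construction (both minima sit at r_e),
      # so equate the zeroth and second derivatives there and solve for D_e and a.
      eq_value = sp.Eq(V_lj.subs(r, r_e), V_morse.subs(r, r_e))
      eq_curvature = sp.Eq(sp.diff(V_lj, r, 2).subs(r, r_e),
                           sp.diff(V_morse, r, 2).subs(r, r_e))
      print(sp.solve([eq_value, eq_curvature], [De, a], dict=True))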

  8. A combinatorial perspective of the protein inference problem.

    PubMed

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2013-01-01

    In a shotgun proteomics experiment, proteins are the most biologically meaningful output. The success of proteomics studies depends on the ability to accurately and efficiently identify proteins. Many methods have been proposed to facilitate the identification of proteins from peptide identification results. However, the relationship between protein identification and peptide identification has not been thoroughly explained before. In this paper, we devote ourselves to a combinatorial perspective of the protein inference problem. We employ combinatorial mathematics to calculate the conditional protein probabilities (protein probability means the probability that a protein is correctly identified) under three assumptions, which lead to a lower bound, an upper bound, and an empirical estimation of protein probabilities, respectively. The combinatorial perspective enables us to obtain an analytical expression for protein inference. Our method achieves comparable results with ProteinProphet in a more efficient manner in experiments on two data sets of standard protein mixtures and two data sets of real samples. Based on our model, we study the impact of unique peptides and degenerate peptides (degenerate peptides are peptides shared by at least two proteins) on protein probabilities. Meanwhile, we also study the relationship between our model and ProteinProphet. We name our program ProteinInfer. Its Java source code, our supplementary document and experimental results are available at: http://bioinformatics.ust.hk/proteininfer.
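
    For comparison with the combinatorial treatment described above, a widely used simplification assumes uniquely mapped, independent peptides, in which case a protein's probability is one minus the probability that every one of its peptides is incorrect. This ignores degenerate peptides, which is precisely the complication the paper's bounds address; the peptide probabilities below are hypothetical.

      from functools import reduce

      def naive_protein_probability(peptide_probs):
          """Probability that a protein is correctly identified, assuming its peptides
          are uniquely mapped and independent: one minus the chance that every peptide
          identification is wrong."""
          return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), peptide_probs, 1.0)

      # Hypothetical peptide identification probabilities for one protein.
      print(naive_protein_probability([0.9, 0.7, 0.5]))   # -> 0.985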

  9. Path durations for use in the stochastic‐method simulation of ground motions

    USGS Publications Warehouse

    Boore, David M.; Thompson, Eric M.

    2014-01-01

    The stochastic method of ground‐motion simulation assumes that the energy in a target spectrum is spread over a duration DT. DT is generally decomposed into the duration due to source effects (DS) and to path effects (DP). For the most commonly used source, seismological theory directly relates DS to the source corner frequency, accounting for the magnitude scaling of DT. In contrast, DP is related to propagation effects that are more difficult to represent by analytic equations based on the physics of the process. We are primarily motivated to revisit DT because the function currently employed by many implementations of the stochastic method for active tectonic regions underpredicts observed durations, leading to an overprediction of ground motions for a given target spectrum. Further, there is some inconsistency in the literature regarding which empirical duration corresponds to DT. Thus, we begin by clarifying the relationship between empirical durations and DT as used in the first author’s implementation of the stochastic method, and then we develop a new DP relationship. The new DP function gives significantly longer durations than in the previous DP function, but the relative contribution of DP to DT still diminishes with increasing magnitude. Thus, this correction is more important for small events or subfaults of larger events modeled with the stochastic finite‐fault method.

  10. Eliciting the Functional Processes of Apologizing for Errors in Health Care: Developing an Explanatory Model of Apology.

    PubMed

    Prothero, Marie M; Morse, Janice M

    2017-01-01

    The purpose of this article was to analyze the concept development of apology in the context of errors in health care, the administrative response, and the policy and format/process of the subsequent apology. Using pragmatic utility and a systematic review of the literature, 29 articles and one book provided attributes involved in apologizing. Analytic questions were developed to guide the data synthesis, and the types of apologies used in different circumstances were identified. The antecedents of apologizing, and the attributes and outcomes, were identified. A model was constructed illustrating the components of a complete apology, other types of apologies, and the ramifications/outcomes of each. Clinical implications of developing formal policies for correcting medical errors through apologies are recommended. Defining the essential elements of apology is the first step in establishing a just culture in health care. Respect for patient-centered care reduces the retaliatory consequences following an error and may even restore the physician-patient relationship.

  11. Nanodomain induced anomalous magnetic and electronic transport properties of LaBaCo2O5.5+δ highly epitaxial thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz-Zepeda, F.; Ma, C.; Bahena Uribe, D.

    2014-01-14

    A giant magnetoresistance effect (∼46% at 20 K under 7 T) and anomalous magnetic properties were found in a highly epitaxial double perovskite LaBaCo2O5.5+δ (LBCO) thin film on (001) MgO. Aberration-corrected electron microscopy and related analytical techniques were employed to understand the nature of these unusual physical properties. The as-grown film is epitaxial with the c-axis of the LBCO structure lying in the film plane and with an interface relationship given by (100)LBCO || (001)MgO and [001]LBCO || [100]MgO or [010]MgO. Orderly oxygen vacancies were observed by line-profile electron energy loss spectroscopy and by atomic resolution imaging. In particular, oxygen vacancy and nanodomain structures were found to have a crucial effect on the electronic transport and magnetic properties.

  12. Smoothed dissipative particle dynamics model for mesoscopic multiphase flows in the presence of thermal fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huan; Baker, Nathan A.; Wu, Lei

    2016-08-05

    Thermal fluctuations cause perturbations of fluid-fluid interfaces and highly nonlinear hydrodynamics in multiphase flows. In this work, we develop a novel multiphase smoothed dissipative particle dynamics (SDPD) model. This model accounts for both bulk hydrodynamics and interfacial fluctuations. Interfacial surface tension is modeled by imposing a pairwise force between SDPD particles. We show that the relationship between the model parameters and surface tension, previously derived under the assumption of zero thermal fluctuation, is accurate for fluid systems at low temperature but overestimates the surface tension for intermediate and large thermal fluctuations. To analyze the effect of thermal fluctuations on surface tension, we construct a coarse-grained Euler lattice model based on the mean field theory and derive a semi-analytical formula to directly relate the surface tension to model parameters for a wide range of temperatures and model resolutions. We demonstrate that the present method correctly models the dynamic processes, such as bubble coalescence and capillary spectra across the interface.

  13. An Overview of Learning Analytics

    ERIC Educational Resources Information Center

    Zilvinskis, John; Willis, James, III; Borden, Victor M. H.

    2017-01-01

    The purpose of this chapter is to provide administrators and faculty with an understanding of learning analytics and its relationship to existing roles and functions so better institutional decisions can be made about investments and activities related to these technologies.

  14. Clustering of galaxies with f(R) gravity

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; Faizal, Mir; Hameeda, Mir; Pourhassan, Behnam; Salzano, Vincenzo; Upadhyay, Sudhaker

    2018-02-01

    Based on thermodynamics, we discuss the galactic clustering of the expanding Universe by assuming the gravitational interaction through the modified Newtonian potential given by f(R) gravity. We compute the corrected N-particle partition function analytically. The corrected partition function leads to more exact equations of state for the system. By assuming that the system follows quasi-equilibrium, we derive the exact distribution function that exhibits the f(R) correction. Moreover, we evaluate the critical temperature and discuss the stability of the system. We also observe the effects of the f(R) correction on the power-law behaviour of the particle-particle correlation function. In order to check the feasibility of an f(R) gravity approach to the clustering of galaxies, we compare our results with an observational galaxy cluster catalogue.

  15. A mass-balanced definition of corrected retention volume in gas chromatography.

    PubMed

    Kurganov, A

    2007-05-25

    The mass balance equation of a chromatographic system using a compressible mobile phase has been formulated in terms of the mass flow of the mobile phase instead of the traditional volumetric flow, allowing the equation to be solved in analytical form. The relation obtained correlates the retention volume measured under ambient conditions with the partition coefficient of the solute. Compared to the relation for the ideal chromatographic system, the derived equation contains an additional correction term accounting for the compressibility of the mobile phase. When the retention volume is measured at the mean column pressure and column temperature, the correction term reduces to unity and the relation simplifies to that known for the ideal system. This volume, according to the International Union of Pure and Applied Chemistry (IUPAC), is called the corrected retention volume.
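
    For reference, a sketch of the conventional (ideal-system) relations the abstract compares against: the James-Martin compressibility factor j applied to the retention volume measured at the column outlet. The pressures, flow rate, and retention time are assumed values, and this is the standard textbook correction rather than the mass-balance-based definition derived in the paper.

      def james_martin_j(p_inlet, p_outlet):
          """Conventional James-Martin gas-compressibility correction factor."""
          p = p_inlet / p_outlet
          return 1.5 * (p ** 2 - 1.0) / (p ** 3 - 1.0)

      # Conventional corrected retention volume: V_R0 = j * F_c * t_R, where F_c is
      # the outlet flow rate corrected to column temperature.  Values are assumed.
      t_r = 120.0     # retention time, s
      f_c = 0.5       # corrected outlet flow rate, mL/s
      j = james_martin_j(p_inlet=180.0, p_outlet=100.0)   # pressures in kPa
      print(f"j = {j:.3f}, corrected retention volume = {j * f_c * t_r:.1f} mL")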

  16. ENVIRONMENTAL ANALYTICAL CHEMISTRY OF ...

    EPA Pesticide Factsheets

    Within the scope of a number of emerging contaminant issues in environmental analysis, one area that has received a great deal of public interest has been the assessment of the role of pharmaceuticals and personal care products (PPCPs) as stressors and agents of change in ecosystems as well as their role in unplanned human exposure. The relationship between personal actions and the occurrence of PPCPs in the environment is clear-cut and comprehensible to the public. In this overview, we attempt to examine the separations aspect of the analytical approach to the vast array of potential analytes among this class of compounds. We also highlight the relationship between these compounds and endocrine disrupting compounds (EDCs) and between PPCPs and EDCs and the more traditional environmental analytes such as the persistent organic pollutants (POPs). Although the spectrum of chemical behavior extends from hydrophobic to hydrophilic, the current focus has shifted to moderately and highly polar analytes. Thus, emphasis on HPLC and LC/MS has grown and MS/MS has become a detection technique of choice with either electrospray ionization or atmospheric pressure chemical ionization. This contrasts markedly with the benchmark approach of capillary GC, GC/MS and electron ionization in traditional environmental analysis. The expansion of the analyte list has fostered new vigor in the development of environmental analytical chemistry, modernized the range of tools appli

  17. Cell-model prediction of the melting of a Lennard-Jones solid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holian, B.L.

    The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.

  18. CLOSURE REPORT FOR CORRECTIVE ACTION UNIT 528: POLYCHLORINATED BIPHENYLS CONTAMINATION NEVADA TEST SITE, NEVADA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BECHTEL NEVADA

    2006-09-01

    This Closure Report (CR) describes the closure activities performed at CAU 528, Polychlorinated Biphenyls Contamination, as presented in the Nevada Division of Environmental Protection (NDEP)-approved Corrective Action Plan (CAP) (U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office [NNSA/NSO], 2005). The approved closure alternative was closure in place with administrative controls. This CR provides a summary of the completed closure activities, documentation of waste disposal, and analytical data to confirm that the remediation goals were met.

  19. Learning in tele-autonomous systems using Soar

    NASA Technical Reports Server (NTRS)

    Laird, John E.; Yager, Eric S.; Tuck, Christopher M.; Hucka, Michael

    1989-01-01

    Robo-Soar is a high-level robot arm control system implemented in Soar. Robo-Soar learns to perform simple block manipulation tasks using advice from a human. Following learning, the system is able to perform similar tasks without external guidance. It can also learn to correct its knowledge, using its own problem solving in addition to outside guidance. Robo-Soar corrects its knowledge by accepting advice about relevance of features in its domain, using a unique integration of analytic and empirical learning techniques.

  20. Detection limits of quantitative and digital PCR assays and their influence in presence-absence surveys of environmental DNA.

    PubMed

    Hunter, Margaret E; Dorazio, Robert M; Butterfield, John S S; Meigs-Friend, Gaia; Nico, Leo G; Ferrante, Jason A

    2017-03-01

    A set of universal guidelines is needed to determine the limit of detection (LOD) in PCR-based analyses of low-concentration DNA. In particular, environmental DNA (eDNA) studies require sensitive and reliable methods to detect rare and cryptic species through shed genetic material in environmental samples. Current strategies for assessing detection limits of eDNA are either too stringent or subjective, possibly resulting in biased estimates of species' presence. Here, a conservative LOD analysis grounded in analytical chemistry is proposed to correct for overestimated DNA concentrations predominantly caused by the concentration plateau, a nonlinear relationship between expected and measured DNA concentrations. We have used statistical criteria to establish formal mathematical models for both quantitative and droplet digital PCR. To assess the method, a new Grass Carp (Ctenopharyngodon idella) TaqMan assay was developed and tested on both PCR platforms using eDNA in water samples. The LOD adjustment reduced Grass Carp occupancy and detection estimates while increasing uncertainty, indicating that caution needs to be applied to eDNA data without LOD correction. Compared to quantitative PCR, digital PCR had higher occurrence estimates due to increased sensitivity and dilution of inhibitors at low concentrations. Without accurate LOD correction, species occurrence and detection probabilities based on eDNA estimates are prone to a source of bias that cannot be reduced by an increase in sample size or PCR replicates. Other applications also could benefit from a standardized LOD, such as GMO food analysis and forensic and clinical diagnostics. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
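
    The paper's formal LOD models for quantitative and droplet digital PCR are not reproduced here; as a point of reference, a common calibration-curve convention from analytical chemistry estimates LOD as 3.3 times the residual standard deviation divided by the slope. The standard-curve data below are hypothetical.

      import numpy as np

      # Hypothetical standard curve for a low-concentration DNA target:
      # measured signal (arbitrary units) versus known concentration (copies/uL).
      conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
      signal = np.array([0.02, 0.11, 0.23, 0.41, 0.85, 1.64])

      slope, intercept = np.polyfit(conc, signal, 1)
      residuals = signal - (slope * conc + intercept)
      s_res = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))

      # Common calibration-based conventions (not the paper's statistical model):
      lod = 3.3 * s_res / slope    # limit of detection
      loq = 10.0 * s_res / slope   # limit of quantification
      print(f"LOD about {lod:.1f} copies/uL, LOQ about {loq:.1f} copies/uL")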

  1. A New Correction Technique for Strain-Gage Measurements Acquired in Transient-Temperature Environments

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1996-01-01

    Significant strain-gage errors may exist in measurements acquired in transient-temperature environments if conventional correction methods are applied. As heating or cooling rates increase, temperature gradients between the strain-gage sensor and substrate surface increase proportionally. These temperature gradients introduce strain-measurement errors that are currently neglected in both conventional strain-correction theory and practice. Therefore, the conventional correction theory has been modified to account for these errors. A new experimental method has been developed to correct strain-gage measurements acquired in environments experiencing significant temperature transients. The new correction technique has been demonstrated through a series of tests in which strain measurements were acquired for temperature-rise rates ranging from 1 to greater than 100 degrees F/sec. Strain-gage data from these tests have been corrected with both the new and conventional methods and then compared with an analysis. Results show that, for temperature-rise rates greater than 10 degrees F/sec, the strain measurements corrected with the conventional technique produced strain errors that deviated from analysis by as much as 45 percent, whereas results corrected with the new technique were in good agreement with analytical results.
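
    The new transient-temperature technique itself is not detailed in the abstract, so the sketch below shows only the conventional quasi-static correction it modifies: subtracting the gage's thermal output (apparent strain) at the measured temperature and rescaling for the gage-factor change. The thermal-output polynomial and gage-factor variation are hypothetical stand-ins for manufacturer data.

      import numpy as np

      # Hypothetical thermal-output (apparent-strain) polynomial and gage-factor
      # variation of the kind supplied on a strain-gage data sheet (microstrain vs deg F).
      thermal_output_coeffs = [-1.2e-5, 2.4e-3, -0.15, 3.0]

      def thermal_output(temp_f):
          return np.polyval(thermal_output_coeffs, temp_f)

      def gage_factor(temp_f, gf_room=2.08, slope_per_degf=1.0e-4):
          return gf_room * (1.0 + slope_per_degf * (temp_f - 75.0))

      def conventional_correction(indicated_microstrain, temp_f, gf_instrument=2.08):
          """Quasi-static correction: remove apparent strain, rescale the gage factor.
          Assumes the gage and substrate share one slowly varying temperature, which is
          the assumption that breaks down at high temperature-rise rates."""
          corrected = indicated_microstrain - thermal_output(temp_f)
          return corrected * gf_instrument / gage_factor(temp_f)

      print(conventional_correction(850.0, temp_f=400.0))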

  2. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances will be discussed, which may serve as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects, and recovery have to be approached differently. Highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  4. An analytical X-ray CdTe detector response matrix for incomplete charge collection correction for photon energies up to 300 keV

    NASA Astrophysics Data System (ADS)

    Kurková, Dana; Judas, Libor

    2018-05-01

    Gamma and X-ray energy spectra measured with semiconductor detectors suffer from various distortions, one of them being so-called "tailing" caused by an incomplete charge collection. Using the Hecht equation, a response matrix of size 321 × 321 was constructed which was used to correct the effect of incomplete charge collection. The correction matrix was constructed analytically for an arbitrary energy bin and the size of the energy bin thus defines the width of the spectral window. The correction matrix can be applied separately from other possible spectral corrections or it can be incorporated into an already existing response matrix of the detector. The correction was tested and its adjustable parameters were optimized on the line spectra of 57Co measured with a cadmium telluride (CdTe) detector in a spectral range from 0 up to 160 keV. The best results were obtained when the values of the free path of holes were spread over a range from 0.4 to 1.0 cm and weighted by a Gauss function. The model with the optimized parameter values was then used to correct the line spectra of 152Eu in a spectral range from 0 up to 530 keV. An improvement in the energy resolution at full width at half maximum from 2.40 % ± 0.28 % to 0.96 % ± 0.28 % was achieved at 344.27 keV. Spectra of "narrow spectrum series" beams, N120, N150, N200, N250 and N300, generated with tube voltages of 120 kV, 150 kV, 200 kV, 250 kV and 300 kV respectively, and measured with the CdTe detector, were corrected in the spectral range from 0 to 160 keV (N120 and N150) and from 0 to 530 keV (N200, N250, N300). All the measured spectra correspond both qualitatively and quantitatively to the available reference data after the correction. To obtain better correspondence between N150, N200, N250 and N300 spectra and the reference data, lower values of the free paths of holes (range from 0.16 to 0.65 cm) were used for X-ray spectra correction, which suggests energy dependence of the phenomenon.
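
    A minimal sketch of how one column of such a response matrix can be built from the Hecht relation for a planar detector, assuming a uniform interaction-depth distribution and ignoring attenuation; the thickness, carrier free paths, and binning below are illustrative, not the fitted values reported above, and the Gaussian weighting over hole free paths described in the abstract is omitted.

      import numpy as np

      def hecht_cce(depth, thickness, lambda_e, lambda_h):
          """Charge-collection efficiency of a planar detector (Hecht relation):
          electrons drift toward the anode at x = L, holes toward the cathode at x = 0."""
          return (lambda_e / thickness * (1.0 - np.exp(-(thickness - depth) / lambda_e))
                  + lambda_h / thickness * (1.0 - np.exp(-depth / lambda_h)))

      # Illustrative detector parameters (not the fitted values reported above).
      L = 0.1        # CdTe thickness, cm
      lam_e = 2.0    # electron mean free path, cm
      lam_h = 0.7    # hole mean free path, cm

      # One column of a response matrix for a monoenergetic line E0: interaction
      # depths map to recorded energies E0 * eta(x); attenuation with depth is ignored.
      E0 = 122.0     # keV
      depths = np.linspace(0.0, L, 2001)
      recorded = E0 * hecht_cce(depths, L, lam_e, lam_h)
      hist, edges = np.histogram(recorded, bins=321, range=(0.0, 160.5))
      print(f"full-energy peak near {edges[hist.argmax()]:.1f} keV, plus a low-energy tail")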

  5. Development of a drift-correction procedure for a direct-reading spectrometer

    NASA Technical Reports Server (NTRS)

    Chapman, G. B., II; Gordon, W. A.

    1977-01-01

    A procedure which provides automatic correction for drifts in the radiometric sensitivity of each detector channel in a direct-reading emission spectrometer is described. Such drifts are customarily controlled by the regular analyses of standards, which provide corrections for changes in the excitational, optical, and electronic components of the instrument. The procedure described here, however, corrects only for the optical and electronic drifts, a step that must be taken if the time, effort, and cost of processing standards are to be minimized. This method of radiometric drift correction uses a 1,000-W tungsten-halogen reference lamp to illuminate each detector through the same optical path as that traversed during sample analysis. The responses of the detector channels to this reference light are regularly compared with the channel responses to the same light intensity at the time of analytical calibration in order to determine and correct for drift. Except for placing the lamp in position, the procedure is fully automated and compensates for changes in spectral intensity due to variations in lamp current. A discussion of the implementation of this drift-correction system is included.
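
    A sketch of the comparison step described above: each channel's response to the reference lamp, recorded before a run, is ratioed against its response at the time of analytical calibration, and the resulting per-channel factor is applied to the analytical readings. The numerical values are hypothetical, and the compensation for lamp-current variation mentioned in the abstract is omitted.

      import numpy as np

      # Channel responses to the tungsten-halogen reference lamp (arbitrary units):
      # one set recorded at the time of analytical calibration, one recorded now.
      response_at_calibration = np.array([1020.0, 980.0, 1500.0, 640.0])
      response_now = np.array([1005.0, 992.0, 1450.0, 655.0])

      # Per-channel radiometric drift factors, applied multiplicatively to sample readings.
      drift_factor = response_at_calibration / response_now
      sample_reading = np.array([212.0, 87.0, 340.0, 55.0])
      print(sample_reading * drift_factor)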

  6. An Improved Correction for Range Restricted Correlations Under Extreme, Monotonic Quadratic Nonlinearity and Heteroscedasticity.

    PubMed

    Culpepper, Steven Andrew

    2016-06-01

    Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
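
    For context, the sketch below shows the standard direct range-restriction (Thorndike Case II) correction that the paper shows to be biased under monotone quadratic nonlinearity and heteroscedasticity; the improved econometric estimator itself is not reproduced, and the input values are hypothetical.

      import numpy as np

      def thorndike_case_ii(r_restricted, sd_unrestricted, sd_restricted):
          """Standard correction for direct range restriction on the selection variable.
          It assumes linearity and homoscedasticity, the assumptions whose violation the
          paper shows can bias the corrected correlation by as much as 0.2."""
          k = sd_unrestricted / sd_restricted
          return r_restricted * k / np.sqrt(1.0 + r_restricted ** 2 * (k ** 2 - 1.0))

      # Hypothetical restricted-sample correlation and standard deviations.
      print(thorndike_case_ii(r_restricted=0.30, sd_unrestricted=100.0, sd_restricted=60.0))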

  7. Effects of correcting in situ ruminal microbial colonization of feed particles on the relationship between ruminally undegraded and intestinally digested crude protein in concentrate feeds.

    PubMed

    González, Javier; Mouhbi, Rabiaa; Guevara-González, Jesús Alberto; Arroyo, José María

    2018-02-01

    In situ estimates of ruminally undegraded protein (RUP) and intestinally digested protein (IDP) of ten concentrates, uncorrected or corrected for the ruminal microbial colonization, were used to examine the effects of this correction on the relationship between IDP and RUP values. Both variables were established for three rumen and duodenum cannulated wethers using 15N labeling techniques and considering measured rates of ruminal particle comminution (kc) and outflow (kp). A covariance analysis showed that the close relationship found between both variables (IDP = -0.0132 ± 0.00679 + 0.776 ± 0.0002 RUP; n = 60; P < 0.001; r = 0.960) is not affected by correcting for microbial colonization (P = 0.682). The IDP content in concentrates and industrial by-products can be predicted from RUP values, thus avoiding the laborious and complex procedure of determining intestinal digestibility; however, a larger sample of feeds is necessary to achieve more accurate predictions. The lack of influence of the correction for microbial contamination on the prediction observed in the present study increases the data available for this prediction. However, only the use of corrected values may provide an accurate evaluation. © 2017 Society of Chemical Industry.

  8. Determination of trace rare earth elements in gadolinium aluminate by inductively coupled plasma time of flight mass spectrometry

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Deb, S. B.; Nagar, B. K.; Saxena, M. K.

    An analytical methodology was developed for the precise quantification of ten trace rare earth elements (REEs), namely, La, Ce, Pr, Nd, Sm, Eu, Tb, Dy, Ho, and Tm, in gadolinium aluminate (GdAlO3) employing an ultrasonic nebulizer (USN)-desolvating device based inductively coupled plasma mass spectrometry (ICP-MS). A microwave digestion procedure was optimized for digesting 100 mg of the refractory oxide using a mixture of sulphuric acid (H2SO4), phosphoric acid (H3PO4) and water (H2O) with 1400 W power, 10 min ramp and 60 min hold time. A USN-desolvating sample introduction system was employed to enhance analyte sensitivities by minimizing their oxide ion formation in the plasma. Studies on the effect of various matrix concentrations on the analyte intensities revealed that precise quantification of the analytes was possible with a matrix level of 250 mg L-1. The possibility of using indium as an internal standard was explored and applied to correct for matrix effect and variation in analyte sensitivity under plasma operating conditions. Individual oxide ion formation yields were determined in matrix-matched solution and employed for correcting polyatomic interferences of light REE (LREE) oxide ions on the intensities of middle and heavy rare earth elements (MREEs and HREEs). Recoveries of ≥ 90% were achieved for the analytes employing the standard addition technique. Three real samples were analyzed for traces of REEs by the proposed method and cross validated for Eu and Nd by isotope dilution mass spectrometry (IDMS). The results show no significant difference in the values at the 95% confidence level. The expanded uncertainty (coverage factor 1σ) in the determination of trace REEs in the samples was found to be between 3 and 8%. The instrument detection limits (IDLs) and the method detection limits (MDLs) for the ten REEs lie in the ranges 1-5 ng L-1 and 7-64 μg kg-1, respectively.
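
    A compact sketch of the two corrections described above: ratioing analyte intensities to the indium internal standard, and subtracting a light-REE oxide contribution (here 143Nd16O+ at m/z 159) from an interfered heavier-REE signal (159Tb+) using a separately measured oxide-formation yield. All intensities and the oxide yield are hypothetical.

      # Hypothetical raw intensities (counts per second) for one sample run.
      intensities = {"In115": 9.5e5, "Nd143": 4.8e4, "Tb159": 2.1e3}
      in115_at_calibration = 1.0e6   # internal-standard intensity at calibration

      # 1) Internal-standard correction for matrix effects and sensitivity drift.
      is_factor = in115_at_calibration / intensities["In115"]
      corrected = {iso: cps * is_factor for iso, cps in intensities.items() if iso != "In115"}

      # 2) Oxide-interference correction: subtract the 143Nd16O+ contribution (m/z 159)
      #    from the 159Tb+ signal using an oxide-formation yield measured in a
      #    matrix-matched solution (yield value assumed here).
      ndo_yield = 0.002
      tb159_net = corrected["Tb159"] - ndo_yield * corrected["Nd143"]
      print(f"Tb signal after internal-standard and oxide correction: {tb159_net:.1f} cps")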

  9. Dual processing theory and experts' reasoning: exploring thinking on national multiple-choice questions.

    PubMed

    Durning, Steven J; Dong, Ting; Artino, Anthony R; van der Vleuten, Cees; Holmboe, Eric; Schuwirth, Lambert

    2015-08-01

    An ongoing debate exists in the medical education literature regarding the potential benefits of pattern recognition (non-analytic reasoning), actively comparing and contrasting diagnostic options (analytic reasoning) or using a combination approach. Studies have not, however, explicitly explored faculty's thought processes while tackling clinical problems through the lens of dual process theory to inform this debate. Further, these thought processes have not been studied in relation to the difficulty of the task or other potential mediating influences such as personal factors and fatigue, which could also be influenced by personal factors such as sleep deprivation. We therefore sought to determine which reasoning process(es) were used with answering clinically oriented multiple-choice questions (MCQs) and if these processes differed based on the dual process theory characteristics: accuracy, reading time and answering time as well as psychometrically determined item difficulty and sleep deprivation. We performed a think-aloud procedure to explore faculty's thought processes while taking these MCQs, coding think-aloud data based on reasoning process (analytic, nonanalytic, guessing or combination of processes) as well as word count, number of stated concepts, reading time, answering time, and accuracy. We also included questions regarding amount of work in the recent past. We then conducted statistical analyses to examine the associations between these measures such as correlations between frequencies of reasoning processes and item accuracy and difficulty. We also observed the total frequencies of different reasoning processes in the situations of getting answers correctly and incorrectly. Regardless of whether the questions were classified as 'hard' or 'easy', non-analytical reasoning led to the correct answer more often than to an incorrect answer. Significant correlations were found between self-reported recent number of hours worked with think-aloud word count and number of concepts used in the reasoning but not item accuracy. When all MCQs were included, 19 % of the variance of correctness could be explained by the frequency of expression of these three think-aloud processes (analytic, nonanalytic, or combined). We found evidence to support the notion that the difficulty of an item in a test is not a systematic feature of the item itself but is always a result of the interaction between the item and the candidate. Use of analytic reasoning did not appear to improve accuracy. Our data suggest that individuals do not apply either System 1 or System 2 but instead fall along a continuum with some individuals falling at one end of the spectrum.

  10. Analytic Steering: Inserting Context into the Information Dialog

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohn, Shawn J.; Calapristi, Augustin J.; Brown, Shyretha D.

    2011-10-23

    An analyst’s intrinsic domain knowledge is a primary asset in almost any analysis task. Unstructured text analysis systems that apply unsupervised content analysis approaches can be more effective if they can leverage this domain knowledge in a manner that augments the information discovery process without obfuscating new or unexpected content. Current unsupervised approaches rely upon the prowess of the analyst to submit the right queries or observe generalized document and term relationships from ranked or visual results. We propose a new approach which allows the user to control or steer the analytic view within the unsupervised space. This process is controlled through the data characterization process via user-supplied context in the form of a collection of key terms. We show that steering with an appropriate choice of key terms can provide better relevance to the analytic domain and still enable the analyst to uncover unexpected relationships; this paper discusses cases where various analytic steering approaches can provide enhanced analysis results and cases where analytic steering can have a negative impact on the analysis process.

  11. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE PAGES

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    2017-01-30

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the 2Πg shape resonance of N2- which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  12. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the 2Πg shape resonance of N2- which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  13. Analytical reasoning task reveals limits of social learning in networks

    PubMed Central

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-01-01

    Social learning—by observing and copying others—is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an ‘unreflective copying bias’, which limits their social learning to the output, rather than the process, of their peers’ reasoning—even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning. PMID:24501275

  14. Destructive analysis capabilities for plutonium and uranium characterization at Los Alamos National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tandon, Lav; Kuhn, Kevin J; Drake, Lawrence R

    Los Alamos National Laboratory's (LANL) Actinide Analytical Chemistry (AAC) group has been in existence since the Manhattan Project. It maintains a complete set of analytical capabilities for performing complete characterization (elemental assay, isotopic, metallic and non-metallic trace impurities) of uranium and plutonium samples in different forms. For a majority of the customers there are strong quality assurance (QA) and quality control (QC) objectives, including highest accuracy and precision with well defined uncertainties associated with the analytical results. Los Alamos participates in various international and national programs such as the Plutonium Metal Exchange Program, New Brunswick Laboratory's (NBL's) Safeguards Measurement Evaluation Program (SME) and several other inter-laboratory round robin exercises to monitor and evaluate the data quality generated by AAC. These programs also provide independent verification of analytical measurement capabilities, and allow any technical problems with analytical measurements to be identified and corrected. This presentation will focus on key analytical capabilities for destructive analysis in AAC and also comparative data between LANL and peer groups for Pu assay and isotopic analysis.

  15. Analytical reasoning task reveals limits of social learning in networks.

    PubMed

    Rahwan, Iyad; Krasnoshtan, Dmytro; Shariff, Azim; Bonnefon, Jean-François

    2014-04-06

    Social learning-by observing and copying others-is a highly successful cultural mechanism for adaptation, outperforming individual information acquisition and experience. Here, we investigate social learning in the context of the uniquely human capacity for reflective, analytical reasoning. A hallmark of the human mind is its ability to engage analytical reasoning, and suppress false associative intuitions. Through a set of laboratory-based network experiments, we find that social learning fails to propagate this cognitive strategy. When people make false intuitive conclusions and are exposed to the analytic output of their peers, they recognize and adopt this correct output. But they fail to engage analytical reasoning in similar subsequent tasks. Thus, humans exhibit an 'unreflective copying bias', which limits their social learning to the output, rather than the process, of their peers' reasoning-even when doing so requires minimal effort and no technical skill. In contrast to much recent work on observation-based social learning, which emphasizes the propagation of successful behaviour through copying, our findings identify a limit on the power of social networks in situations that require analytical reasoning.

  16. Analytical and numerical treatment of drift-tearing modes in plasma slab

    NASA Astrophysics Data System (ADS)

    Mirnov, V. V.; Hegna, C. C.; Sovinec, C. R.; Howell, E. C.

    2016-10-01

    Two-fluid corrections to linear tearing modes include 1) diamagnetic drifts that reduce the growth rate and 2) electron and ion decoupling on short scales that can lead to fast reconnection. We have recently developed an analytical model that includes effects 1) and 2) and an important contribution from finite electron parallel thermal conduction. Both tendencies 1) and 2) are confirmed by an approximate analytic dispersion relation that is derived using a perturbative approach in the small ion-sound gyroradius ρs. This approach is only valid at the beginning of the transition from the collisional to semi-collisional regimes. Further analytical and numerical work is performed to cover the full interval of ρs connecting these two limiting cases. Growth rates are computed from analytic theory with a shooting method. They match the resistive MHD regime with the dispersion relations known at asymptotically large ion-sound gyroradius. A comparison between this analytical treatment and linear numerical simulations using the NIMROD code with cold ions and hot electrons in a plasma slab is reported. The material is based on work supported by the U.S. DOE and NSF.

  17. Understanding organizational commitment: A meta-analytic examination of the roles of the five-factor model of personality and culture.

    PubMed

    Choi, Daejeong; Oh, In-Sue; Colbert, Amy E

    2015-09-01

    We examined the relationships between the Five-Factor Model (FFM) of personality traits and three forms of organizational commitment (affective, normative, and continuance commitment) and their variability across individualistic and collectivistic cultures. Meta-analytic results based on 55 independent samples from 50 studies (N = 18,262) revealed that (a) all FFM traits had positive relationships with affective commitment; (b) all FFM traits had positive relationships with normative commitment; and (c) Emotional Stability, Extraversion, and Openness to Experience had negative relationships with continuance commitment. In particular, Agreeableness was found to be the trait most strongly related to both affective and normative commitment. The results also showed that Agreeableness had stronger relationships with affective and normative commitment in collectivistic cultures than in individualistic cultures. We provide theoretical and practical implications of these findings for personality, job attitudes, and employee selection and retention. (c) 2015 APA, all rights reserved.

  18. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.

  19. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    NASA Astrophysics Data System (ADS)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

    The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation which describes the first-mode damping ratio of a clamp-free cantilever beam under harmonic base excitation by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was determined to be correct for cases where the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.
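
    A minimal sketch of the iterative-convergence idea mentioned above, assuming the derived equation has been rearranged into a fixed-point form; the update function below is purely hypothetical and stands in for the paper's (unspecified) damping-stress relation.

```python
# Hypothetical sketch of a "simple iterative convergence method":
# iterate a damping-ratio estimate until successive values agree.
def g(zeta: float) -> float:
    return 0.004 + 0.1 * zeta  # invented contraction mapping, NOT the paper's equation

zeta = 0.01                    # initial guess for the first-mode damping ratio
for _ in range(100):
    zeta_new = g(zeta)
    if abs(zeta_new - zeta) < 1e-9:
        zeta = zeta_new
        break
    zeta = zeta_new

print(f"converged damping ratio: {zeta:.6f}")
```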

  20. Fabrication strategies, sensing modes and analytical applications of ratiometric electrochemical biosensors.

    PubMed

    Jin, Hui; Gui, Rijun; Yu, Jianbo; Lv, Wei; Wang, Zonghua

    2017-05-15

    Previously developed electrochemical biosensors with single-electric signal output are probably affected by intrinsic and extrinsic factors. In contrast, the ratiometric electrochemical biosensors (RECBSs) with dual-electric signal outputs have an intrinsic built-in correction to the effects from system or background electric signals, and therefore exhibit a significant potential to improve the accuracy and sensitivity in electrochemical sensing applications. In this review, we systematically summarize the fabrication strategies, sensing modes and analytical applications of RECBSs. First, the different fabrication strategies of RECBSs were introduced, referring to the analytes-induced single- and dual-dependent electrochemical signal strategies for RECBSs. Second, the different sensing modes of RECBSs were illustrated, such as differential pulse voltammetry, square wave voltammetry, cyclic voltammetry, alternating current voltammetry, electrochemiluminescence, and so forth. Third, the analytical applications of RECBSs were discussed based on the types of target analytes. Finally, the forthcoming development and future prospects in the research field of RECBSs were also highlighted. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Analysis and Correction of Diffraction Effect on the B/A Measurement at High Frequencies

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Gong, Xiu-Fen; Liu, Xiao-Zhou; Kushibiki, Jun-ichi; Nishino, Hideo

    2004-01-01

    A numerical method is developed to analyse and to correct the diffraction effect in the measurement of acoustic nonlinearity parameter B/A at high frequencies. By using the KZK nonlinear equation and the superposition approach of Gaussian beams, an analytical model is derived to describe the second harmonic generation through multi-layer medium SiO2/liquid specimen/SiO2. Frequency dependence of the nonlinear characterization curve for water in 110-155 MHz is numerically and experimentally investigated. With the measured dip position and the new model, values of B/A for water are evaluated. The results show that the present method can effectively correct the diffraction effect in the measurement.

  2. Discrimination of almonds (Prunus dulcis) geographical origin by minerals and fatty acids profiling.

    PubMed

    Amorello, Diana; Orecchio, Santino; Pace, Andrea; Barreca, Salvatore

    2016-09-01

    Twenty-one almond samples from three different geographical origins (Sicily, Spain and California) were investigated by determining their mineral and fatty acid compositions. The data were used to discriminate almond origin chemometrically by linear discriminant analysis. With respect to previous PCA profiling studies, this work provides a simpler analytical protocol for the identification of the geographical origin of almonds. Classification using mineral content data only was correct for 77% of the samples, while using fatty acid profiles the percentage of correctly classified samples reached 82%. Coupling the mineral contents with the fatty acid profiles increased the classification efficiency, with 87% of samples correctly classified.
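
    A minimal sketch of the chemometric step described above, assuming a per-sample feature matrix of measured variables and origin labels; the arrays are random placeholders (not the study's data), so the printed accuracy is meaningless and only the leave-one-out LDA workflow is illustrated.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder feature matrix: 21 samples x 8 measured variables
# (e.g. mineral contents and/or fatty acid percentages).
rng = np.random.default_rng(0)
features = rng.normal(size=(21, 8))
origin = np.array(["Sicily"] * 7 + ["Spain"] * 7 + ["California"] * 7)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, features, origin, cv=LeaveOneOut())
print(f"correctly classified: {100 * scores.mean():.0f} %")
```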

  3. Time delay of critical images of a point source near the gravitational lens fold-caustic

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Zhdanov, V.

    2016-06-01

    Within the framework of the analytical theory of gravitational lensing, we derive an asymptotic formula for the time delay of critical images of a point source which is situated near a fold-caustic. We found corrections of the first and second order in powers of a parameter which describes the closeness of the source to the caustic. Our formula modifies an earlier result by Congdon, Keeton & Nordgren (MNRAS, 2008) obtained in the zero-order approximation. We have proved the hypothesis put forward by these authors that the first-order correction to the relative time delay of two critical images is identically zero. The contribution of the corrections is illustrated in a model example by comparison with the exact expression.

  4. NHEXAS PHASE I REGION 5 STUDY--METALS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of metals in 165 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood samples were collected by venipun...

  5. Diagnosis, referral, and rehabilitation within the Fairfax Alcohol Safety Action Project, 1974.

    DOT National Transportation Integrated Search

    1975-01-01

    This report is a combination of Analytic Study #5 (Diagnosis and Referral) and Analytic Study #6 (Rehabilitation). Data concerning these countermeasures are presented together because of their very close relationship within the Fairfax ASAP. Both the...

  6. NHEXAS PHASE I REGION 5 STUDY--VOCS IN BLOOD ANALYTICAL RESULTS

    EPA Science Inventory

    This data set includes analytical results for measurements of VOCs (volatile organic compounds) in 145 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood sample...

  7. Evaluating Corrective Feedback Self-Efficacy Changes among Counselor Educators and Site Supervisors

    ERIC Educational Resources Information Center

    Motley, Veronica; Reese, Mary Kate; Campos, Peter

    2014-01-01

    Analysis of pretest-posttest scores on the Corrective Feedback Self-Efficacy Instrument (Page & Hulse-Killacky, [Page, B. J., 1999]) following a supervision workshop indicated a significant positive relationship between workshop training and supervisors' feedback self-efficacy in giving corrective feedback. Furthermore, the association…

  8. Treatment for Drug Abusing Offenders during Correctional Supervision: A Nationwide Overview.

    ERIC Educational Resources Information Center

    Lipton, Douglas S.

    1998-01-01

    Discusses problems associated with the increasing number of drug offenders in U.S. correctional institutions. Explores the relationship between drugs and crime, the history of treating drug-using offenders, and efforts afoot in the Federal Correctional Options Program and the Bureau of Prisons to rehabilitate these prisoners. (MKA)

  9. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Visual Analytic Judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    2017-05-08

    Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: i) relationships among scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.

  10. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-01-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol−1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning. PMID:24320250

  11. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    PubMed

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol(-1)). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  12. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    NASA Astrophysics Data System (ADS)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol-1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  13. Computation of misalignment and primary mirror astigmatism figure error of two-mirror telescopes

    NASA Astrophysics Data System (ADS)

    Gu, Zhiyuan; Wang, Yang; Ju, Guohao; Yan, Changxiang

    2018-01-01

    Active optics usually uses the computation models based on numerical methods to correct misalignments and figure errors at present. These methods can hardly lead to any insight into the aberration field dependencies that arise in the presence of the misalignments. An analytical alignment model based on third-order nodal aberration theory is presented for this problem, which can be utilized to compute the primary mirror astigmatic figure error and misalignments for two-mirror telescopes. Alignment simulations are conducted for an R-C telescope based on this analytical alignment model. It is shown that in the absence of wavefront measurement errors, wavefront measurements at only two field points are enough, and the correction process can be completed with only one alignment action. In the presence of wavefront measurement errors, increasing the number of field points for wavefront measurements can enhance the robustness of the alignment model. Monte Carlo simulation shows that, when -2 mm ≤ linear misalignment ≤ 2 mm, -0.1 deg ≤ angular misalignment ≤ 0.1 deg, and -0.2 λ ≤ astigmatism figure error (expressed as fringe Zernike coefficients C5 / C6, λ = 632.8 nm) ≤0.2 λ, the misaligned systems can be corrected to be close to nominal state without wavefront testing error. In addition, the root mean square deviation of RMS wavefront error of all the misaligned samples after being corrected is linearly related to wavefront testing error.

  14. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada: Revision 0, Including Errata Sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-01

    This Corrective Action Decision Document identifies the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's corrective action alternative recommendation for each of the corrective action sites (CASs) within Corrective Action Unit (CAU) 204: Storage Bunkers, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. An evaluation of analytical data from the corrective action investigation, review of current and future operations at each CAS, and a detailed comparative analysis of potential corrective action alternatives were used to determine the appropriate corrective action for each CAS. There are six CASs in CAU 204, which are all located between Areas 1, 2, 3, and 5 on the NTS. The No Further Action alternative was recommended for CASs 01-34-01, 02-34-01, 03-34-01, and 05-99-02; and a Closure in Place with Administrative Controls recommendation was the preferred corrective action for CASs 05-18-02 and 05-33-01. These alternatives were judged to meet all requirements for the technical components evaluated as well as applicable state and federal regulations for closure of the sites and will eliminate potential future exposure pathways to the contaminated media at CAU 204.

  15. Organizational stressors associated with job stress and burnout in correctional officers: a systematic review.

    PubMed

    Finney, Caitlin; Stergiopoulos, Erene; Hensel, Jennifer; Bonato, Sarah; Dewa, Carolyn S

    2013-01-29

    In adult correctional facilities, correctional officers (COs) are responsible for the safety and security of the facility in addition to aiding in offender rehabilitation and preventing recidivism. COs experience higher rates of job stress and burnout that stem from organizational stressors, leading to negative outcomes for not only the CO but the organization as well. Effective interventions could aim at targeting organizational stressors in order to reduce these negative outcomes as well as COs' job stress and burnout. This paper fills a gap in the organizational stress literature among COs by systematically reviewing the relationship between organizational stressors and CO stress and burnout in adult correctional facilities. In doing so, the present review identifies areas that organizational interventions can target in order to reduce CO job stress and burnout. A systematic search of the literature was conducted using Medline, PsycINFO, Criminal Justice Abstracts, and Sociological Abstracts. All retrieved articles were independently screened based on criteria developed a priori. All included articles underwent quality assessment. Organizational stressors were categorized according to Cooper and Marshall's (1976) model of job stress. The systematic review yielded 8 studies that met all inclusion and quality assessment criteria. The five categories of organizational stressors among correctional officers are: stressors intrinsic to the job, role in the organization, rewards at work, supervisory relationships at work and the organizational structure and climate. The organizational structure and climate was demonstrated to have the most consistent relationship with CO job stress and burnout. The results of this review indicate that the organizational structure and climate of correctional institutions has the most consistent relationship with COs' job stress and burnout. Limitations of the studies reviewed include the cross-sectional design and the use of varying measures for organizational stressors. The results of this review indicate that interventions should aim to improve the organizational structure and climate of the correctional facility by improving communication between management and COs.

  16. Organizational stressors associated with job stress and burnout in correctional officers: a systematic review

    PubMed Central

    2013-01-01

    Background In adult correctional facilities, correctional officers (COs) are responsible for the safety and security of the facility in addition to aiding in offender rehabilitation and preventing recidivism. COs experience higher rates of job stress and burnout that stem from organizational stressors, leading to negative outcomes for not only the CO but the organization as well. Effective interventions could aim at targeting organizational stressors in order to reduce these negative outcomes as well as COs’ job stress and burnout. This paper fills a gap in the organizational stress literature among COs by systematically reviewing the relationship between organizational stressors and CO stress and burnout in adult correctional facilities. In doing so, the present review identifies areas that organizational interventions can target in order to reduce CO job stress and burnout. Methods A systematic search of the literature was conducted using Medline, PsycINFO, Criminal Justice Abstracts, and Sociological Abstracts. All retrieved articles were independently screened based on criteria developed a priori. All included articles underwent quality assessment. Organizational stressors were categorized according to Cooper and Marshall’s (1976) model of job stress. Results The systematic review yielded 8 studies that met all inclusion and quality assessment criteria. The five categories of organizational stressors among correctional officers are: stressors intrinsic to the job, role in the organization, rewards at work, supervisory relationships at work and the organizational structure and climate. The organizational structure and climate was demonstrated to have the most consistent relationship with CO job stress and burnout. Conclusions The results of this review indicate that the organizational structure and climate of correctional institutions has the most consistent relationship with COs’ job stress and burnout. Limitations of the studies reviewed include the cross-sectional design and the use of varying measures for organizational stressors. The results of this review indicate that interventions should aim to improve the organizational structure and climate of the correctional facility by improving communication between management and COs. PMID:23356379

  17. The Relationship of Error and Correction of Error in Oral Reading to Visual-Form Perception and Word Attack Skills.

    ERIC Educational Resources Information Center

    Clayman, Deborah P. Goldweber

    The ability of 100 second-grade boys and girls to self-correct oral reading errors was studied in relationship to visual-form perception, phonic skills, response speed, and reading level. Each child was tested individually with the Bender-Error Test, the Gray Oral Paragraphs, and the Roswell-Chall Diagnostic Reading Test and placed into a group of…

  18. Distortion of Magnetic Fields in a Starless Core. III. Polarization–Extinction Relationship in FeSt 1-457

    NASA Astrophysics Data System (ADS)

    Kandori, Ryo; Tamura, Motohide; Nagata, Tetsuya; Tomisaka, Kohji; Kusakabe, Nobuhiko; Nakajima, Yasushi; Kwon, Jungmi; Nagayama, Takahiro; Tatematsu, Ken’ichi

    2018-04-01

    The relationship between dust polarization and extinction was determined for the cold dense starless molecular cloud core FeSt 1-457 based on the background star polarimetry of dichroic extinction at near-infrared wavelengths. Owing to the known (three-dimensional) magnetic field structure, the observed polarizations from the core were corrected by considering (a) the subtraction of the ambient polarization component, (b) the depolarization effect of inclined distorted magnetic fields, and (c) the magnetic inclination angle of the core. After these corrections, a linear relationship between polarization and extinction was obtained for the core in the range up to A_V ≈ 20 mag. The initial polarization versus extinction diagram changed dramatically after the corrections of (a) to (c), with the correlation coefficient being refined from 0.71 to 0.79. These corrections should affect the theoretical interpretation of the observational data. The slope of the finally obtained polarization-extinction relationship is P_H/E_(H-Ks) = 11.00 ± 0.72 % mag^-1, which is close to the statistically estimated upper limit of the interstellar polarization efficiency. This consistency suggests that the upper limit of interstellar polarization efficiency might be determined by the observational viewing angle toward polarized astronomical objects.

  19. Investigation of the Relationship between Illogical Thoughts and Dependence on Others and Marriage Compatibility in the Iranian Veterans Exposed to Chemicals in Iran-Iraq War

    PubMed Central

    Afkar, A. H.; Mahboubi, M.; Shahri, M. Neyakan; Mohamadi, M.; Jalilian, F.; Moradi, F.

    2014-01-01

    Background: Marital satisfaction is one of the main determinants of a family’s correct function. A large number of veterans have been reported to suffer from depression, anxiety, mood disorders, post-traumatic stress disorder, and physical disorders. The objective of this study is to examine association between Illogical thoughts and Dependence on Others and Marriage Compatibility in the Iranian Veterans Exposed to Chemicals in Iran-Iraq War. Methods: The present cross-sectional, analytical study was conducted on 200 veterans exposed to chemicals who were covered by the Foundation of Martyrs and Veterans Affairs, Gilangharb, Kermanshah, Iran. The study sample size was determined according to Krejcie and Morgan formula and the subjects were selected through random sampling. The study data were collected using marriage compatibility questionnaire, illogical thoughts questionnaire, and dependence on others questionnaire. The study data were analyzed using the SPSS statistical software (version18). Pearson correlation coefficient, multiple regression, and t-test were used in order to determine the relationships among the variables and compare the means. Results: The findings of the current study revealed no significant relationship between dependence on others, anxious attention, helplessness, avoiding problems, perfectionism, and autonomy and marriage compatibility. However, a significant relationship was found between failure and marriage compatibility. Discussion: Overall, the findings of the present study showed that the veterans of Gilangharb did not have disorders, but depended on others, particularly their spouses, due to their abnormal physical status. Sometimes, they cannot even do their personal tasks which results in their dependence on others eventually putting the veterans under pressure and stress. PMID:25168982

  20. Stripping Voltammetry

    NASA Astrophysics Data System (ADS)

    Lovrić, Milivoj

    Electrochemical stripping means the oxidative or reductive removal of atoms, ions, or compounds from an electrode surface (or from the electrode body, as in the case of liquid mercury electrodes with dissolved metals) [1-5]. In general, these atoms, ions, or compounds have been preliminarily immobilized on the surface of an inert electrode (or within it) as the result of a preconcentration step, while the products of the electrochemical stripping will dissolve in the electrolytic solution. Often the product of the electrochemical stripping is identical to the analyte before the preconcentration. However, there are exemptions to these rules. Electroanalytical stripping methods comprise two steps: first, the accumulation of a dissolved analyte onto, or in, the working electrode, and, second, the subsequent stripping of the accumulated substance by a voltammetric [3, 5], potentiometric [6, 7], or coulometric [8] technique. In stripping voltammetry, the condition is that there are two independent linear relationships: the first one between the activity of accumulated substance and the concentration of analyte in the sample, and the second between the maximum stripping current and the accumulated substance activity. Hence, a cumulative linear relationship between the maximum response and the analyte concentration exists. However, the electrode capacity for the analyte accumulation is limited and the condition of linearity is satisfied only well below the electrode saturation. For this reason, stripping voltammetry is used mainly in trace analysis. The limit of detection depends on the factor of proportionality between the activity of the accumulated substance and the bulk concentration of the analyte. This factor is a constant in the case of a chemical accumulation, but for electrochemical accumulation it depends on the electrode potential. The factor of proportionality between the maximum stripping current and the analyte concentration is rarely known exactly. In fact, it is frequently ignored. For the analysis it suffices to establish the linear relationship empirically. The slope of this relationship may vary from one sample to another because of different influences of the matrix. In this case the concentration of the analyte is determined by the method of standard additions [1]. After measuring the response of the sample, the concentration of the analyte is deliberately increased by adding a certain volume of its standard solution. The response is measured again, and this procedure is repeated three or four times. The unknown concentration is determined by extrapolation of the regression line to the concentration axis [9]. However, in many analytical methods, the final measurement is performed in a standard matrix that allows the construction of a calibration plot. Still, the slope of this plot depends on the active area of the working electrode surface. Each solid electrode needs a separate calibration plot, and that plot must be checked from time to time because of possible deterioration of the electrode surface [2].
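
    The standard-additions evaluation described above can be sketched in a few lines: fit the measured responses against the added concentrations and read the unknown concentration from the magnitude of the extrapolated intercept on the concentration axis. The numbers below are invented placeholders, not measurements from the text.

```python
import numpy as np

# Invented example data: response of the sample alone (added = 0)
# and after successive standard additions.
added = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # added analyte concentration
signal = np.array([1.9, 3.1, 4.2, 5.2, 6.4])      # stripping peak response (a.u.)

slope, intercept = np.polyfit(added, signal, 1)   # linear regression
c_unknown = intercept / slope                     # |x-intercept| of the regression line

print(f"estimated analyte concentration in the sample: {c_unknown:.1f}")
```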

  1. METHOD 8261: USING SURROGATES TO MEASURE MATRIX EFFECTS AND CORRECT ANALYTICAL RESULTS

    EPA Science Inventory

    Vacuum distillation uses a specialized apparatus. This apparatus has been developed and patented by the EPA. Through the Federal Technology Transfer Act this invention has been made available for commercialization. Available vendors for this instrumentation are being evaluat...

  2. Axial geometrical aberration correction up to 5th order with N-SYLC.

    PubMed

    Hoque, Shahedul; Ito, Hiroyuki; Takaoka, Akio; Nishi, Ryuji

    2017-11-01

    We present N-SYLC (N-fold symmetric line currents) models to correct 5th order axial geometrical aberrations in electron microscopes. In our previous paper, we showed that 3rd order spherical aberration can be corrected by a 3-SYLC doublet. After that, mainly the 5th order aberrations remain to limit the resolution. In this paper, we extend the doublet to quadruplet models also including octupole and dodecapole fields for correcting these higher order aberrations, without introducing any new unwanted ones. We prove the validity of our models by analytical calculations. Also, by computer simulations, we show that for a beam energy of 5 keV and an initial angle of 10 mrad at the corrector object plane, a beam size of less than 0.5 nm is achieved at the corrector image plane. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through the analytic computation, the next-to-leading order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom quark mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections lead to a large correction to the form factors, which makes the branching ratios of the decay channels B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  4. Upgraded Analytical Model of the Cylinder Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Souers, P. Clark; Lauderbach, Lisa; Garza, Raul

    2013-03-15

    A Gurney-type equation was previously corrected for wall thinning and angle of tilt, and now we have added shock wave attenuation in the copper wall and air gap energy loss. Extensive calculations were undertaken to calibrate the two new energy loss mechanisms across all explosives. The corrected Gurney equation is recommended for cylinder use over the original 1943 form. The effect of these corrections is to add more energy to the adiabat values from a relative volume of 2 to 7, with low energy explosives having the largest correction. The data was pushed up to a relative volume of about 15 and the JWL parameter ω was obtained directly. The total detonation energy density was locked to the v = 7 adiabat energy density, so that the Cylinder test gives all necessary values needed to make a JWL.

  6. Correction of gene expression data: Performance-dependency on inter-replicate and inter-treatment biases.

    PubMed

    Darbani, Behrooz; Stewart, C Neal; Noeparvar, Shahin; Borg, Søren

    2014-10-20

    This report investigates for the first time the potential inter-treatment bias source of cell number for gene expression studies. Cell-number bias can affect gene expression analysis when comparing samples with unequal total cellular RNA content or with different RNA extraction efficiencies. For maximal reliability of analysis, therefore, comparisons should be performed at the cellular level. This could be accomplished using an appropriate correction method that can detect and remove the inter-treatment bias for cell-number. Based on inter-treatment variations of reference genes, we introduce an analytical approach to examine the suitability of correction methods by considering the inter-treatment bias as well as the inter-replicate variance, which allows use of the best correction method with minimum residual bias. Analyses of RNA sequencing and microarray data showed that the efficiencies of correction methods are influenced by the inter-treatment bias as well as the inter-replicate variance. Therefore, we recommend inspecting both of the bias sources in order to apply the most efficient correction method. As an alternative correction strategy, sequential application of different correction approaches is also advised. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Isotope Inversion Experiment evaluating the suitability of calibration in surrogate matrix for quantification via LC-MS/MS-Exemplary application for a steroid multi-method.

    PubMed

    Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H

    2016-05-30

    For quotable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts in respect to their functions, i.e. SIL compound is the analyte and nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free regarding SIL analytes, which allows a comparison of both matrices. We called this approach Isotope Inversion Experiment. As figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application a LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were all in all very satisfying. As a consequence the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated. The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
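
    A minimal sketch of the figure of merit named above, assuming a linear calibration function: calibrators prepared in surrogate matrix define the curve, and inverse quality controls in authentic matrix are back-calculated against it to give accuracy as a percentage of nominal. All concentrations and response ratios below are invented for illustration.

```python
import numpy as np

# Invented calibrator data in surrogate matrix: analyte/internal-standard
# response ratio versus nominal concentration.
cal_conc  = np.array([5.0, 25.0, 100.0, 250.0, 500.0])
cal_ratio = np.array([0.051, 0.248, 1.02, 2.47, 5.05])
slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

# Invented inverse quality controls prepared in authentic matrix.
qc_nominal = np.array([15.0, 150.0, 400.0])
qc_ratio   = np.array([0.157, 1.49, 4.12])

qc_measured = (qc_ratio - intercept) / slope   # back-calculate from the curve
accuracy = 100.0 * qc_measured / qc_nominal    # % of nominal

for nominal, acc in zip(qc_nominal, accuracy):
    print(f"QC {nominal:6.1f} -> accuracy {acc:5.1f} %")
```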

  8. Inversion of the anomalous diffraction approximation for variable complex index of refraction near unity. [numerical tests for water-haze aerosol model

    NASA Technical Reports Server (NTRS)

    Smith, C. B.

    1982-01-01

    The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(λ) near unity depending on the spectral wavelength λ. Inversion tests are presented for a water-haze aerosol model. An upper phase-shift limit of 5π/2 retrieved an accurate peak area-distribution profile. Analytical corrections using both the total number and area improved the inversion.

  9. Protocol for Tier 2 Evaluation of Vapor Intrusion at Corrective Action Sites

    DTIC Science & Technology

    2012-07-01

    [Data table fragments: volatile organic compound (VOC) and sulfur hexafluoride (SF6) concentrations determined by NIOSH Method 6602 (modified); VOC and SF6 samples were analyzed by Columbia Analytical Services, Inc.]

  10. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  11. Internal character dictates transition dynamics between isolation and cohesive grouping

    NASA Astrophysics Data System (ADS)

    Manrique, Pedro D.; Hui, Pak Ming; Johnson, Neil F.

    2015-12-01

    We show that accounting for internal character among interacting heterogeneous entities generates rich transition behavior between isolation and cohesive dynamical grouping. Our analytical and numerical calculations reveal different critical points arising for different character-dependent grouping mechanisms. These critical points move in opposite directions as the population's diversity decreases. Our analytical theory may help explain why a particular class of universality is so common in the real world, despite the fundamental differences in the underlying entities. It also correctly predicts the nonmonotonic temporal variation in connectivity observed recently in one such system.

  12. Analytic study of the effect of dark energy-dark matter interaction on the growth of structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Rafael J.F.; Landim, Ricardo C.G.; Costa, André A.

    2016-12-01

    Large-scale structure has been shown as a promising cosmic probe for distinguishing and constraining dark energy models. Using the growth index parametrization, we obtain an analytic formula for the growth rate of structures in a coupled dark energy model in which the exchange of energy-momentum is proportional to the dark energy density. We find that the evolution of fσ_8 can be determined analytically once we know the coupling, the dark energy equation of state, the present value of the dark energy density parameter and the current mean amplitude of dark matter fluctuations. After correcting the growth function for the correspondence with the velocity field through the continuity equation in the interacting model, we use our analytic result to compare the model's predictions with large-scale structure observations.
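
    For orientation, the growth index parametrization referred to above is conventionally written as follows (standard, uncoupled form; the coupled-model generalization derived in the paper is not reproduced here):

```latex
f(a) \equiv \frac{d\ln\delta_{m}}{d\ln a} \simeq \Omega_{m}(a)^{\gamma},
\qquad
f\sigma_{8}(a) = f(a)\,\sigma_{8}\,\frac{D(a)}{D(a_{0})},
\qquad \gamma \approx 0.55 \ \text{for}\ \Lambda\text{CDM}.
```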

  13. Analytical approximations for spiral waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Löber, Jakob, E-mail: jakob@physik.tu-berlin.de; Engel, Harald

    2013-12-15

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R_0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R_+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R_+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  14. Providing solid angle formalism for skyshine calculations

    PubMed Central

    Pahikkala, A. Jussi; Rising, Mary B.; McGinley, Patton H.

    2010-01-01

    We detail, derive and correct the technical use of the solid angle variable identified in formal guidance that relates skyshine calculations to dose‐equivalent rate. We further recommend it for use with all National Council on Radiation Protection and Measurements (NCRP), Institute of Physics and Engineering in Medicine (IPEM) and similar reports documented. In general, for beams of identical width which have different resulting areas, within ±1.0% maximum deviation the analytical pyramidal solution is 1.27 times greater than a misapplied analytical conical solution through all field sizes up to 40×40 cm2. Therefore, we recommend determining the exact results with the analytical pyramidal solution for square beams and the analytical conical solution for circular beams. PACS number(s): 87.52.‐g, 87.52.Df, 87.52.Tr, 87.53.‐j, 87.53.Bn, 87.53.Dq, 87.66.‐a, 89., 89.60.+x
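
    As a small-angle cross-check of the quoted factor (an illustration, not taken from the report): a square field of width a at distance d subtends roughly a²/d², while a circular field of the same width (diameter a) subtends roughly πa²/(4d²), so the pyramidal result exceeds the conical one by 4/π ≈ 1.27, consistent with the stated factor.

```latex
\Omega_{\mathrm{pyramid}} \approx \frac{a^{2}}{d^{2}}, \qquad
\Omega_{\mathrm{cone}} \approx \frac{\pi a^{2}}{4d^{2}}, \qquad
\frac{\Omega_{\mathrm{pyramid}}}{\Omega_{\mathrm{cone}}} \approx \frac{4}{\pi} \approx 1.27
```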

  15. Analytical multiple scattering correction to the Mie theory: Application to the analysis of the lidar signal

    NASA Technical Reports Server (NTRS)

    Flesia, C.; Schwendimann, P.

    1992-01-01

    The contribution of multiple scattering to the lidar signal depends on the optical depth τ. Therefore, the lidar analysis, based on the assumption that multiple scattering can be neglected, is limited to cases characterized by low values of the optical depth (τ ≤ 0.1) and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements which includes multiple scattering and which can be applied to practical situations are as follows. (1) What is required is not only a correction term or a rough approximation describing the results of a certain experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation is required which can be applied in the case of a realistic aerosol. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in the case of a numerical approach, are due to the large number of events that have to be taken into account in the presence of large optical depth and/or strong experimental noise.

  16. An analytic, approximate method for modeling steady, three-dimensional flow to partially penetrating wells

    NASA Astrophysics Data System (ADS)

    Bakker, Mark

    2001-05-01

    An analytic, approximate solution is derived for the modeling of three-dimensional flow to partially penetrating wells. The solution is written in terms of a correction on the solution for a fully penetrating well and is obtained by dividing the aquifer up, locally, in a number of aquifer layers. The resulting system of differential equations is solved by application of the theory for multiaquifer flow. The presented approach has three major benefits. First, the solution may be applied to any groundwater model that can simulate flow to a fully penetrating well; the solution may be superimposed onto the solution for the fully penetrating well to simulate the local three-dimensional drawdown and flow field. Second, the approach is applicable to isotropic, anisotropic, and stratified aquifers and to both confined and unconfined flow. Third, the solution extends over a small area around the well only; outside this area the three-dimensional effect of the partially penetrating well is negligible, and no correction to the fully penetrating well is needed. A number of comparisons are made to existing three-dimensional, analytic solutions, including radial confined and unconfined flow and a well in a uniform flow field. It is shown that a subdivision in three layers is accurate for many practical cases; very accurate solutions are obtained with more layers.

  17. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in a close-geometry gamma spectroscopy setup. It included the MCNP-CP code in order to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors and showed good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Unified analytic representation of physical sputtering yield

    NASA Astrophysics Data System (ADS)

    Janev, R. K.; Ralchenko, Yu. V.; Kenmotsu, T.; Hosaka, K.

    2001-03-01

    Generalized energy parameter η = η(ε, δ) and normalized sputtering yield Ỹ(η), where ε = E/E_TF and δ = E_th/E_TF, are introduced to achieve a unified representation of all available experimental sputtering data at normal ion incidence. The sputtering data in the new Ỹ(η) representation retain their original uncertainties. The Ỹ(η) data can be fitted to a simple three-parameter analytic expression with an rms deviation of 32%, well within the uncertainties of the original data. Both η and Ỹ(η) have the correct physical behavior in the threshold and high-energy regions. The available theoretical data produced by the TRIM.SP code can also be represented by the same single analytic function Ỹ(η) with similar accuracy.

  19. Comparison of analysis and flight test data for a drone aircraft with active flutter suppression

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Pototzky, A. S.

    1981-01-01

    A drone aircraft equipped with an active flutter suppression system is considered with emphasis on the comparison of modal dampings and frequencies as a function of Mach number. Results are presented for both symmetric and antisymmetric motion with flutter suppression off. Only symmetric results are given for flutter suppression on. Frequency response functions of the vehicle are presented from both flight test data and analysis. The analysis correlation is improved by using an empirical aerodynamic correction factor which is proportional to the ratio of experimental to analytical steady-state lift curve slope. The mathematical models are included and existing analytical techniques are described as well as an alternative analytical technique for obtaining closed-loop results.

  20. "Racial bias in mock juror decision-making: A meta-analytic review of defendant treatment": Correction to Mitchell et al. (2005).

    PubMed

    2017-06-01

    Reports an error in "Racial Bias in Mock Juror Decision-Making: A Meta-Analytic Review of Defendant Treatment" by Tara L. Mitchell, Ryann M. Haw, Jeffrey E. Pfeifer and Christian A. Meissner ( Law and Human Behavior , 2005[Dec], Vol 29[6], 621-637). In the article, all of the numbers in Appendix A were correct, but the signs were reversed for z' in a number of studies, which are listed. Also, in Appendix B, some values were incorrect, some signs were reversed, and some values were missing. The corrected appendix is included. (The following abstract of the original article appeared in record 2006-00971-001.) Common wisdom seems to suggest that racial bias, defined as disparate treatment of minority defendants, exists in jury decision-making, with Black defendants being treated more harshly by jurors than White defendants. The empirical research, however, is inconsistent--some studies show racial bias while others do not. Two previous meta-analyses have found conflicting results regarding the existence of racial bias in juror decision-making (Mazzella & Feingold, 1994, Journal of Applied Social Psychology, 24, 1315-1344; Sweeney & Haney, 1992, Behavioral Sciences and the Law, 10, 179-195). This research takes a meta-analytic approach to further investigate the inconsistencies within the empirical literature on racial bias in juror decision-making by defining racial bias as disparate treatment of racial out-groups (rather than focusing upon the minority group alone). Our results suggest that a small, yet significant, effect of racial bias in decision-making is present across studies, but that the effect becomes more pronounced when certain moderators are considered. The state of the research will be discussed in light of these findings. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Detector response function of an energy-resolved CdTe single photon counting detector.

    PubMed

    Liu, Xin; Lee, Hyoung Koo

    2014-01-01

    While spectral CT using a single photon counting detector has shown a number of advantages in diagnostic imaging, knowledge of the detector response function of an energy-resolved detector is needed to correct the signal bias and reconstruct the image more accurately. The objective of this paper is to study the photon counting detector response function using laboratory sources, and investigate the signal bias correction method. Our approach is to model the detector response function over the entire diagnostic energy range (20 keV

  2. Developing an Analytical Framework: Incorporating Ecosystem Services into Decision Making - Proceedings of a Workshop

    USGS Publications Warehouse

    Hogan, Dianna; Arthaud, Greg; Pattison, Malka; Sayre, Roger G.; Shapiro, Carl

    2010-01-01

    The analytical framework for understanding ecosystem services in conservation, resource management, and development decisions is multidisciplinary, encompassing a combination of the natural and social sciences. This report summarizes a workshop on 'Developing an Analytical Framework: Incorporating Ecosystem Services into Decision Making,' which focused on the analytical process and on identifying research priorities for assessing ecosystem services, their production and use, their spatial and temporal characteristics, their relationship with natural systems, and their interdependencies. Attendees discussed research directions and solutions to key challenges in developing the analytical framework. The discussion was divided into two sessions: (1) the measurement framework: quantities and values, and (2) the spatial framework: mapping and spatial relationships. This workshop was the second of three preconference workshops associated with ACES 2008 (A Conference on Ecosystem Services): Using Science for Decision Making in Dynamic Systems. These three workshops were designed to explore the ACES 2008 theme on decision making and how the concept of ecosystem services can be more effectively incorporated into conservation, restoration, resource management, and development decisions. Preconference workshop 1, 'Developing a Vision: Incorporating Ecosystem Services into Decision Making,' was held on April 15, 2008, in Cambridge, MA. In preconference workshop 1, participants addressed what would have to happen to make ecosystem services be used more routinely and effectively in conservation, restoration, resource management, and development decisions, and they identified some key challenges in developing the analytical framework. Preconference workshop 3, 'Developing an Institutional Framework: Incorporating Ecosystem Services into Decision Making,' was held on October 30, 2008, in Albuquerque, NM; participants examined the relationship between the institutional framework and the use of ecosystem services in decision making.

  3. Corrective Action Investigation Plan for Corrective Action Unit 165: Areas 25 and 26 Dry Well and Washdown Areas, Nevada Test Site, Nevada (including Record of Technical Change Nos. 1, 2, and 3) (January 2002, Rev. 0)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 165 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 165 consists of eight Corrective Action Sites (CASs): CAS 25-20-01, Lab Drain Dry Well; CAS 25-51-02, Dry Well; CAS 25-59-01, Septic System; CAS 26-59-01, Septic System; CAS 25-07-06, Train Decontamination Area; CAS 25-07-07, Vehicle Washdown; CAS 26-07-01, Vehicle Washdown Station; and CAS 25-47-01, Reservoir and French Drain. All eight CASs are located in the Nevada Test Site, Nevada. Six of these CASs are located in Area 25 facilities and two CASs are located in Area 26 facilities. The eight CASs at CAU 165 consist of dry wells, septic systems, decontamination pads, and a reservoir. The six CASs in Area 25 are associated with the Nuclear Rocket Development Station that operated from 1958 to 1973. The two CASs in Area 26 are associated with facilities constructed for Project Pluto, a series of nuclear reactor tests conducted between 1961 and 1964 to develop a nuclear-powered ramjet engine. Based on site history, the scope of this plan will be a two-phased approach to investigate the possible presence of hazardous and/or radioactive constituents at concentrations that could potentially pose a threat to human health and the environment. The Phase I analytical program for most CASs will include volatile organic compounds, semivolatile organic compounds, Resource Conservation and Recovery Act metals, total petroleum hydrocarbons, polychlorinated biphenyls, and radionuclides. If laboratory data obtained from the Phase I investigation indicate the presence of contaminants of concern, the process will continue with a Phase II investigation to define the extent of contamination. Based on the results of Phase I sampling, the analytical program for the Phase II investigation may be reduced. The results of this field investigation will support a defensible evaluation of corrective action alternatives in the corrective action decision document.

  4. Diffusive Cosmic-Ray Acceleration at Shock Waves of Arbitrary Speed with Magnetostatic Turbulence. I. General Theory and Correct Nonrelativistic Speed Limit

    NASA Astrophysics Data System (ADS)

    Schlickeiser, R.; Oppotsch, J.

    2017-12-01

    The analytical theory of diffusive acceleration of cosmic rays at parallel stationary shock waves of arbitrary speed with magnetostatic turbulence is developed from first principles. The theory is based on the diffusion approximation to the gyrotropic cosmic-ray particle phase-space distribution functions in the respective rest frames of the up- and downstream medium. We derive the correct cosmic-ray jump conditions for the cosmic-ray current and density, and match the up- and downstream distribution functions at the position of the shock. It is essential to account for the different particle momentum coordinates in the up- and downstream media. Analytical expressions for the momentum spectra of shock-accelerated cosmic rays are calculated. These are valid for arbitrary shock speeds including relativistic shocks. The correctly taken limit for nonrelativistic shock speeds leads to a universal broken power-law momentum spectrum of accelerated particles with velocities well above the injection velocity threshold, where the universal power-law spectral index q ≃ 2 − γ₁⁻⁴ is independent of the flow compression ratio r. For nonrelativistic shock speeds, we calculate for the first time the injection velocity threshold, settling the long-standing injection problem for nonrelativistic shock acceleration.

  5. Fast ray-tracing of human eye optics on Graphics Processing Units.

    PubMed

    Wei, Qi; Patkar, Saket; Pai, Dinesh K

    2014-05-01

    We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye to the retina. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are difficult to represent precisely with analytical surfaces. We have developed a computer simulator that can simulate ocular structures of arbitrary shapes, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing using modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays. A stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique in patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Transformational Leadership and Organizational Citizenship Behavior: A Meta-Analytic Test of Underlying Mechanisms.

    PubMed

    Nohe, Christoph; Hertel, Guido

    2017-01-01

    Based on social exchange theory, we examined and contrasted attitudinal mediators (affective organizational commitment, job satisfaction) and relational mediators (trust in leader, leader-member exchange; LMX) of the positive relationship between transformational leadership and organizational citizenship behavior (OCB). Hypotheses were tested using meta-analytic path models with correlations from published meta-analyses (761 samples with 227,419 individuals overall). When testing single-mediator models, results supported our expectations that each of the mediators explained the relationship between transformational leadership and OCB. When testing a multi-mediator model, LMX was the strongest mediator. When testing a model with a latent attitudinal mechanism and a latent relational mechanism, the relational mechanism was the stronger mediator of the relationship between transformational leadership and OCB. Our findings help to better understand the underlying mechanisms of the relationship between transformational leadership and OCB.

  7. Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.

    PubMed

    Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli

    2018-03-13

    The pre-pre-analytical and pre-analytical phases account for a major share of laboratory errors. This study considered a very common procedure, the oral glucose tolerance test, to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability, and help in the decision making of laboratory personnel. The aim of this research is to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined the pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested knowledge of patient preparation. QI-1, the appropriateness of the test result, had the most errors. Although QI-5 for sample collection had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely on a yearly basis to identify errors, take corrective action, and facilitate their gradual introduction into routine practice.

  8. Corrective Action Decision Document/Closure Report for Corrective Action Unit 567: Miscellaneous Soil Sites - Nevada National Security Site, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Patrick

    2014-12-01

    This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 567: Miscellaneous Soil Sites, Nevada National Security Site, Nevada. The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 567 based on the implementation of the corrective actions. The corrective actions implemented at CAU 567 were developed based on an evaluation of analytical data from the CAI, the assumed presence of COCs at specific locations, and the detailed and comparative analysis of the CAAs. The CAAs were selected on technical merit focusing on performance, reliability, feasibility, safety, and cost. The implemented corrective actions meet all requirements for the technical components evaluated. The CAAs meet all applicable federal and state regulations for closure of the site. Based on the implementation of these corrective actions, the DOE, National Nuclear Security Administration Nevada Field Office provides the following recommendations: • No further corrective actions are necessary for CAU 567. • The Nevada Division of Environmental Protection issue a Notice of Completion to the DOE, National Nuclear Security Administration Nevada Field Office for closure of CAU 567. • CAU 567 be moved from Appendix III to Appendix IV of the FFACO.

  9. An Excel Solver Exercise to Introduce Nonlinear Regression

    ERIC Educational Resources Information Center

    Pinder, Jonathan P.

    2013-01-01

    Business students taking business analytics courses that have significant predictive modeling components, such as marketing research, data mining, forecasting, and advanced financial modeling, are introduced to nonlinear regression using application software that is a "black box" to the students. Thus, although correct models are…

  10. Analytic Methods for Adjusting Subjective Rating Schemes.

    ERIC Educational Resources Information Center

    Cooper, Richard V. L.; Nelson, Gary R.

    Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…

  11. Long Term Mean Local Time of the Ascending Node Prediction

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2007-01-01

    Significant error has been observed in the long term prediction of the Mean Local Time of the Ascending Node on the Aqua spacecraft. This error of approximately 90 seconds over a two year prediction is a complication in planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. It was determined that the source of the prediction error was the lack of a solid Earth tide model in the operational force models. The Love Model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. Additionally, it was determined that the resonance between the Sun and orbit plane of the Sun-synchronous orbit is the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft reducing the two-year 90-second error to less than 7 seconds.

  12. Analytical model for viscous damping and the spring force for perforated planar microstructures acting at both audible and ultrasonic frequencies

    PubMed Central

    Homentcovschi, Dorel; Miles, Ronald N.

    2008-01-01

    The paper presents a model for the squeezed film damping, the resistance of the holes, and the corresponding spring forces for a periodic perforated microstructure including the effects of compressibility, inertia, and rarefied gas. The viscous damping and spring forces are obtained by using the continuity equation. The analytical formula for the squeezed film damping is applied to analyze the response of an ultrasonic transducer. The inclusion of these effects in a model significantly improves the agreement with measured results. Finally, it is shown that the frequency dependence of the total damping and total spring force for a cell are very similar to those corresponding to a rectangular open microstructure without holes. A separate analysis reveals the importance of each particular correction. The most important is the compressibility correction; the inertia has to be considered only for determining the spring force and the damping force for sufficiently high frequencies. PMID:18646964

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afzal, Muhammad U., E-mail: muhammad.afzal@mq.edu.au; Esselle, Karu P.

    This paper presents a quasi-analytical technique to design continuous, all-dielectric phase-correcting structures (PCSs) for circularly polarized Fabry-Perot resonator antennas (FPRAs). The PCS has been realized by varying the thickness of a rotationally symmetric dielectric block placed above the antenna. A global analytical expression is derived for the PCS thickness profile, which is required to achieve nearly uniform phase distribution at the output of the PCS, despite the non-uniform phase distribution at its input. An alternative piecewise technique based on spline interpolation is also explored to design a PCS. It is shown from both far- and near-field results that a PCS tremendously improves the radiation performance of the FPRA. These improvements include an increase in peak directivity from 22 to 120 (from 13.4 dBic to 20.8 dBic) and a decrease of 3 dB beamwidth from 41.5° to 15°. The phase-corrected antenna also has a good directivity bandwidth of 1.3 GHz, which is 11% of the center frequency.

  14. Subtracting infrared renormalons from Wilson coefficients: Uniqueness and power dependences on ΛQCD

    NASA Astrophysics Data System (ADS)

    Mishima, Go; Sumino, Yukinari; Takaura, Hiromasa

    2017-06-01

    In the context of operator product expansion (OPE) and using the large-β₀ approximation, we propose a method to define Wilson coefficients free from uncertainties due to IR renormalons. We first introduce a general observable X(Q²) with an explicit IR cutoff, and then we extract a genuine UV contribution X_UV as a cutoff-independent part. X_UV includes power corrections ∼(Λ_QCD²/Q²)ⁿ which are independent of renormalons. Using the integration-by-regions method, we observe that X_UV coincides with the leading Wilson coefficient in OPE and also clarify that the power corrections originate from the UV region. We examine the scheme dependence of X_UV and single out a specific scheme favorable in terms of analytical properties. Our method would be optimal with respect to systematicity, analyticity and stability. We test our formulation with the examples of the Adler function, the QCD force between Q and Q̄, and the R-ratio in e⁺e⁻ collisions.

  15. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  16. An analytical framework for whole-genome sequence association studies and its implications for autism spectrum disorder.

    PubMed

    Werling, Donna M; Brand, Harrison; An, Joon-Yong; Stone, Matthew R; Zhu, Lingxue; Glessner, Joseph T; Collins, Ryan L; Dong, Shan; Layer, Ryan M; Markenscoff-Papadimitriou, Eirene; Farrell, Andrew; Schwartz, Grace B; Wang, Harold Z; Currall, Benjamin B; Zhao, Xuefang; Dea, Jeanselle; Duhn, Clif; Erdman, Carolyn A; Gilson, Michael C; Yadav, Rachita; Handsaker, Robert E; Kashin, Seva; Klei, Lambertus; Mandell, Jeffrey D; Nowakowski, Tomasz J; Liu, Yuwen; Pochareddy, Sirisha; Smith, Louw; Walker, Michael F; Waterman, Matthew J; He, Xin; Kriegstein, Arnold R; Rubenstein, John L; Sestan, Nenad; McCarroll, Steven A; Neale, Benjamin M; Coon, Hilary; Willsey, A Jeremy; Buxbaum, Joseph D; Daly, Mark J; State, Matthew W; Quinlan, Aaron R; Marth, Gabor T; Roeder, Kathryn; Devlin, Bernie; Talkowski, Michael E; Sanders, Stephan J

    2018-05-01

    Genomic association studies of common or rare protein-coding variation have established robust statistical approaches to account for multiple testing. Here we present a comparable framework to evaluate rare and de novo noncoding single-nucleotide variants, insertion/deletions, and all classes of structural variation from whole-genome sequencing (WGS). Integrating genomic annotations at the level of nucleotides, genes, and regulatory regions, we define 51,801 annotation categories. Analyses of 519 autism spectrum disorder families did not identify association with any categories after correction for 4,123 effective tests. Without appropriate correction, biologically plausible associations are observed in both cases and controls. Despite excluding previously identified gene-disrupting mutations, coding regions still exhibited the strongest associations. Thus, in autism, the contribution of de novo noncoding variation is probably modest in comparison to that of de novo coding variants. Robust results from future WGS studies will require large cohorts and comprehensive analytical strategies that consider the substantial multiple-testing burden.

  17. Corrective Action Decision Document/Closure Report for Corrective Action Unit 274: Septic Systems, Nevada Test Site, Nevada, Rev. No.: 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant Evenson

    2006-09-01

    This Corrective Action Decision Document/Closure Report has been prepared for Corrective Action Unit 274, Septic Systems, Nevada Test Site (NTS), Nevada in accordance with the ''Federal Facility Agreement and Consent Order'' (1996). Corrective Action Unit (CAU) 274 is comprised of five corrective action sites (CASs): (1) CAS 03-02-01, WX-6 ETS Building Septic System; (2) CAS 06-02-01, Cesspool; (3) CAS 09-01-01, Spill Site; (4) CAS 09-05-01, Leaching Pit; and (5) CAS 20-05-01, Septic System. The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the closure of CAU 274 with no further corrective action. To achieve this, corrective action investigation (CAI) activities were performed from November 14 through December 17, 2005 as set forth in the CAU 274 Corrective Action Investigation Plan. The purpose of the CAI was to fulfill the following data needs as defined during the data quality objective (DQO) process: (1) Determine whether contaminants of concern (COCs) are present. (2) If contaminants of concern are present, determine their nature and extent. (3) Provide sufficient information and data to complete appropriate corrective actions. The CAU 274 dataset from the investigation results was evaluated based on the data quality indicator parameters. This evaluation demonstrated the quality and acceptability of the dataset for use in fulfilling the DQO data needs. Analytes detected during the CAI were evaluated against final action levels (FALs) established in this document. No analytes were detected at concentrations exceeding the FALs. No COCs have been released to the soil at CAU 274, and corrective action is not required. Therefore, the DQO data needs were met, and it was determined that no corrective action based on risk to human receptors is necessary for the site. All FALs were calculated using the industrial site worker scenario except for benzo(a)pyrene, which was calculated based on the occasional use scenario. Benzo(a)pyrene was detected above the preliminary action level at CAS 20-05-01; however, it was not identified as a COC because the concentration was below the FAL. As a best management practice and to ensure that future site workers are not exposed to this site contaminant for more than this decision-basis exposure duration, an administrative use restriction was established around the leachfield at CAS 20-05-01. In addition, the removal of the septic tanks and septic tank contents at CASs 03-02-01, 06-02-01, and 20-05-01 was performed.

  18. Electroweak radiative corrections for polarized Moller scattering at the future 11 GeV JLab experiment

    DOE PAGES

    Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; ...

    2010-11-19

    We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we also provide very compact analytical expressions free from non-physical parameters and show them to be valid for fast yet accurate estimations.

  19. Structural design and analysis for an ultra low CTE optical bench for the Hubble Space Telescope corrective optics

    NASA Technical Reports Server (NTRS)

    Neam, Douglas C.; Gerber, John D.

    1992-01-01

    The stringent stability requirements of the Corrective Optics Space Telescope Axial Replacement (COSTAR) necessitate a Deployable Optical Bench (DOB) with both a low CTE and a high resonant frequency. The DOB design consists of a monocoque thin-shell structure which marries metallic machined parts with graphite-epoxy formed structure. Structural analysis of the DOB has been integrated into the laminate design and optimization process. Also, the structural analytical results are compared with vibration and thermal test data to assess the reliability of the analysis.

  20. Reliability of IGBT in a STATCOM for Harmonic Compensation and Power Factor Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak

    With smart grid integration, there is a need to characterize the reliability of a power system by including the reliability of power semiconductors in grid-related applications. In this paper, the reliability of IGBTs in a STATCOM is presented for two different applications, power factor correction and harmonic elimination. The STATCOM model is developed in EMTP, and analytical equations for the average conduction losses in an IGBT and a diode are derived and compared with experimental data. A commonly used reliability model is used to predict the reliability of the IGBT.
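
    For orientation, the snippet below sketches the textbook average conduction-loss approximation for an IGBT and its antiparallel diode (threshold voltage plus on-resistance). It is not the paper's derived equations, and the device parameters are illustrative values, not taken from a specific datasheet.

```python
# Minimal sketch of the standard conduction-loss approximation
#   P = V_on * I_avg + r_on * I_rms**2
# for a switching device; parameters below are illustrative only.
def conduction_loss(i_avg, i_rms, v_on, r_on):
    """Average conduction loss from threshold voltage and on-state resistance."""
    return v_on * i_avg + r_on * i_rms**2

p_igbt = conduction_loss(i_avg=30.0, i_rms=45.0, v_on=1.0, r_on=0.02)    # W
p_diode = conduction_loss(i_avg=10.0, i_rms=20.0, v_on=0.8, r_on=0.015)  # W
print(p_igbt, p_diode)
```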

  1. Frequency Response of Pressure Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Winslow, Neal A.; Carroll, Bruce F.; Setzer, Fred M.

    1996-01-01

    An experimental method for measuring the frequency response of Pressure Sensitive Paints (PSP) is presented. These results lead to the development of a dynamic correction technique for PSP measurements, which is of great importance to the advancement of PSP as a measurement technique. Such a dynamic corrector is most easily designed from the frequency response of the given system. An example of this correction technique is shown. In addition to the experimental data, an analytical model for the frequency response is developed from the one-dimensional mass diffusion equation.

  2. A table of polyatomic interferences in ICP-MS

    USGS Publications Warehouse

    May, Thomas W.; Wiedmeyer, Ray H.

    1998-01-01

    Spectroscopic interferences are probably the largest class of interferences in ICP-MS and are caused by atomic or molecular ions that have the same mass-to-charge ratio as analytes of interest. Current ICP-MS instrumental software corrects for all known atomic “isobaric” interferences, or those caused by overlapping isotopes of different elements, but does not correct for most polyatomic interferences. Such interferences are caused by polyatomic ions that are formed from precursors having numerous sources, such as the sample matrix, reagents used for preparation, plasma gases, and entrained atmospheric gases.
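
    As a sketch of the isobaric-correction arithmetic that instrument software applies, the snippet below subtracts the interferent's estimated contribution at the analyte mass using an interference-free isotope of the interferent and a natural abundance ratio. The abundance values are approximate and for illustration only.

```python
# Minimal sketch of a generic isobaric correction in ICP-MS:
# corrected signal = measured signal - (abundance ratio) * (reference-isotope signal)
def isobaric_correction(cps_analyte_mass, cps_reference_isotope, abundance_ratio):
    """Subtract the interferent contribution estimated from an interference-free
    isotope of the interfering element."""
    return cps_analyte_mass - abundance_ratio * cps_reference_isotope

# Example: correcting m/z 114 (Cd) for Sn overlap using Sn-118 as the reference
# (approximate natural abundances: Sn-114 ~0.66%, Sn-118 ~24.2%).
cd114_corrected = isobaric_correction(
    cps_analyte_mass=15000.0,
    cps_reference_isotope=2000.0,
    abundance_ratio=0.66 / 24.2,
)
print(cd114_corrected)
```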

  3. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

    This report discusses some analytical procedures to enhance the real time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12 foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner necessitating exploring further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems. Then it discusses a geometrical interpretation of the residual correction schemes. Finally some results of the current investigation are presented.
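
    To make the idea of a residual correction scheme concrete, the snippet below shows a generic iterative-refinement loop for a linear system A x = b: each pass solves for a correction from the current residual. This is only a minimal illustration of the scheme in general, not the WICS/PMARC implementation.

```python
# Minimal sketch of residual correction (iterative refinement) for A x = b.
import numpy as np

def residual_correction_solve(A, b, n_iter=3):
    x = np.linalg.solve(A, b)           # initial solve (e.g. a cheap/approximate one)
    for _ in range(n_iter):
        r = b - A @ x                   # residual of the current approximation
        x = x + np.linalg.solve(A, r)   # correct x with the solution of A d = r
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(residual_correction_solve(A, b))
```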

  4. A Study of Convergence of the PMARC Matrices Applicable to WICS Calculations

    NASA Technical Reports Server (NTRS)

    Ghosh, Amitabha

    1997-01-01

    This report discusses some analytical procedures to enhance the real time solutions of PMARC matrices applicable to the Wall Interference Correction Scheme (WICS) currently being implemented at the 12 foot Pressure Tunnel. WICS calculations involve solving large linear systems in a reasonably speedy manner, necessitating exploring further improvement in solution time. This paper therefore presents some of the associated theory of the solution of linear systems. Then it discusses a geometrical interpretation of the residual correction schemes. Finally, some results of the current investigation are presented.

  5. The time delay in strong gravitational lensing with Gauss-Bonnet correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jingyun; Cheng, Hongbo, E-mail: jingyunman@mail.ecust.edu.cn, E-mail: hbcheng@ecust.edu.cn

    2014-11-01

    The time delay between two relativistic images in the strong gravitational lensing governed by Gauss-Bonnet gravity is studied. We make a complete analytical derivation of the expression of time delay in presence of Gauss-Bonnet coupling. With respect to Schwarzschild, the time delay decreases as a consequence of the shrinking of the photon sphere. As the coupling increases, the second term in the time delay expansion becomes more relevant. Thus time delay in strong limit encodes some new information about geometry in five-dimensional spacetime with Gauss-Bonnet correction.

  6. Towards universal potentials for (H2)2 and isotopic variants: post-Born-Oppenheimer contributions.

    PubMed

    Diniz, Leonardo G; Mohallem, José R

    2008-06-07

    Adiabatic corrections are evaluated for the interaction of two hydrogen molecules (H₂)₂ and isotopic variants. Their contribution to the cluster formation amounts to up to 10% of the interaction energy. Added to the best ab initio Born-Oppenheimer isotropic potential, they correct especially its short-range repulsive part. Calculations of second virial coefficients are improved in general, with an impressive agreement with experiments for gaseous D₂ over a large range of temperatures. The potentials are available in both analytical and numerical forms.

  7. Higher-Order Binding Corrections to the Lamb Shift

    NASA Astrophysics Data System (ADS)

    Pachucki, K.

    1993-08-01

    In this work a new analytical method for calculating the one-loop self-energy correction to the Lamb shift is presented in detail. The technique relies on division into the low and the high energy parts. The low energy part is calculated using the multipole expansion and the high energy part is calculated by expanding the Dirac-Coulomb propagator in powers of the Coulomb field. The obtained results are in agreement with those previously known, but are more accurate. A new theoretical value of the Lamb shift is also given.

  8. Improvement of scattering correction for in situ coastal and inland water absorption measurement using exponential fitting approach

    NASA Astrophysics Data System (ADS)

    Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan

    2017-10-01

    The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate in situ absorption coefficient values using the nine-wavelength absorption and attenuation meter AC9. The standard correction fails in Case 2 waters because it assumes zero absorption in the near-infrared (NIR) region, and it underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the scattering correction was then applied. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of the effect on satellite remote sensing of water constituents and general optical research using different scattering-correction methods.

  9. A simulation-based analytic model of radio galaxies

    NASA Astrophysics Data System (ADS)

    Hardcastle, M. J.

    2018-04-01

    I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.

  10. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.
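
    The two-neuron log-linear model underlying the first- and second-order measures can be written down directly. The sketch below computes the theta parameters from the four joint probabilities of two binary neurons; the probability values are synthetic illustrative numbers, not data from the study.

```python
# Two-neuron log-linear model: log p(x1, x2) = th1*x1 + th2*x2 + th12*x1*x2 - psi,
# which gives th1 = log(p10/p00), th2 = log(p01/p00), th12 = log(p11*p00/(p10*p01)).
import numpy as np

def log_linear_thetas(p00, p01, p10, p11):
    """First-order (th1, th2) and second-order (th12) information-geometric measures."""
    theta1 = np.log(p10 / p00)
    theta2 = np.log(p01 / p00)
    theta12 = np.log((p11 * p00) / (p10 * p01))
    return theta1, theta2, theta12

# Example with a weak positive interaction between the two neurons.
print(log_linear_thetas(p00=0.55, p01=0.15, p10=0.15, p11=0.15))
```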

  11. The Impact of Traumatic Brain Injury on Prison Health Services and Offender Management.

    PubMed

    Piccolino, Adam L; Solberg, Kenneth B

    2014-07-01

    A large percentage of incarcerated offenders report a history of traumatic brain injury (TBI) with concomitant neuropsychiatric and social sequelae. However, research looking at the relationship between TBI and delivery of correctional health services and offender management is limited. In this study, the relationships between TBI and use of correctional medical/psychological services, chemical dependency (CD) treatment completion rates, in-prison rule infractions, and recidivism were investigated. Findings indicated that TBI history has a statistically significant association with increased usage of correctional medical/psychological services, including crisis interventions services, and with higher recidivism rates. Results also showed a trend toward offenders with TBI incurring higher rates of in-prison rule infractions and lower rates of CD treatment completion. Implications and future directions for correctional systems are discussed. © The Author(s) 2014.

  12. Isotopic fractionation studies of uranium and plutonium using porous ion emitters as thermal ionization mass spectrometry sources

    DOE PAGES

    Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.; ...

    2018-04-22

    Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was determined to be best described by the Power law. Finally, the fractionation behavior of uranium, using the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.
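
    The three empirical laws named above have simple closed forms, sketched below for a measured ratio of isotopes with masses m1 and m2. Sign conventions and the way the per-amu factor (eps or beta) is calibrated vary between laboratories, so treat this as a hedged illustration; the example ratio and fractionation factors are hypothetical.

```python
# Hedged sketch of the common empirical mass-bias correction laws:
#   Linear:             R_true = R_meas * (1 + eps * dm)
#   Power:              R_true = R_meas * (1 + eps) ** dm
#   Russell/exponential: R_true = R_meas * (m1 / m2) ** beta
# eps or beta would normally be determined from a certified reference ratio.
def linear_law(r_meas, eps, dm):
    return r_meas * (1.0 + eps * dm)

def power_law(r_meas, eps, dm):
    return r_meas * (1.0 + eps) ** dm

def russell_law(r_meas, m1, m2, beta):
    return r_meas * (m1 / m2) ** beta

# Hypothetical 235U/238U example, dm = m1 - m2 = -3 amu.
r_meas = 0.00700
print(linear_law(r_meas, eps=0.001, dm=-3),
      power_law(r_meas, eps=0.001, dm=-3),
      russell_law(r_meas, m1=235.0439, m2=238.0508, beta=0.5))
```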

  13. Isotopic fractionation studies of uranium and plutonium using porous ion emitters as thermal ionization mass spectrometry sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baruzzini, Matthew L.; Hall, Howard L.; Spencer, Khalil J.

    Investigations of the isotope fractionation behaviors of plutonium and uranium reference standards were conducted employing platinum and rhenium (Pt/Re) porous ion emitter (PIE) sources, a relatively new thermal ionization mass spectrometry (TIMS) ion source strategy. The suitability of commonly employed, empirically developed mass bias correction laws (i.e., the Linear, Power, and Russell's laws) for correcting such isotope ratio data was also determined. Corrected plutonium isotope ratio data, regardless of mass bias correction strategy, were statistically identical to the certificate values; however, the isotope fractionation behavior of plutonium under the adopted experimental conditions was determined to be best described by the Power law. Finally, the fractionation behavior of uranium, using the analytical conditions described herein, is also most suitably modeled using the Power law, though Russell's and the Linear law for mass bias correction rendered results that were identical, within uncertainty, to the certificate value.

  14. An Automated Baseline Correction Method Based on Iterative Morphological Operations.

    PubMed

    Chen, Yunliang; Dai, Liankui

    2018-05-01

    Raman spectra usually suffer from baseline drift caused by fluorescence or other reasons. Therefore, baseline correction is a necessary and crucial step that must be performed before subsequent processing and analysis of Raman spectra. An automated baseline correction method based on iterative morphological operations is proposed in this work. The method can adaptively determine the structuring element first and then gradually remove the spectral peaks during iteration to get an estimated baseline. Experiments on simulated data and real-world Raman data show that the proposed method is accurate, fast, and flexible for handling different kinds of baselines in various practical situations. The comparison of the proposed method with some state-of-the-art baseline correction methods demonstrates its advantages over the existing methods in terms of accuracy, adaptability, and flexibility. Although only Raman spectra are investigated in this paper, the proposed method can hopefully be applied to the baseline correction of other analytical instrumental signals, such as IR spectra and chromatograms.
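
    To illustrate the basic operation involved, the sketch below estimates a baseline with a single grey-scale morphological opening (erosion followed by dilation) and subtracts it. The paper's method additionally chooses the structuring-element size adaptively and iterates, which is not reproduced here; the synthetic spectrum and window size are illustrative.

```python
# Minimal morphological baseline estimate: an opening with a flat structuring
# element removes peaks narrower than the window, leaving the slow baseline.
import numpy as np
from scipy.ndimage import grey_opening

def baseline_morphological(spectrum, window=101):
    """Estimate a slowly varying baseline via grey-scale opening."""
    return grey_opening(spectrum, size=window)

x = np.linspace(0.0, 1000.0, 2000)
spectrum = 0.001 * x + np.exp(-((x - 400.0) / 5.0) ** 2)   # drifting baseline + one peak
corrected = spectrum - baseline_morphological(spectrum, window=151)
print(corrected.max())   # the peak survives; the drift is largely removed
```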

  15. On readout of vibrational qubits using quantum beats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shyshlov, Dmytro; Babikov, Dmitri, E-mail: Dmitri.Babikov@mu.edu; Berrios, Eduardo

    2014-12-14

    Readout of the final states of qubits is a crucial step towards implementing quantum computation in experiment. Although not scalable to large numbers of qubits per molecule, computational studies show that molecular vibrations could provide a significant (factor 2–5 in the literature) increase in the number of qubits compared to two-level systems. In this theoretical work, we explore the process of readout from vibrational qubits in thiophosgene molecule, SCCl{sub 2}, using quantum beat oscillations. The quantum beats are measured by first exciting the superposition of the qubit-encoding vibrational states to the electronically excited readout state with variable time-delay pulses. Themore » resulting oscillation of population of the readout state is then detected as a function of time delay. In principle, fitting the quantum beat signal by an analytical expression should allow extracting the values of probability amplitudes and the relative phases of the vibrational qubit states. However, we found that if this procedure is implemented using the standard analytic expression for quantum beats, a non-negligible phase error is obtained. We discuss the origin and properties of this phase error, and propose a new analytical expression to correct the phase error. The corrected expression fits the quantum beat signal very accurately, which may permit reading out the final state of vibrational qubits in experiments by combining the analytic fitting expression with numerical modelling of the readout process. The new expression is also useful as a simple model for fitting any quantum beat experiments where more accurate phase information is desired.« less
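
    The fitting step referred to above can be sketched with the standard textbook quantum-beat expression (the paper's corrected expression is not reproduced here). The snippet fits S(t) = a1 + a2 + 2*sqrt(a1*a2)*cos(w*t + phi) to a simulated beat signal to recover amplitudes and the relative phase; all parameters are synthetic.

```python
# Hedged sketch: extract amplitudes and relative phase from a quantum-beat signal
# by least-squares fitting of the standard (uncorrected) beat expression.
import numpy as np
from scipy.optimize import curve_fit

def beat(t, a1, a2, w, phi):
    return a1 + a2 + 2.0 * np.sqrt(a1 * a2) * np.cos(w * t + phi)

t = np.linspace(0.0, 20.0, 400)
signal = beat(t, 0.7, 0.3, w=2.0, phi=0.5)
signal += 0.01 * np.random.default_rng(1).normal(size=t.size)   # detection noise

popt, _ = curve_fit(beat, t, signal, p0=[0.5, 0.5, 2.1, 0.0])
print(popt)   # recovered a1, a2, beat frequency, relative phase
```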

  16. Neutron Capture and the Antineutrino Yield from Nuclear Reactors.

    PubMed

    Huber, Patrick; Jaffke, Patrick

    2016-03-25

    We identify a new, flux-dependent correction to the antineutrino spectrum as produced in nuclear reactors. The abundance of certain nuclides, whose decay chains produce antineutrinos above the threshold for inverse beta decay, has a nonlinear dependence on the neutron flux, unlike the vast majority of antineutrino producing nuclides, whose decay rate is directly related to the fission rate. We have identified four of these so-called nonlinear nuclides and determined that they result in an antineutrino excess at low energies below 3.2 MeV, dependent on the reactor thermal neutron flux. We develop an analytic model for the size of the correction and compare it to the results of detailed reactor simulations for various real existing reactors, spanning 3 orders of magnitude in neutron flux. In a typical pressurized water reactor the resulting correction can reach ∼0.9% of the low energy flux which is comparable in size to other, known low-energy corrections from spent nuclear fuel and the nonequilibrium correction. For naval reactors the nonlinear correction may reach the 5% level by the end of cycle.

  17. Closure Report for Corrective Action Unit 356: Mud Pits and Disposal Sites, Nevada Test Site, Nevada with Errata Sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NNSA /NV

    2002-11-12

    This Closure Report (CR) has been prepared for Corrective Action Unit (CAU) 356, Mud Pits and Disposal Sites, in accordance with the Federal Facility Agreement and Consent Order. This CAU is located in Areas 3 and 20 of the Nevada Test Site (NTS) approximately 65 miles northwest of Las Vegas, Nevada. Corrective Action Unit 356 consists of seven Corrective Action Sites (CASs): 03-04-01, Area 3 Change House Septic System; 03-09-01, Mud Pit Spill Over; 03-09-03, Mud Pit; 03-09-04, Mud Pit; 03-09-05, Mud Pit; 20-16-01, Landfill; and 20-22-21, Drums. This CR identifies and rationalizes the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's (NNSA/NV's) recommendation that no further corrective action and closure in place is deemed necessary for CAU 356. This recommendation is based on the results of field investigation/closure activities conducted November 20, 2001, through January 3, 2002, and March 11 to 14, 2002. These activities were conducted in accordance with the Streamlined Approach for Environmental Restoration (SAFER) Plan for CAU 356. For CASs 03-09-01, 03-09-03, 20-16-01, and 20-22-21, analytes detected in soil during the corrective action investigation were evaluated against Preliminary Action Levels (PALs), and it was determined that no Contaminants of Concern (COCs) were present. Therefore, no further action is necessary for the soil at these CASs. For CASs 03-04-01, 03-09-04, and 03-09-05, analytes detected in soil during the corrective action investigation were evaluated against PALs, and total petroleum hydrocarbons (TPH) and radionuclides (i.e., americium-241 and/or plutonium-239/240) were identified as COCs. The nature, extent, and concentration of the TPH and radionuclide COCs were bounded by sampling and shown to be relatively immobile. Therefore, closure in place is recommended for these CASs in CAU 356. Further, use restrictions are not required at this CAU beyond the NTS use restrictions identified in the SAFER Plan. In addition, the septic tank associated with CAU 356 will be closed in accordance with applicable regulations.

  18. Design of general apochromatic drift-quadrupole beam lines

    NASA Astrophysics Data System (ADS)

    Lindstrøm, C. A.; Adli, E.

    2016-07-01

    Chromatic errors are normally corrected using sextupoles in regions of large dispersion. In low emittance linear accelerators, use of sextupoles can be challenging. Apochromatic focusing is a lesser-known alternative approach, whereby chromatic errors of Twiss parameters are corrected without the use of sextupoles, and has consequently been subject to renewed interest in advanced linear accelerator research. Proof of principle designs were first established by Montague and Ruggiero and developed more recently by Balandin et al. We describe a general method for designing drift-quadrupole beam lines of arbitrary order in apochromatic correction, including analytic expressions for emittance growth and other merit functions. Worked examples are shown for plasma wakefield accelerator staging optics and for a simple final focus system.

  19. Time delay of critical images in the vicinity of cusp point of gravitational-lens systems

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Zhdanov, V.

    2016-12-01

    We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zero-, first- and second-order approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The formula of the zero-order approximation was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). In the case of a general lens potential we derived the first-order correction thereto. If the potential is symmetric with respect to the cusp axis, then this correction is identically equal to zero. For this case, we obtained the second-order correction. The relations found are illustrated by a simple model example.

  20. Oxygen isotope corrections for online δ34S analysis

    USGS Publications Warehouse

    Fry, B.; Silva, S.R.; Kendall, C.; Anderson, R.K.

    2002-01-01

    Elemental analyzers have been successfully coupled to stable-isotope-ratio mass spectrometers for online measurements of the δ34S isotopic composition of plants, animals and soils. We found that the online technology for automated δ34S isotopic determinations did not yield reproducible oxygen isotopic compositions in the SO2 produced, and as a result calculated δ34S values were often 1–3‰ too high versus their correct values, particularly for plant and animal samples with high C/S ratio. Here we provide empirical and analytical methods for correcting the S isotope values for oxygen isotope variations, and further detail a new SO2-SiO2 buffering method that minimizes detrimental oxygen isotope variations in SO2.
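
    The isotopologue arithmetic behind such an oxygen correction can be sketched as follows: for SO2, the mass-66/mass-64 ion-current ratio is approximately (34S/32S) + 2*(18O/16O), so varying oxygen isotopic composition biases the apparent δ34S unless it is subtracted. The snippet below is only a hedged illustration of that arithmetic, not the paper's empirical correction; the reference ratios are nominal illustrative values and minor 33S/17O isotopologues are ignored.

```python
# Hedged sketch of an oxygen correction for SO2-based d34S:
#   R(66/64) ~= (34S/32S) + 2 * (18O/16O)
def d34s_from_so2(r66_64_sample, r66_64_standard, r18_sample, r18_standard):
    """Return delta-34S (per mil) after removing the 18O contribution from mass 66."""
    r34_sample = r66_64_sample - 2.0 * r18_sample
    r34_standard = r66_64_standard - 2.0 * r18_standard
    return (r34_sample / r34_standard - 1.0) * 1000.0

# If sample and standard SO2 carry identical oxygen, the correction cancels;
# an 18O mismatch of a few per mil otherwise shifts the apparent d34S.
print(d34s_from_so2(r66_64_sample=0.04830, r66_64_standard=0.04820,
                    r18_sample=0.0020, r18_standard=0.0020))
```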

  1. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
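
    The sweep-induced error and the double-sweep idea can be illustrated with a small simulation: the lock-in's low-pass filter lags behind a moving reference frequency, and averaging an up-sweep with a time-reversed down-sweep largely cancels the lag. This is only a hedged sketch, assuming a hypothetical one-pole device under test and a one-pole output filter, not the paper's algorithm or parameters.

```python
# Minimal simulation of a swept-reference lock-in and a double-sweep correction.
import numpy as np
from scipy.signal import lfilter

fs = 50_000.0                                  # sampling rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)              # one 2 s sweep

def system_gain(f):
    """Amplitude response of a hypothetical device under test (one-pole roll-off)."""
    return 1.0 / np.sqrt(1.0 + (f / 500.0) ** 2)

def one_pole_lowpass(x, fc):
    alpha = 1.0 / (1.0 + fs / (2.0 * np.pi * fc))
    return lfilter([alpha], [1.0, alpha - 1.0], x)   # y[n] = (1-a)y[n-1] + a x[n]

def swept_lockin(f_start, f_stop, fc=5.0):
    """Demodulate the device output while the reference frequency sweeps linearly."""
    f_inst = f_start + (f_stop - f_start) * t / t[-1]
    phase = 2.0 * np.pi * np.cumsum(f_inst) / fs
    x = system_gain(f_inst) * np.cos(phase)          # device output (phase shift ignored)
    i = one_pole_lowpass(x * np.cos(phase), fc)      # in-phase channel
    q = one_pole_lowpass(-x * np.sin(phase), fc)     # quadrature channel
    return 2.0 * np.hypot(i, q)                      # amplitude estimate vs time

a_up = swept_lockin(100.0, 2000.0)                   # up-sweep: estimate lags high
a_down = swept_lockin(2000.0, 100.0)                 # down-sweep: estimate lags low
a_corrected = 0.5 * (a_up + a_down[::-1])            # average at matched frequencies
```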

  2. Report: New analytical and statistical approaches for interpreting the relationships among environmental stressors and biomarkers

    EPA Science Inventory

    The broad topic of biomarker research has an often-overlooked component: the documentation and interpretation of the surrounding chemical environment and other meta-data, especially from visualization, analytical, and statistical perspectives (Pleil et al. 2014; Sobus et al. 2011...

  3. Adlerian and Analytic Theory: A Case Presentation.

    ERIC Educational Resources Information Center

    Myers, Kathleen M.; Croake, James W.

    1984-01-01

    Makes a theoretical comparison between Adlerian and analytic formulations of family assessment in a case study involving a recently divorced couple and a child with encopresis. Discussed the family relationship in terms of object relations theory emphasizing intrapsychic experience, and Adlerian theory emphasizing the purposes of behavior. (JAC)

  4. Chilled to the bone: embodied countertransference and unspoken traumatic memories.

    PubMed

    Zoppi, Luisa

    2017-11-01

    Starting from a deeply challenging experience of early embodied countertransference in a first encounter with a new patient, the author explores the issues it raised. Such moments highlight projective identification as well as what Stone (2006) has described as 'embodied resonance in the countertransference'. In these powerful experiences linear time and subject boundaries are altered, and this leads to central questions about analytic work. As well as discussing the uncanny experience at the very beginning of an analytic encounter and its challenges for the analytic field, the author considers 'the time horizon of analytic process' (Hogenson ), the relationship between 'moments of complexity and analytic boundaries' (Cambray ) and the role of mirror neurons in intersubjective experience. © 2017, The Society of Analytical Psychology.

  5. Bearing the unbearable: ancestral transmission through dreams and moving metaphors in the analytic field.

    PubMed

    Pickering, Judith

    2012-11-01

    This paper explores how untold and unresolved intergenerational trauma may be transmitted through unconscious channels of communication, manifesting in the dreams of descendants. Unwitting carriers for that which was too horrific for their ancestors to bear, descendants may enter analysis through an unconscious need to uncover past secrets, piece together ancestral histories before the keys to comprehending their terrible inheritance die with their forebears. They seek the relational containment of the analytic relationship to provide psychological conditions to bear the unbearable, know the unknowable, speak the unspeakable and redeem the unredeemable. In the case of 'Rachael', initial dreams gave rise to what Hobson (1984) called 'moving metaphors of self' in the analytic field. Dream imagery, projective and introjective processes in the transference-countertransference dynamics gradually revealed an unknown ancestral history. I clarify the back and forth process from dream to waking dream thoughts to moving metaphors and differentiate the moving metaphor from a living symbol. I argue that the containment of the analytic relationship nested within the security of the analytic space is a necessary precondition for such healing processes to occur. © 2012, The Society of Analytical Psychology.

  6. The role of critical ethnic awareness and social support in the discrimination-depression relationship among Asian Americans: path analysis.

    PubMed

    Kim, Isok

    2014-01-01

    This study used a path analytic technique to examine associations among critical ethnic awareness, racial discrimination, social support, and depressive symptoms. Using a convenience sample from an online survey of Asian American adults (N = 405), the study tested 2 main hypotheses: First, based on empowerment theory, critical ethnic awareness would be positively associated with racial discrimination experience; and second, based on the social support deterioration model, social support would partially mediate the relationship between racial discrimination and depressive symptoms. The path analysis showed that the proposed model was a good fit based on global fit indices, χ²(2) = 4.70, p = .10; root mean square error of approximation = 0.06; comparative fit index = 0.97; Tucker-Lewis index = 0.92; and standardized root mean square residual = 0.03. The examination of the study hypotheses demonstrated that critical ethnic awareness was directly associated (b = .11, p < .05) with the racial discrimination experience, whereas social support had a significant indirect effect (b = .48; bias-corrected 95% confidence interval [0.02, 1.26]) between the racial discrimination experience and depressive symptoms. The proposed path model illustrated that both critical ethnic awareness and social support are important mechanisms for explaining the relationship between racial discrimination and depressive symptoms among this sample of Asian Americans. This study highlights the usefulness of the critical ethnic awareness concept as a way to better understand how Asian Americans might perceive and recognize racial discrimination experiences in relation to their mental health consequences.
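
    The indirect-effect logic described above can be illustrated with a small simulation. The sketch below is not the study's analysis: the data are synthetic stand-ins for the three constructs, the paths are estimated with ordinary least squares, and the interval is a plain percentile bootstrap rather than the bias-corrected interval reported in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 405  # same sample size as the abstract, but simulated data

    # Synthetic stand-ins for the constructs (not the study's measurements):
    discrimination = rng.normal(size=n)
    social_support = -0.5 * discrimination + rng.normal(size=n)                       # a path
    depression = 0.3 * discrimination - 0.6 * social_support + rng.normal(size=n)     # b and c' paths

    def ols_coefs(y, X):
        """OLS coefficients for y ~ X (X already includes an intercept column)."""
        return np.linalg.lstsq(X, y, rcond=None)[0]

    def indirect_effect(x, m, y):
        a = ols_coefs(m, np.column_stack([np.ones_like(x), x]))[1]        # X -> M
        b = ols_coefs(y, np.column_stack([np.ones_like(x), m, x]))[1]     # M -> Y, controlling for X
        return a * b

    point = indirect_effect(discrimination, social_support, depression)

    # Percentile bootstrap for the indirect effect (the study reports a bias-corrected interval):
    boot = np.empty(2000)
    for i in range(2000):
        idx = rng.integers(0, n, n)
        boot[i] = indirect_effect(discrimination[idx], social_support[idx], depression[idx])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {point:.3f}, 95% percentile CI [{lo:.3f}, {hi:.3f}]")
    ```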

  7. Gas transfer velocities in lakes measured with SF6

    NASA Astrophysics Data System (ADS)

    Upstill-Goddard, R. C.; Watson, A. J.; Liss, P. S.; Liddicoat, M. I.

    1990-09-01

    The experimentally-determined relationships between air-water gas transfer velocity and windspeed are presented for two small, rapidly wind-mixed lakes in upland SW England. High-precision estimates of the gas transfer velocity, k, with daily resolution, were derived by monitoring the rate of evasion from the lakes of added sulphur hexafluoride, SF6, an inert, sparingly soluble, man-made gaseous tracer. Corresponding data on in situ wind speeds and directions, and surface water temperatures were automatically logged as a time series of 4-min averages, using a battery-powered device. The results significantly extend the existing field database and show a strong dependence of k, normalized to CO2 at 20°C, on windspeed in the range ~2–13 m s-1, corrected to a height of 10 m. No correlation was found between k and wind direction. The data are fitted with two least-squares straight lines which intersect at a windspeed of 9.5±3 m/s (at z = 10 m), beyond which significant steepening of the k vs. windspeed relationship implies a transition from the "rough surface" to "breaking wave" regime, in broad agreement with previous conclusions. Nevertheless, the data scatter about the fitted lines exceeds that which would be predicted from the associated analytical uncertainties. This implies the observed relationships between k and windspeed are not unique and therefore that additional factors must be important in determining k.
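
    A two-segment least-squares fit of the kind described above can be sketched as follows. The wind speed and k values below are made up for illustration, and the breakpoint is chosen by a simple grid search over candidate wind speeds rather than by whatever procedure the authors used.

    ```python
    import numpy as np

    # Illustrative (not the paper's) data: wind speed u10 (m/s) and gas transfer velocity k (cm/h).
    u10 = np.array([2.1, 3.0, 4.2, 5.5, 6.3, 7.1, 8.0, 8.8, 9.6, 10.4, 11.2, 12.1, 12.9])
    k = np.array([1.0, 1.8, 2.5, 3.4, 4.0, 4.9, 5.6, 6.2, 8.5, 11.0, 14.2, 17.5, 21.0])

    def two_line_fit(x, y, breakpoint):
        """Least-squares lines below and above the breakpoint; returns total SSE and the two fits."""
        fits, sse = [], 0.0
        for mask in (x <= breakpoint, x > breakpoint):
            p = np.polyfit(x[mask], y[mask], 1)
            sse += np.sum((y[mask] - np.polyval(p, x[mask])) ** 2)
            fits.append(p)
        return sse, fits

    candidates = np.linspace(u10[2], u10[-3], 50)        # keep at least a few points on each side
    best_bp = min(candidates, key=lambda b: two_line_fit(u10, k, b)[0])
    _, (low_fit, high_fit) = two_line_fit(u10, k, best_bp)
    print(f"breakpoint ~ {best_bp:.1f} m/s, low slope {low_fit[0]:.2f}, high slope {high_fit[0]:.2f}")
    ```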

  8. COMMENT ON "PERCHLORATE IDENTIFICATION IN FERTILIZERS" AND THE SUBSEQUENT ADDITION/CORRECTION [LETTER TO EDITOR]

    EPA Science Inventory

    Perchlorate contamination has been reported in several fertilizer materials and not just in mined Chile saltpeter, where it is a well-known natural impurity. To survey fertilizers for perchlorate, two analytical techniques have been applied to 45 products that span agricultural, ...

  9. ANALYSES OF FISH TISSUE BY VACUUM DISTILLATION/GAS CHROMATOGRAPHY/MASS SPECTROMETRY

    EPA Science Inventory

    The analyses of fish tissue using VD/GC/MS with surrogate-based matrix corrections are described. Techniques for equilibrating surrogate and analyte spikes with a tissue matrix are presented, and equilibrated spiked samples are used to document method performance. The removal of a...

  10. Electromagnetic corrections to the hadronic vacuum polarization of the photon within QEDL and QEDM

    NASA Astrophysics Data System (ADS)

    Bussone, Andrea; Della Morte, Michele; Janowski, Tadeusz

    2018-03-01

    We compute the leading QED corrections to the hadronic vacuum polarization (HVP) of the photon, relevant for the determination of leptonic anomalous magnetic moments, a_l. We work in the electroquenched approximation and use dynamical QCD configurations generated by the CLS initiative with two degenerate flavors of nonperturbatively O(a)-improved Wilson fermions. We consider QEDL and QEDM to deal with the finite-volume zero modes. We compare results for the Wilson loops with exact analytical determinations. In addition we make sure that the volumes and photon masses used in QEDM are such that the correct dispersion relation is reproduced by the energy levels extracted from the charged-pion two-point functions. Finally we compare results for pion masses and the HVP between QEDL and QEDM. For the vacuum polarization, corrections with respect to the pure QCD case, at fixed pion masses, turn out to be at the percent level.

  11. High-order corrections on the laser cooling limit in the Lamb-Dicke regime.

    PubMed

    Yi, Zhen; Gu, Wen-Ju

    2017-01-23

    We investigate high-order corrections in the Lamb-Dicke (LD) parameter to the cooling limit in the double electromagnetically induced transparency (EIT) cooling scheme. By exploiting quantum interference, the single-phonon heating mechanism vanishes and the system evolves to a double dark state, from which we obtain the mechanical occupation of the single-phonon excited state. In addition, the further correction induced by two-phonon heating transitions is included to achieve a more accurate cooling limit. There exist two pathways of two-phonon heating transitions: direct two-phonon excitation from the dark state and further excitation from the single-phonon excited state. Adding up these two contributions, the analytical predictions show good agreement with numerical results. Moreover, we find that the two pathways can destructively interfere with each other, leading to the elimination of two-phonon heating transitions and achieving a lower cooling limit.

  12. Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.

    PubMed

    Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A

    2016-01-01

    Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. Micro-architectures of such biomaterials determine their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely the rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. It was found that analytical solutions and numerical results show very good agreement, particularly for smaller values of apparent density. The elastic moduli predicted by analytical and numerical models were also in very good agreement with experimental observations. While in excellent agreement with each other, analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero and infinity, the rhombicuboctahedron unit cell respectively approaches the octahedron (or truncated cube) and cube unit cells. For those limits, the analytical solutions presented here were found to approach the analytic solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the analytical solutions obtained for several other types of porous biomaterials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. The power-proportion method for intracranial volume correction in volumetric imaging analysis.

    PubMed

    Liu, Dawei; Johnson, Hans J; Long, Jeffrey D; Magnotta, Vincent A; Paulsen, Jane S

    2014-01-01

    In volumetric brain imaging analysis, volumes of brain structures are typically assumed to be proportional or linearly related to intracranial volume (ICV). However, evidence abounds that many brain structures have power law relationships with ICV. To take this relationship into account in volumetric imaging analysis, we propose a power-law-based method, the power-proportion method, for ICV correction. The performance of the new method is demonstrated using data from the PREDICT-HD study.
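
    The core idea, correcting by ICV raised to an empirically estimated power rather than by ICV itself, can be sketched as follows. The data are simulated, the exponent is estimated by a simple log-log regression, and the published method may estimate it differently (for example, jointly with other covariates), so this is only an illustration of the power-proportion principle.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    icv = rng.normal(1500.0, 150.0, n)     # simulated intracranial volumes (cm^3)

    # Simulated structure volumes following an assumed power law in ICV (exponent 0.7) plus noise:
    volume = 0.02 * icv ** 0.7 * np.exp(rng.normal(0.0, 0.05, n))

    # Estimate the exponent from a log-log regression: log V = a + b * log ICV.
    b, a = np.polyfit(np.log(icv), np.log(volume), 1)

    # Power-proportion style correction: divide by ICV**b rather than by ICV itself,
    # so corrected volumes are (approximately) independent of head size.
    corrected = volume / icv ** b

    print(f"estimated exponent b = {b:.2f}")
    print("corr(raw volume, ICV)       =", round(float(np.corrcoef(volume, icv)[0, 1]), 2))
    print("corr(corrected volume, ICV) =", round(float(np.corrcoef(corrected, icv)[0, 1]), 2))
    ```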

  14. The Association between Perceptions of Distributive Justice and Procedural Justice with Support of Treatment and Support of Punishment among Correctional Staff

    ERIC Educational Resources Information Center

    Lambert, Eric G.; Hogan, Nancy L.; Barton-Bellessa, Shannon M.

    2011-01-01

    Previous literature exploring the relationship between correctional officer orientations toward treatment and punishment is inconsistent at best. One rarely studied aspect is the influence of distributive and procedural justice on correctional staff support for treatment and punishment. For this study, ordinary least squares regression analysis of…

  15. Representations of mechanical assembly sequences

    NASA Technical Reports Server (NTRS)

    Homem De Mello, Luiz S.; Sanderson, Arthur C.

    1991-01-01

    Five types of representations for assembly sequences are reviewed: the directed graph of feasible assembly sequences, the AND/OR graph of feasible assembly sequences, the set of establishment conditions, and two types of sets of precedence relationships (precedence relationships between the establishment of one connection between parts and the establishment of another connection, and precedence relationships between the establishment of one connection and states of the assembly process). The mappings of one representation into the others are established, and the correctness and completeness of these representations are demonstrated. The results presented are needed in the proof of correctness and completeness of algorithms for the generation of mechanical assembly sequences.

  16. Dynamic response of gold nanoparticle chemiresistors to organic analytes in aqueous solution.

    PubMed

    Müller, Karl-Heinz; Chow, Edith; Wieczorek, Lech; Raguse, Burkhard; Cooper, James S; Hubble, Lee J

    2011-10-28

    We investigate the response dynamics of 1-hexanethiol-functionalized gold nanoparticle chemiresistors exposed to the analyte octane in aqueous solution. The dynamic response is studied as a function of the analyte-water flow velocity, the thickness of the gold nanoparticle film and the analyte concentration. A theoretical model for analyte limited mass-transport is used to model the analyte diffusion into the film, the partitioning of the analyte into the 1-hexanethiol capping layers and the subsequent swelling of the film. The degree of swelling is then used to calculate the increase of the electron tunnel resistance between adjacent nanoparticles which determines the resistance change of the film. In particular, the effect of the nonlinear relationship between resistance and swelling on the dynamic response is investigated at high analyte concentration. Good agreement between experiment and the theoretical model is achieved. This journal is © the Owner Societies 2011
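
    The nonlinear link between film swelling and resistance that the model exploits can be sketched with a lumped (zero-dimensional) toy version: first-order analyte uptake drives the inter-particle gap change, and the tunnel resistance grows exponentially with that gap. The constants below are invented for illustration, and the sketch ignores the diffusion and flow-cell transport that the paper actually models.

    ```python
    import numpy as np

    # Lumped toy model: first-order analyte uptake drives the gap change delta(t), and the
    # tunnel resistance grows exponentially with that gap change.
    beta = 10.0       # tunneling decay constant per nm of gap increase (assumed)
    delta_eq = 0.05   # equilibrium gap increase (nm) at this analyte concentration (assumed)
    tau = 30.0        # uptake time constant (s), standing in for diffusion/partitioning (assumed)

    t = np.linspace(0.0, 200.0, 401)
    delta = delta_eq * (1.0 - np.exp(-t / tau))    # swelling transient
    dR_over_R0 = np.exp(beta * delta) - 1.0        # nonlinear relative resistance change

    # At high concentrations the exponential makes dR/R0 rise disproportionately to the swelling:
    print("final swelling (nm):", round(float(delta[-1]), 3))
    print("final dR/R0        :", round(float(dR_over_R0[-1]), 3))
    ```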

  17. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of selected carbamate pesticides in water by high-performance liquid chromatography

    USGS Publications Warehouse

    Werner, S.L.; Johnson, S.M.

    1994-01-01

    As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
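
    The external-standard quantitation step described above (a calibration curve from analyte standards, a fixed reporting level, and no recovery correction) can be sketched as follows; the standard concentrations and peak areas are hypothetical.

    ```python
    import numpy as np

    # External-standard calibration: fit detector response vs. standard concentration,
    # invert the line for samples, and apply the 0.5 ug/L reporting level. No recovery
    # correction is applied, consistent with the method described above. Values are hypothetical.
    std_conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0])              # ug/L
    std_area = np.array([120.0, 250.0, 1230.0, 2480.0, 4950.0])  # integrated peak areas

    slope, intercept = np.polyfit(std_conc, std_area, 1)

    def quantify(peak_area, reporting_level=0.5):
        conc = (peak_area - intercept) / slope
        return conc if conc >= reporting_level else None         # below reporting level: not reported

    print(quantify(1800.0))   # reported concentration (ug/L)
    print(quantify(600.0))    # None: below the reporting level in this sketch
    ```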

  18. Precision and bias of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1983; and January 1980 through September 1984

    USGS Publications Warehouse

    Schroder, L.J.; Bricker, A.W.; Willoughby, T.C.

    1985-01-01

    Blind-audit samples with known analyte concentrations have been prepared by the U.S. Geological Survey and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the analyte concentrations reported by the National Atmospheric Deposition Program and National Trends Network and the known analyte concentrations have been calculated, and the bias has been determined. Calcium, magnesium, sodium, and chloride were biased at the 99-percent confidence limit; potassium and sulfate were unbiased at the 99-percent confidence limit, for 1983 results. Relative percent differences between the measured and known analyte concentrations for calcium, magnesium, sodium, potassium, chloride, and sulfate have been calculated for 1983. The median relative percent difference for calcium was 17.0; magnesium was 6.4; sodium was 10.8; potassium was 6.4; chloride was 17.2; and sulfate was -5.3. These relative percent differences should be used to correct the 1983 data before user analysis of the data. Variances have been calculated for calcium, magnesium, sodium, potassium, chloride, and sulfate determinations. These variances should be applicable to natural-sample analyte concentrations reported by the National Atmospheric Deposition Program and National Trends Network for calendar year 1983. (USGS)
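
    A minimal sketch of the bias and relative-percent-difference bookkeeping described above, using invented audit values; note that the report may define the relative percent difference against the mean of the paired values rather than against the known concentration, as assumed here.

    ```python
    import numpy as np

    # Relative percent difference of a reported concentration from the known (blind-audit)
    # value, plus a simple variance of the differences. The audit values are invented.
    def relative_percent_difference(reported, known):
        return 100.0 * (reported - known) / known

    known = np.array([0.50, 0.50, 1.00, 1.00, 2.00])       # mg/L
    reported = np.array([0.58, 0.56, 1.15, 1.12, 2.30])    # mg/L

    rpd = relative_percent_difference(reported, known)
    print("median relative percent difference:", round(float(np.median(rpd)), 1))
    print("variance of (reported - known)    :", round(float(np.var(reported - known, ddof=1)), 4))
    ```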

  19. Relativistic algorithm for time transfer in Mars missions under IAU Resolutions: an analytic approach

    NASA Astrophysics Data System (ADS)

    Pan, Jun-Yang; Xie, Yi

    2015-02-01

    With tremendous advances in modern techniques, Einstein's general relativity has become an indispensable part of deep space missions. We investigate the relativistic algorithm for time transfer between the proper time τ of the onboard clock and the Geocentric Coordinate Time, which extends some previous works by including the effects of the propagation of electromagnetic signals. In order to evaluate the implicit algebraic equations and integrals in the model, we take an analytic approach to work out their approximate values. This analytic model is suitable for an onboard computer, whose capability to perform calculations is limited. Taking an orbiter like Yinghuo-1 as an example, we find that the contributions of the Sun, the ground station and the spacecraft dominate the outcomes of the relativistic corrections to the model.

  20. Quantitative phase imaging method based on an analytical nonparaxial partially coherent phase optical transfer function.

    PubMed

    Bao, Yijun; Gaylord, Thomas K

    2016-11-01

    Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.

  1. An explicit analytical solution for sound propagation in a three-dimensional penetrable wedge with small apex angle.

    PubMed

    Petrov, Pavel S; Sturm, Frédéric

    2016-03-01

    A problem of sound propagation in a shallow-water waveguide with a weakly sloping penetrable bottom is considered. The adiabatic mode parabolic equations are used to approximate the solution of the three-dimensional (3D) Helmholtz equation by modal decomposition of the acoustic pressure field. The mode amplitudes satisfy parabolic equations that admit analytical solutions in the special case of the 3D wedge. Using the analytical formula for modal amplitudes, an explicit and remarkably simple expression for the acoustic pressure in the wedge is obtained. The proposed solution is validated by the comparison with a solution of the 3D penetrable wedge problem obtained using a fully 3D parabolic equation that includes a leading-order cross term correction.

  2. Mechanical Properties of Additively Manufactured Thick Honeycombs.

    PubMed

    Hedayati, Reza; Sadighi, Mojtaba; Mohammadi Aghdam, Mohammad; Zadpoor, Amir Abbas

    2016-07-23

    Honeycombs resemble the structure of a number of natural and biological materials such as cancellous bone, wood, and cork. Thick honeycombs could also be used for energy absorption applications. Moreover, studying the mechanical behavior of honeycombs under in-plane loading could help in understanding the mechanical behavior of more complex 3D tessellated structures such as porous biomaterials. In this paper, we study the mechanical behavior of thick honeycombs made using additive manufacturing techniques that allow for fabrication of honeycombs with arbitrary and precisely controlled thickness. Thick honeycombs with different wall thicknesses were produced from polylactic acid (PLA) using fused deposition modelling, i.e., an additive manufacturing technique. The samples were mechanically tested in-plane under compression to determine their mechanical properties. We also obtained exact analytical solutions for the stiffness matrix of thick hexagonal honeycombs using both Euler-Bernoulli and Timoshenko beam theories. The stiffness matrix was then used to derive analytical relationships that describe the elastic modulus, yield stress, and Poisson's ratio of thick honeycombs. Finite element models were also built for computational analysis of the mechanical behavior of thick honeycombs under compression. The mechanical properties obtained using our analytical relationships were compared with experimental observations and computational results as well as with analytical solutions available in the literature. It was found that the analytical solutions presented here are in good agreement with experimental and computational results even for very thick honeycombs, whereas the analytical solutions available in the literature show a large deviation from experimental observations, computational results, and our analytical solutions.
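
    The reason Timoshenko beam theory is needed for thick cell walls, rather than Euler-Bernoulli theory alone, can be illustrated with the standard cantilever deflection formulas; the geometry and material values below are illustrative and are not taken from the paper.

    ```python
    import numpy as np

    # Compare the tip deflection of a cantilevered wall segment under a transverse load with
    # and without the shear (Timoshenko) term. All values are illustrative.
    E = 3.5e9             # Young's modulus of PLA (Pa), typical order of magnitude
    nu = 0.36             # Poisson's ratio (assumed)
    G = E / (2.0 * (1.0 + nu))
    kappa = 5.0 / 6.0     # shear correction factor for a rectangular cross section
    L = 5.0e-3            # wall length (m)
    b = 10.0e-3           # wall depth (m)
    P = 1.0               # transverse load (N)

    for t in (0.5e-3, 1.0e-3, 2.0e-3):            # wall thickness (m)
        A = b * t
        I = b * t ** 3 / 12.0
        d_bending = P * L ** 3 / (3.0 * E * I)    # Euler-Bernoulli deflection
        d_shear = P * L / (kappa * G * A)         # additional Timoshenko (shear) deflection
        print(f"t = {t * 1e3:.1f} mm: shear term adds {d_shear / d_bending:.1%} to the bending deflection")
    ```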

  3. Transformational Leadership and Organizational Citizenship Behavior: A Meta-Analytic Test of Underlying Mechanisms

    PubMed Central

    Nohe, Christoph; Hertel, Guido

    2017-01-01

    Based on social exchange theory, we examined and contrasted attitudinal mediators (affective organizational commitment, job satisfaction) and relational mediators (trust in leader, leader-member exchange; LMX) of the positive relationship between transformational leadership and organizational citizenship behavior (OCB). Hypotheses were tested using meta-analytic path models with correlations from published meta-analyses (761 samples with 227,419 individuals overall). When testing single-mediator models, results supported our expectations that each of the mediators explained the relationship between transformational leadership and OCB. When testing a multi-mediator model, LMX was the strongest mediator. When testing a model with a latent attitudinal mechanism and a latent relational mechanism, the relational mechanism was the stronger mediator of the relationship between transformational leadership and OCB. Our findings help to better understand the underlying mechanisms of the relationship between transformational leadership and OCB. PMID:28848478

  4. Applying Behavior Analytic Procedures to Effectively Teach Literacy Skills in the Classroom

    ERIC Educational Resources Information Center

    Joseph, Laurice M.; Alber-Morgan, Sheila; Neef, Nancy

    2016-01-01

    The purpose of this article is to discuss the application of behavior analytic procedures for advancing and evaluating methods for teaching literacy skills in the classroom. Particularly, applied behavior analysis has contributed substantially to examining the relationship between teacher behavior and student literacy performance. Teacher…

  5. Interpersonal Mindfulness Informed by Functional Analytic Psychotherapy: Findings from a Pilot Randomized Trial

    ERIC Educational Resources Information Center

    Bowen, Sarah; Haworth, Kevin; Grow, Joel; Tsai, Mavis; Kohlenberg, Robert

    2012-01-01

    Functional Analytic Psychotherapy (FAP; Kohlenberg & Tsai, 1991) aims to improve interpersonal relationships through skills intended to increase closeness and connection. The current trial assessed a brief mindfulness-based intervention informed by FAP, in which an interpersonal element was added to a traditional intrapersonal mindfulness…

  6. Resilience: A Meta-Analytic Approach

    ERIC Educational Resources Information Center

    Lee, Ji Hee; Nam, Suk Kyung; Kim, A-Reum; Kim, Boram; Lee, Min Young; Lee, Sang Min

    2013-01-01

    This study investigated the relationship between psychological resilience and its relevant variables by using a meta-analytic method. The results indicated that the largest effect on resilience was found to stem from the protective factors, a medium effect from risk factors, and the smallest effect from demographic factors. (Contains 4 tables.)

  7. Analytic and Heuristic Processing Influences on Adolescent Reasoning and Decision-Making.

    ERIC Educational Resources Information Center

    Klaczynski, Paul A.

    2001-01-01

    Examined the relationship between age and the normative/descriptive gap--the discrepancy between actual reasoning and traditional standards for reasoning. Found that middle adolescents performed closer to normative ideals than early adolescents. Factor analyses suggested that performance was based on two processing systems, analytic and heuristic…

  8. Promoting Efficacy Research on Functional Analytic Psychotherapy

    ERIC Educational Resources Information Center

    Maitland, Daniel W. M.; Gaynor, Scott T.

    2012-01-01

    Functional Analytic Psychotherapy (FAP) is a form of therapy grounded in behavioral principles that utilizes therapist reactions to shape target behavior. Despite a growing literature base, there is a paucity of research to establish the efficacy of FAP. As a general approach to psychotherapy, and how the therapeutic relationship produces change,…

  9. Migraine and its relationship with dietary habits in women

    PubMed Central

    Nazari, Fatemeh; Eghbali, Maryam

    2012-01-01

    Background: Migraine is defined as a chronic disabling condition which influences all physical, mental, and social dimensions of quality of life. Some 12-15% of the world population suffers from migraine. The disease is more common among women. The onset, frequency, duration, and severity of migraine attacks may be affected by other predisposing factors including nutrition. Therefore, determining these factors can greatly assist in identifying the disease and developing preventive measures. Considering the importance of nutrition in maintaining and promoting health and preventing diseases, the present study was conducted to determine the relationship between headaches and the nutritional habits (frequency and type of consumed foods) of women suffering from migraine. Materials and Methods: This analytical case-control study was conducted on 170 women (in two groups of 85) selected by convenience sampling for the case group and random sampling for the control group. The data collection tool was a three-section questionnaire including personal information, headache features, and nutritional habits. The questionnaire was completed in an interview performed by the researcher. The data were then analyzed in SPSS using descriptive statistical tests (frequency distribution, mean, and standard deviation) and inferential tests (chi-square, independent t, Mann-Whitney, and Spearman’s correlation tests). Findings: The results demonstrated a significant relationship between headache and some food items including proteins, carbohydrates, fat, fruits and vegetables. To be more precise, there were significant relationships between headaches and the frequency of consumption of red meat (p = 0.01), white meat (p = 0.002), cereals (p = 0.0005), vegetables (p = 0.009), fruits (p = 0.0005), salad dressing (p = 0.03), and eggs (p = 0.001). Moreover, a significant relationship existed between headache and the type of consumed oil, meat, dairy products, fruits, and vegetables (p < 0.05). Conclusions: It is necessary to put more emphasis on the significance of correcting dietary patterns in order to prevent headache attacks and reduce the complications arising from drug consumption in migraine patients. The social and economic efficiency of the patients will thus be enhanced. PMID:23833603

  10. Modeling and analysis of a novel planar eddy current damper

    NASA Astrophysics Data System (ADS)

    Zhang, He; Kou, Baoquan; Jin, Yinxi; Zhang, Lu; Zhang, Hailin; Li, Liyi

    2014-05-01

    In this paper, a novel 2-DOF permanent magnet planar eddy current damper is proposed, of which the stator is made of a copper plate and the mover is composed of two orthogonal 1-D permanent magnet arrays with a double sided structure. The main objective of the planar eddy current damper is to provide two orthogonal damping forces for dynamic systems like the 2-DOF high precision positioning system. Firstly, the basic structure and the operating principle of the planar damper are introduced. Secondly, the analytical model of the planar damper is established where the magnetic flux density distribution of the permanent magnet arrays is obtained by using the equivalent magnetic charge method and the image method. Then, the analytical expressions of the damping force and damping coefficient are derived. Lastly, to verify the analytical model, the finite element method (FEM) is adopted for calculating the flux density and a planar damper prototype is manufactured and thoroughly tested. The results from FEM and experiments are in good agreement with the ones from the analytical expressions indicating that the analytical model is reasonable and correct.

  11. The Relationship Between Magnet Designation, Electronic Health Record Adoption, and Medicare Meaningful Use Payments.

    PubMed

    Lippincott, Christine; Foronda, Cynthia; Zdanowicz, Martin; McCabe, Brian E; Ambrosia, Todd

    2017-08-01

    The objective of this study was to examine the relationship between nursing excellence and electronic health record adoption. Of 6582 US hospitals, 4939 were eligible for the Medicare Electronic Health Record Incentive Program, and 6419 were eligible for evaluation on the HIMSS Analytics Electronic Medical Record Adoption Model. Of 399 Magnet hospitals, 330 were eligible for the Medicare Electronic Health Record Incentive Program, and 393 were eligible for evaluation in the HIMSS Analytics Electronic Medical Record Adoption Model. Meaningful use attestation was defined as receipt of a Medicare Electronic Health Record Incentive Program payment. Adoption of an electronic health record was defined as Level 6 and/or 7 on the HIMSS Analytics Electronic Medical Record Adoption Model. Logistic regression showed that Magnet-designated hospitals were more likely to attest to Meaningful Use than non-Magnet hospitals (odds ratio = 3.58, P < .001) and were more likely to adopt electronic health records than non-Magnet hospitals (Level 6 only: odds ratio = 3.68, P < .001; Level 6 or 7: odds ratio = 4.02, P < .001). This study suggested a positive relationship between Magnet status and electronic health record use, which involves earning financial incentives for successful adoption. Continued investigation is needed to examine the relationships between the quality of nursing care, electronic health record usage, financial implications, and patient outcomes.
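
    Odds ratios of this kind come from logistic regression; with a single binary predictor, the unadjusted odds ratio can be computed directly from a 2x2 table, as the sketch below shows with invented counts (the study's models may also include covariates).

    ```python
    import numpy as np

    # Hypothetical 2x2 table (counts invented, not the study's data):
    #                 attested   did not attest
    # Magnet              290              40
    # non-Magnet         2900            1600
    table = np.array([[290.0, 40.0],
                      [2900.0, 1600.0]])

    odds_ratio = (table[0, 0] / table[0, 1]) / (table[1, 0] / table[1, 1])

    # Wald 95% confidence interval on the log odds ratio:
    se_log_or = np.sqrt((1.0 / table).sum())
    ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

    print(f"unadjusted OR = {odds_ratio:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
    ```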

  12. Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow

    NASA Technical Reports Server (NTRS)

    Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.

    1981-01-01

    Heat transfer in cavity flow is analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for the temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in cavity flow at a high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.

  13. Human-machine interaction to disambiguate entities in unstructured text and structured datasets

    NASA Astrophysics Data System (ADS)

    Ward, Kevin; Davenport, Jack

    2017-05-01

    Creating entity network graphs is a manual, time-consuming process for an intelligence analyst. Beyond the traditional big data problems of information overload, individuals are often referred to by multiple names and shifting titles as they advance in their organizations over time, which quickly makes simple string or phonetic alignment methods for entities insufficient. Conversely, automated methods for relationship extraction and entity disambiguation typically produce questionable results with no way for users to vet results, correct mistakes or influence the algorithm's future results. We present an entity disambiguation tool, DRADIS, which aims to bridge the gap between human-centric and machine-centric methods. DRADIS automatically extracts entities from multi-source datasets and models them as a complex set of attributes and relationships. Entities are disambiguated across the corpus using a hierarchical model executed in Spark, allowing it to scale to operational-scale data. Resolution results are presented to the analyst complete with sourcing information for each mention and relationship, allowing analysts to quickly vet the correctness of results as well as correct mistakes. Corrected results are used by the system to refine the underlying model, allowing analysts to optimize the general model to better deal with their operational data. Providing analysts with the ability to validate and correct the model to produce a system they can trust enables them to better focus their time on producing higher-quality analysis products.

  14. Predictors of Urinary Bisphenol A and Phthalate Metabolite Concentrations in Mexican Children

    PubMed Central

    Lewis, Ryan C.; Meeker, John D.; Peterson, Karen E.; Lee, Joyce M.; Pace, Gerry G.; Cantoral, Alejandra; Téllez-Rojo, Martha Maria

    2013-01-01

    Exposure to endocrine disrupting chemicals such as bisphenol A (BPA) and phthalates is prevalent among children and adolescents, but little is known regarding important sources of exposure at these sensitive life stages. In this study, we measured urinary concentrations of BPA and nine phthalate metabolites in 108 Mexican children aged 8–13 years. Associations of age, time of day, and questionnaire items on external environment, water use, and food container use with specific gravity-corrected urinary concentrations were assessed, as were questionnaire items concerning the use of 17 personal care products in the past 48-hr. As a secondary aim, third trimester urinary concentrations were measured in 99 mothers of these children, and the relationship between specific gravity-corrected urinary concentrations at these two time points was explored. After adjusting for potential confounding by other personal care product use in the past 48-hr, there were statistically significant (p <0.05) positive associations in boys for cologne/perfume use and monoethyl phthalate (MEP), mono(3-carboxypropyl) phthalate (MCPP), mono(2-ethyl-5-hydroxyhexyl) phthalate (MEHHP), and mono(2-ethyl-5-oxohexyl) phthalate (MEOHP), and in girls for colored cosmetics use and mono-n-butyl phthalate (MBP), mono(2-ethylhexyl) phthalate (MEHP), MEHHP, MEOHP, and mono(2-ethyl-5-carboxypentyl) phthalate (MECPP), conditioner use and MEP, deodorant use and MEP, and other hair products use and MBP. There was a statistically significant positive trend for the number of personal care products used in the past 48-hr and log-MEP in girls. However, there were no statistically significant associations between the analytes and the other questionnaire items and there were no strong correlations between the analytes measured during the third trimester and at 8–13 years of age. We demonstrated that personal care product use is associated with exposure to multiple phthalates in children. Due to rapid development, children may be susceptible to impacts from exposure to endocrine disrupting chemicals; thus, reduced or delayed use of certain personal care products among children may be warranted. PMID:24041567
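
    Specific-gravity correction of urinary concentrations, as mentioned above, is conventionally done by scaling each measurement to a reference specific gravity. The sketch below uses the common convention with an assumed reference value of 1.024; the study may use a different reference (often the sample median).

    ```python
    # Specific-gravity (SG) correction of a urinary analyte concentration. The reference
    # SG of 1.024 is an assumed placeholder; studies often use their own sample median.
    def sg_correct(concentration, sg_sample, sg_reference=1.024):
        """Scale a measured concentration to what it would be at the reference SG."""
        return concentration * (sg_reference - 1.0) / (sg_sample - 1.0)

    # Example: 45 ng/mL MEP measured in a dilute urine sample with SG = 1.012
    print(round(sg_correct(45.0, 1.012), 1), "ng/mL at the reference specific gravity")
    ```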

  15. The Impact of Marketing Actions on Relationship Quality in the Higher Education Sector in Jordan

    ERIC Educational Resources Information Center

    Al-Alak, Basheer A. M.

    2006-01-01

    This field/analytical study examined the marketing actions (antecedents) and performance (consequences) of relationship quality in a higher education setting. To analyze data collected from a random sample of 271 undergraduate students at AL-Zaytoonah Private University of Jordan, the linear structural relationship (LISREL) model was used to…

  16. Examining the Early Evidence for Self-Directed Marriage and Relationship Education: A Meta-Analytic Study

    ERIC Educational Resources Information Center

    McAllister, Shelece; Duncan, Stephen F.; Hawkins, Alan J.

    2012-01-01

    This meta-analysis examines the efficacy of self-directed marriage and relationship education (MRE) programs on relationship quality and communication skills. Programs combining traditional face-to-face learning with self-directed elements are also examined, and traditional programs' effectiveness is included as a comparison point. Sixteen studies…

  17. Health Care Transformation: A Strategy Rooted in Data and Analytics.

    PubMed

    Koster, John; Stewart, Elizabeth; Kolker, Eugene

    2016-02-01

    Today's consumers purchasing any product or service are armed with information and have high expectations. They expect service providers and payers to know about their unique needs. Data-driven decisions can help organizations meet those expectations and fulfill those needs. Health care, however, is not strictly a retail relationship: the sacred trust between patient and doctor, the clinician-patient relationship, must be preserved. The opportunities and challenges created by the digitization of health care are at the crux of the most crucial strategic decisions for academic medicine. A transformational vision grounded in data and analytics must guide health care decisions and actions. In this Commentary, the authors describe three examples of the transformational force of data and analytics to improve health care in order to focus attention on academic medicine's vital role in guiding the needed changes.

  18. On analytic modeling of lunar perturbations of artificial satellites of the earth

    NASA Astrophysics Data System (ADS)

    Lane, M. T.

    1989-06-01

    Two different procedures for analytically modeling the effects of the moon's direct gravitational force on artificial earth satellites are discussed from theoretical and numerical viewpoints. One is developed using classical series expansions of inclination and eccentricity for both the satellite and the moon, and the other employs the method of averaging. Both solutions are seen to have advantages, but it is shown that while the former is more accurate in special situations, the latter is quicker and more practical for the general orbit determination problem where observed data are used to correct the orbit in near real time.

  19. Analytic theory for the selection of Saffman-Taylor fingers in the presence of thin film effects

    NASA Technical Reports Server (NTRS)

    Tanveer, S.

    1990-01-01

    The present analytic theory for the width selection of Saffman-Taylor (1958) fingers in the presence of the thin film effect establishes that, in the limit of a small capillary number and a small gap-to-width ratio, fingers whose relative width is smaller than 1/2 are possible. It is established that a fully nonlinear analysis is required for this problem in order to obtain even the correct (and rather preliminary) scaling law. The way in which the selection rule for arbitrarily small capillary number is obtainable is also presented.

  20. [The taking and transport of biological samples].

    PubMed

    Kerwat, Klaus; Kerwat, Martina; Eberhart, Leopold; Wulf, Hinnerk

    2011-05-01

    The results of microbiological tests are the foundation for targeted therapy and the basis for monitoring infections. The quality of each and every laboratory finding depends not only on an error-free analytical process; the pre-analytical handling procedures are of particular importance. They encompass all factors and influences prior to the actual analysis. These include the correct timepoint for sample taking, the packaging and the rapid transport of the material to be investigated. Errors in the pre-analytical processing are the most frequent reasons for inappropriate findings. © Georg Thieme Verlag Stuttgart · New York.

  1. Kennedy Space Center (KSC) Launch Complex 39 (LC-39) Gaseous Hydrogen (GH2) Vent Arm Behavior Prediction Model Review Technical Assessment Report

    NASA Technical Reports Server (NTRS)

    Wilson, Timmy R.; Beech, Geoffrey; Johnston, Ian

    2009-01-01

    The NESC Assessment Team reviewed a computer simulation of the LC-39 External Tank (ET) GH2 Vent Umbilical system developed by United Space Alliance (USA) for the Space Shuttle Program (SSP) and designated KSC Analytical Tool ID 451 (KSC AT-451). The team verified that the vent arm kinematics were correctly modeled, but noted that there were relevant system sensitivities. Also, the structural stiffness used in the math model varied somewhat from the analytic calculations. Results of the NESC assessment were communicated to the model developers.

  2. Modeling direct interband tunneling. II. Lower-dimensional structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Andrew, E-mail: pandrew@ucla.edu; Chui, Chi On; California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California 90095

    We investigate the applicability of the two-band Hamiltonian and the widely used Kane analytical formula to interband tunneling along unconfined directions in nanostructures. Through comparisons with k·p and tight-binding calculations and quantum transport simulations, we find that the primary correction is the change in effective band gap. For both constant fields and realistic tunnel field-effect transistors, dimensionally consistent band gap scaling of the Kane formula allows analytical and numerical device simulations to approximate non-equilibrium Green's function current characteristics without arbitrary fitting. This allows efficient first-order calibration of semiclassical models for interband tunneling in nanodevices.
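
    The "dimensionally consistent band gap scaling" point can be illustrated with the generic Kane-type tunneling expression G(F) proportional to (F^2 / sqrt(Eg)) exp(-B Eg^{3/2} / F): the formula is kept but the confinement-widened effective gap is substituted for the bulk gap. The prefactors below are set to 1 in arbitrary units and the gap values are invented, so this is only a schematic of the scaling idea, not the paper's calibrated model.

    ```python
    import numpy as np

    # Kane-type band-to-band tunneling rate: G(F) ~ A * F**2 / sqrt(Eg) * exp(-B * Eg**1.5 / F).
    # A and B are set to 1 (arbitrary units); in reality they depend on the reduced effective mass.
    def kane_rate(F, Eg, A=1.0, B=1.0):
        return A * F ** 2 / np.sqrt(Eg) * np.exp(-B * Eg ** 1.5 / F)

    F = 2.0              # driving field, arbitrary units
    Eg_bulk = 0.36       # bulk band gap (eV), InAs-like value chosen for illustration
    Eg_effective = 0.50  # confinement-widened effective gap (eV), assumed

    print("rate with bulk gap     :", kane_rate(F, Eg_bulk))
    print("rate with effective gap:", kane_rate(F, Eg_effective))
    ```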

  3. On the non-closure of particle backscattering coefficient in oligotrophic oceans.

    PubMed

    Lee, ZhongPing; Huot, Yannick

    2014-11-17

    Many studies have consistently found that the particle backscattering coefficient (bbp) in oligotrophic oceans estimated from remote-sensing reflectance (Rrs) using semi-analytical algorithms is higher than that from in situ measurements. This overestimation can be as high as ~300% for some oligotrophic ocean regions. Various sources potentially responsible for this discrepancy are examined. Further, after applying an empirical algorithm to correct the impact from Raman scattering, it is found that bbp from analytical inversion of Rrs is in good agreement with that from in situ measurements, and that a closure is achieved.

  4. Pitch and Harmony in Gyorgy Ligeti's "Hamburg Concerto" and "Syzygy" for String Quartet

    NASA Astrophysics Data System (ADS)

    Corey, Charles

    The analysis component of this dissertation focuses on intricate and complex pitch relationships in Gyorgy Ligeti's last work, the Hamburg Concerto. This piece uses two distinct tuning systems---twelve tone equal temperament and just intonation---throughout its seven movements. Often, these two systems are used simultaneously, creating complex harmonic relationships. This combination allows Ligeti to exploit the unique features of each system and explore their relationships to each other. Ligeti's just intonation in the Hamburg Concerto comes mainly from the five French horns, who are instructed to keep their hands out of the bell to allow the instrument to sound its exact harmonics. The horns themselves, however, are tuned to varying different fundamentals, creating a constantly changing series of just-intoned pitches anchored above an equal-tempered bass. This method of generating just-intoned intervals adds a second layer to the relationship between equal temperament and just intonation. This paper focuses on creating ways to understand this relationship, and describing the ramifications of these tunings as they unfold throughout the piece. Ligeti very carefully crafts this work in a way that creates a balance between the systems. Research done at the Paul Sacher Stiftung has uncovered a significant collection of errors in the published score. Clearing up these discrepancies allows for a much more accurate and more informed analysis. Throughout this dissertation, mistakes are corrected, and several aspects of the score are clarified. The tuning systems are described, and a likely tuning scheme for the horns is posited. (The analytical component of the dissertation delves into the many varying intervals which all fit into one interval class---a feature that is best explored when two distinct tuning systems are juxtaposed.) A language is created herein to better understand these pitch relationships that fit neither into equal temperament nor just intonation. The analysis clearly shows that very simple musical intervals turn out to be cornerstones of this piece, traceable throughout the entire Hamburg Concerto. The composition, Syzygy for string quartet, is written in just intonation. Through four movements, the relationships evoked by the titles (always groups of homonyms) are examined and illuminated.

  5. Search for and investigation of extraterrestrial forms of life

    NASA Technical Reports Server (NTRS)

    Rubin, A. B.

    1975-01-01

    Correct combinations of remote, analytic, and functional methods and measuring devices for detecting extraterrestrial life are elaborated. Considered are techniques and instruments available both on earth and aboard spacecraft and artificial planetary satellites. Emphasis is placed on the abiogenetic synthesis of organic compounds formed in photosynthesis on Mars.

  6. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    PubMed

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections for multi-temporal optical satellite images are necessary, especially in change detection analyses, such as normalized difference vegetation index (NDVI) ratioing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images from the period between 1991 and 2002 of Penang Island, Malaysia, was used to compare NDVI maps, which were generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was utilized to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy through examination of the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a constant value (in terms of error) when predicting the NDVI value from the equation derived by linear regression analysis. The average errors from both proposed atmospheric correction methods were less than 10%.
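
    The NDVI computation and the NDVI-LST regression described above can be sketched in a few lines; the reflectance and temperature values below are hypothetical pixel samples, not Landsat data.

    ```python
    import numpy as np

    # NDVI from red and near-infrared reflectance, then a linear regression of land surface
    # temperature (LST) on NDVI. The pixel values are hypothetical.
    red = np.array([0.08, 0.10, 0.15, 0.22, 0.30, 0.33])
    nir = np.array([0.45, 0.42, 0.35, 0.30, 0.26, 0.24])
    lst = np.array([296.0, 297.5, 300.2, 303.1, 306.0, 307.2])   # kelvin

    ndvi = (nir - red) / (nir + red)
    slope, intercept = np.polyfit(ndvi, lst, 1)
    r = np.corrcoef(ndvi, lst)[0, 1]

    print(f"LST = {slope:.1f} * NDVI + {intercept:.1f}   (r = {r:.2f})")
    ```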

  7. 16 CFR 660.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  8. 12 CFR 1022.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  9. 12 CFR 41.41 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  10. 12 CFR 571.41 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  11. 12 CFR 1022.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  12. 12 CFR 571.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  13. 12 CFR 571.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  14. 12 CFR 41.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  15. 12 CFR 222.41 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... that a furnisher provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship... relationship; and (3) Identifies the appropriate consumer. (b) Direct dispute means a dispute submitted...

  16. 12 CFR 41.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  17. 16 CFR 660.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  18. 16 CFR 660.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  19. 12 CFR 222.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... that a furnisher provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship... relationship; and (3) Identifies the appropriate consumer. (b) Direct dispute means a dispute submitted...

  20. 16 CFR 660.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  1. 12 CFR 571.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  2. 12 CFR 41.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  3. 12 CFR 1022.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate...

  4. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Keller, J.; Wallen, R.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  5. Removing the impact of water abstractions on flow duration curves

    NASA Astrophysics Data System (ADS)

    Masoero, Alessandro; Ganora, Daniele; Galeati, Giorgio; Laio, Francesco; Claps, Pierluigi

    2015-04-01

    Changes in, and interactions between, the human system and the water cycle are receiving increasing attention in the scientific community. Discharge data needed for water-resources studies have commonly been collected close to urban or industrial settlements, i.e. in environments where the interest in gauging was not merely scientific but also socio-economic. When working in non-natural environments we must take human impacts into account, such as water intakes for irrigation or hydropower generation, while assessing the actual water availability and variability in a river. This can become an issue in alpine areas, where hydropower exploitation is intense and water abstraction upstream of a gauging station is common. A gauging station downstream of a water intake can be useful for monitoring the environmental flow release and for recording maximum flood values, which should not be affected by the abstraction. Nevertheless, with this configuration we are unable to properly define the water volumes available in the river, information that is crucial for assessing low flows and investigating drought risk. This situation leads to a substantial difference between observed data (affected by the human impact) and natural data (as they would have been without abstraction). A main issue is how to correct for these impacts and restore the natural streamflow values. The most obvious and reliable solution would be to request abstraction data from the water users, but such data are hard to collect: usually they are not available, because they are not public or are not even recorded by the operators. A rainfall-runoff model of the basin upstream of the gauging station could be developed instead, but this approach requires a large amount of data and many parameters. Working in a regional framework rather than on single case studies, our goal is to provide a consistent estimate of the non-impacted statistics of the river (i.e. mean value, L-moments of variation and skewness). We propose a parsimonious method for correcting the water-abstraction impact, based on a few easily accessible parameters. The model, based on an exponential form of the river flow duration curve (FDC), allows completely analytical solutions, so the method can be applied extensively. This is particularly relevant when taking a general view of water resources (at the regional or basin scale), given the large number of water abstractions that would have to be considered. The correction method relies on only two hard data that can be easily obtained: i) the design maximum discharge of the water intake and ii) the number of days of operation per year. Following the same correction hypothesis, the statistics of the abstracted discharge are also reconstructed analytically and combined with the statistics of the receiving reach, which can differ from the original one. This information is useful when assessing water availability in a river network interconnected by diversion channels. The quality of the proposed correction is demonstrated by application to a case study in north-west Italy, on a second-order tributary of the Po River. Flow values recorded at the gauging station were significantly affected by the presence of a 5 MW hydropower plant. Knowing the amount of water abstracted daily by the plant, we are able to reconstruct the natural discharge empirically and compare its main statistics with those computed analytically using the proposed correction model. The difference between the empirically and analytically reconstructed mean discharge and L-moment of variation was found to be extremely low, and the importance of the days-of-operation information was highlighted. The correction proposed in this work gives a correct indication of the characteristics of the non-impacted natural streamflows, especially in alpine regions where water abstraction is a major issue.
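
    The idea behind the correction can be illustrated with a minimal numerical sketch (a stand-in for the idea of adding the abstraction back, not the authors' analytical exponential-FDC solution; the function names and the assumed operating schedule are hypothetical):

```python
import numpy as np

def reconstruct_natural_flows(q_obs, q_design, days_active):
    """Add an assumed abstraction back onto observed daily flows.

    Assumptions of this sketch (not the paper's analytical FDC solution):
    the intake runs at its design capacity q_design on the `days_active`
    days with the highest observed discharge, and is idle otherwise.
    """
    q_obs = np.asarray(q_obs, dtype=float)
    q_nat = q_obs.copy()
    active = np.argsort(q_obs)[-days_active:]   # assumed days of operation
    q_nat[active] += q_design
    return q_nat

def mean_and_cv(q):
    """Mean and ordinary coefficient of variation (a simple stand-in for the L-moment ratios)."""
    return float(np.mean(q)), float(np.std(q, ddof=1) / np.mean(q))

# Illustrative synthetic year of data
rng = np.random.default_rng(42)
q_natural = rng.lognormal(mean=1.0, sigma=0.7, size=365)
q_observed = np.clip(q_natural - 1.5, 0.1, None)      # impact of a 1.5 m^3/s abstraction
q_restored = reconstruct_natural_flows(q_observed, q_design=1.5, days_active=300)
for label, q in [("observed", q_observed), ("restored", q_restored), ("natural", q_natural)]:
    print(label, mean_and_cv(q))
```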

  6. An "Extended Care" Community Corrections Model for Seriously Mentally Ill Offenders

    ERIC Educational Resources Information Center

    Sabbatine, Raymond

    2007-01-01

    Our system fills every bed before it is built. Supply and demand have never had a better relationship. We need to refocus our public policy upon a correctional system that heals itself by helping those it serves to heal themselves through service to others even less fortunate. Let us term this new paradigm a "Correctional Cooperative," where…

  7. Blinded evaluation of the effects of hyaluronic acid filler injections on first impressions.

    PubMed

    Dayan, Steven H; Arkins, John P; Gal, Thomas J

    2010-11-01

    Facial appearance has profound influence on the first impression that is projected to others. To determine the effects that complete correction of the nasolabial folds (NLFs) with hyaluronic acid (HA) filler has on the first impression one makes. Twenty-two subjects received injections of HA filler into the NLFs. Photographs of the face in a relaxed pose were taken at baseline, optimal correction visit, and 4 weeks after optimal correction. Three hundred four blinded evaluators completed a survey rating first impression on various measures of success for each photo. In total, 5,776 first impressions were recorded, totaling 46,208 individual assessments of first impression. Our findings indicate a significant improvement in mean first impression in the categories of dating success, attractiveness, financial success, relationship success, athletic success, and overall first impression at the optimal correction visit. At 4 weeks after the optimal correction visit, significance was observed in all categories measured: social skills, academic performance, dating success, occupational success, attractiveness, financial success, relationship success, athletic success, and overall first impression. Full correction of the NLFs with HA filler significantly and positively influences the first impression an individual projects. © 2010 by the American Society for Dermatologic Surgery, Inc.

  8. Argumentation as a Lens to Examine Student Discourse in Peer-Led Guided Inquiry for College General Chemistry

    NASA Astrophysics Data System (ADS)

    Kulatunga, Ushiri Kumarihamy

    This dissertation work entails three related studies on the investigation of Peer-Led Guided Inquiry student discourse in a General Chemistry I course through argumentation. The first study, Argumentation and participation patterns in general chemistry peer-led sessions, is focused on examining arguments and participation patterns in small student groups without peer leader intervention. The findings of this study revealed that students were mostly engaged in co-constructed arguments, that a discrepancy in the participation of the group members existed, and that students were able to correct most of the incorrect claims on their own via argumentation. The second study, Exploration of peer leader verbal behaviors as they intervene with small groups in college general chemistry, examines the interactive discourse of the peer leaders and the students during peer leader intervention. The relationship between the verbal behaviors of the peer leaders and the student argumentation is explored in this study. The findings of this study demonstrated that peer leaders used an array of verbal behaviors to guide students to construct chemistry concepts, and that a relationship existed between student argument components and peer leader verbal behaviors. The third study, Use of Toulmin's Argumentation Scheme for student discourse to gain insight about guided inquiry activities in college chemistry, is focused on investigating the relationship between student arguments without peer leader intervention and the structure of published guided inquiry ChemActivities. The relationship between argumentation and the structure of the activities is explored with respect to prompts, questions, and the segmented Learning Cycle structure of the ChemActivities. Findings of this study revealed that prompts were effective in eliciting arguments, that convergent questions produced more arguments than directed questions, and that the structure of the Learning Cycle successfully scaffolded arguments. A semester of video data from two different small student groups facilitated by two different peer leaders was used for these three related studies. An analytic framework based on Toulmin's argumentation scheme was used for the argumentation analysis of the studies. This dissertation work, focused on the three central elements of the peer-led classroom (students, peer leader, and the ChemActivities), illuminates effective discourse important for group learning. Overall, this dissertation work contributes to science education by providing both an analytic framework useful for investigating group processes and crucial strategies for conducting effective cooperative learning and promoting student argumentation. The findings of this dissertation work have valuable implications for the professional development of teachers, specifically for group interventions in the implementation of cooperative learning reforms.

  9. Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.

    PubMed

    Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge

    2017-02-22

    Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
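
    As an illustration of the final normalization step, here is a minimal sketch of probabilistic quotient normalization (PQN) as commonly described in the metabolomics literature; it is not the authors' implementation, and the function name and example data are hypothetical:

```python
import numpy as np

def pqn_normalize(X, reference=None):
    """Probabilistic quotient normalization of a samples-by-features matrix.

    X: 2-D array of feature intensities (rows = samples, columns = features);
       zero-intensity features should be filtered out beforehand.
    reference: reference spectrum; defaults to the median across samples
               (in practice often the median of QC samples).
    """
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)
    # Quotients of every feature against the reference spectrum
    quotients = X / reference
    # Each sample is divided by its median quotient, i.e. its estimated dilution factor
    dilution = np.median(quotients, axis=1, keepdims=True)
    return X / dilution, dilution.ravel()

# Example: three samples of the same profile measured at different dilutions
profile = np.array([100.0, 50.0, 20.0, 5.0])
X = np.vstack([profile * f for f in (1.0, 0.5, 2.0)])
X_norm, factors = pqn_normalize(X)
print(factors)   # estimated relative dilution factors
print(X_norm)    # profiles brought back to a common concentration scale
```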

  10. Standard addition with internal standardisation as an alternative to using stable isotope labelled internal standards to correct for matrix effects-Comparison and validation using liquid chromatography-​tandem mass spectrometric assay of vitamin D.

    PubMed

    Hewavitharana, Amitha K; Abu Kassim, Nur Sofiah; Shaw, Paul Nicholas

    2018-06-08

    With mass spectrometric detection in liquid chromatography, co-eluting impurities affect the analyte response through ion suppression/enhancement. The internal standard calibration method, using a co-eluting stable isotope labelled analogue of each analyte as the internal standard, is the most appropriate technique available to correct for these matrix effects. However, this technique is not without drawbacks: it is costly because a separate internal standard is required for each analyte, and the labelled compounds are expensive or require synthesis. Traditionally, the standard addition method has been a well-established way to overcome matrix effects in atomic spectroscopy. This paper proposes the same for mass spectrometric detection and demonstrates, for a vitamin D assay, that the results are comparable to those obtained with the internal standard method using labelled analogues. As the conventional standard addition procedure does not address procedural errors, we propose the inclusion of an additional internal standard (not co-eluting). Recoveries determined on human serum samples show that the proposed method of standard addition yields more accurate results than internal standardisation using stable isotope labelled analogues. The precision of the proposed method of standard addition is superior to that of the conventional standard addition method. Copyright © 2018 Elsevier B.V. All rights reserved.
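
    The core calculation behind standard addition is a linear extrapolation to the x-intercept; a minimal sketch follows (the optional internal-standard ratioing mirrors the idea of the additional, non-co-eluting internal standard proposed in the abstract; the function name and example numbers are hypothetical):

```python
import numpy as np

def standard_addition_concentration(added_conc, analyte_response, is_response=None):
    """Estimate an endogenous analyte concentration by standard addition.

    added_conc:       spiked analyte concentrations (same units as the answer),
                      including 0 for the unspiked sample.
    analyte_response: detector response of the analyte at each spike level.
    is_response:      optional response of an internal standard added at a
                      constant level; if given, responses are ratioed to it
                      to correct for procedural/injection variability.
    """
    x = np.asarray(added_conc, dtype=float)
    y = np.asarray(analyte_response, dtype=float)
    if is_response is not None:
        y = y / np.asarray(is_response, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    # The regression line crosses y = 0 at x = -intercept/slope;
    # the endogenous concentration is the magnitude of that x-intercept.
    return intercept / slope

# Example: a sample with an unknown endogenous level of about 12 units
spikes = [0.0, 5.0, 10.0, 20.0]
resp   = [ 61,  85,  111,  161]   # roughly linear in (endogenous + spike)
print(round(standard_addition_concentration(spikes, resp), 1))
```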

  11. Analytical Performance Characteristics of the Cepheid GeneXpert Ebola Assay for the Detection of Ebola Virus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinsky, Benjamin A.; Sahoo, Malaya K.; Sandlund, Johanna

    The recently developed Xpert® Ebola Assay is a novel nucleic acid amplification test for simplified detection of Ebola virus (EBOV) in whole blood and buccal swab samples. The assay targets sequences in two EBOV genes, lowering the risk for new variants to escape detection in the test. The objective of this report is to present analytical characteristics of the Xpert® Ebola Assay on whole blood samples. Our study evaluated the assay’s analytical sensitivity, analytical specificity, inclusivity and exclusivity performance in whole blood specimens. EBOV RNA, inactivated EBOV, and infectious EBOV were used as targets. The dynamic range of the assay, the inactivation of virus, and specimen stability were also evaluated. The lower limit of detection (LoD) for the assay using inactivated virus was estimated to be 73 copies/mL (95% CI: 51–97 copies/mL). The LoD for infectious virus was estimated to be 1 plaque-forming unit/mL, and for RNA to be 232 copies/mL (95% CI 163–302 copies/mL). The assay correctly identified five different Ebola viruses, Yambuku-Mayinga, Makona-C07, Yambuku-Ecran, Gabon-Ilembe, and Kikwit-956210, and correctly excluded all non-EBOV isolates tested. The conditions used by Xpert® Ebola for inactivation of infectious virus reduced EBOV titer by ≥6 logs. In conclusion, we found the Xpert® Ebola Assay to have high analytical sensitivity and specificity for the detection of EBOV in whole blood. It offers ease of use, fast turnaround time, and remote monitoring. The test has an efficient viral inactivation protocol, fulfills inclusivity and exclusivity criteria, and has specimen stability characteristics consistent with the need for decentralized testing. The simplicity of the assay should enable testing in a wide variety of laboratory settings, including remote laboratories that are not capable of performing highly complex nucleic acid amplification tests, and during outbreaks where time to detection is critical.
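
    LoD estimates with confidence intervals of this kind are commonly obtained by fitting the hit rate of replicate tests at serial dilutions; below is a minimal probit-style sketch of that general approach (an illustration only, not necessarily the statistical method used in the study; the replicate data are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def hit_rate_model(log10_conc, mu, sigma):
    """Probability of detection as a function of log10 concentration (probit curve)."""
    return norm.cdf((log10_conc - mu) / sigma)

# Hypothetical replicate data: concentration in copies/mL, number detected out of n replicates
conc       = np.array([25, 50, 100, 200, 400], dtype=float)
detected   = np.array([ 6, 12,  19,  20,  20])
replicates = 20

hit_rate = detected / replicates
params, _ = curve_fit(hit_rate_model, np.log10(conc), hit_rate, p0=[2.0, 0.3])
mu, sigma = params

# 95% detection level: the concentration at which the fitted curve reaches 0.95
lod95 = 10 ** (mu + norm.ppf(0.95) * sigma)
print(f"estimated 95% LoD ≈ {lod95:.0f} copies/mL")
```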

  12. Analytical Performance Characteristics of the Cepheid GeneXpert Ebola Assay for the Detection of Ebola Virus

    DOE PAGES

    Pinsky, Benjamin A.; Sahoo, Malaya K.; Sandlund, Johanna; ...

    2015-11-12

    The recently developed Xpert® Ebola Assay is a novel nucleic acid amplification test for simplified detection of Ebola virus (EBOV) in whole blood and buccal swab samples. The assay targets sequences in two EBOV genes, lowering the risk for new variants to escape detection in the test. The objective of this report is to present analytical characteristics of the Xpert® Ebola Assay on whole blood samples. Our study evaluated the assay’s analytical sensitivity, analytical specificity, inclusivity and exclusivity performance in whole blood specimens. EBOV RNA, inactivated EBOV, and infectious EBOV were used as targets. The dynamic range of the assay, the inactivation of virus, and specimen stability were also evaluated. The lower limit of detection (LoD) for the assay using inactivated virus was estimated to be 73 copies/mL (95% CI: 51–97 copies/mL). The LoD for infectious virus was estimated to be 1 plaque-forming unit/mL, and for RNA to be 232 copies/mL (95% CI 163–302 copies/mL). The assay correctly identified five different Ebola viruses, Yambuku-Mayinga, Makona-C07, Yambuku-Ecran, Gabon-Ilembe, and Kikwit-956210, and correctly excluded all non-EBOV isolates tested. The conditions used by Xpert® Ebola for inactivation of infectious virus reduced EBOV titer by ≥6 logs. In conclusion, we found the Xpert® Ebola Assay to have high analytical sensitivity and specificity for the detection of EBOV in whole blood. It offers ease of use, fast turnaround time, and remote monitoring. The test has an efficient viral inactivation protocol, fulfills inclusivity and exclusivity criteria, and has specimen stability characteristics consistent with the need for decentralized testing. The simplicity of the assay should enable testing in a wide variety of laboratory settings, including remote laboratories that are not capable of performing highly complex nucleic acid amplification tests, and during outbreaks where time to detection is critical.

  13. Significance of Polarization Charges and Isomagnetic Surface in Magnetohydrodynamics

    PubMed Central

    Liang, Zhu-Xing; Liang, Yi

    2015-01-01

    According to the frozen-in field lines concept, a highly conducting fluid can move freely along, but not across, magnetic field lines. We discuss this topic and find that studies of the frozen-in field lines concept have omitted the effects of inductive and capacitive reactance. When these are included, the relationships among the motional electromotive field, the induced electric field, the eddy electric current, and the magnetic field become clearer. We emphasize the importance of isomagnetic surfaces and polarization charges, and show analytically that whether a conducting fluid can freely traverse magnetic field lines depends solely on the magnetic gradient along the path of the fluid. If a fluid does not change its density distribution and shape (i.e. can be regarded as a quasi-rigid body) and moves along an isomagnetic surface, it can freely traverse magnetic field lines without any magnetic drag, no matter how strong the magnetic field is. Besides the theoretical analysis, we also present experimental results to support our conclusions. The main purpose of this work is to correct a fallacy among some astrophysicists. PMID:26322894
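
    For reference, the ideal-MHD relations from which the frozen-in field lines concept is usually derived are (standard textbook form, not specific to this paper):

```latex
% Perfect-conductivity (ideal MHD) Ohm's law: the motional electromotive field
% cancels the electric field in the fluid frame,
\mathbf{E} + \mathbf{v} \times \mathbf{B} = \mathbf{0},
% which, combined with Faraday's law, gives the induction equation from which
% the frozen-in field lines theorem follows:
\frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left( \mathbf{v} \times \mathbf{B} \right).
```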

  14. Cross-correlation between EMG and center of gravity during quiet stance: theory and simulations.

    PubMed

    Kohn, André Fabio

    2005-11-01

    Several signal processing tools have been employed in the experimental study of the postural control system in humans. Among them, the cross-correlation function has been used to analyze the time relationship between signals such as the electromyogram and the horizontal projection of the center of gravity. The common finding is that the electromyogram precedes the biomechanical signal, a result that has been interpreted in different ways, for example, as evidence of feedforward control or of the preponderance of velocity feedback. It is shown here, analytically and by simulation, that the cross-correlation function depends in a complicated way on system parameters and on noise spectra. Results similar to those found experimentally, e.g., the electromyogram preceding the biomechanical signal, may be obtained in a postural control model without any feedforward control and without any velocity feedback. Therefore, correct interpretations of experimentally obtained cross-correlation functions may require additional information about the system. The results extend to other biomedical applications where two signals from a closed-loop system are cross-correlated.
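
    For reference, the quantity usually reported in such studies is the lag at which the normalized cross-correlation peaks; a minimal sketch of estimating it from two sampled signals (illustrative only, not the postural control model simulated in the paper):

```python
import numpy as np

def peak_lag(x, y, fs):
    """Return the lag (in seconds) and value of the peak of the normalized cross-correlation.

    A positive lag means that x leads y (e.g. an EMG-like signal x preceding a
    center-of-gravity-like signal y); fs is the sampling rate in Hz.
    """
    x = (x - np.mean(x)) / np.std(x)
    y = (y - np.mean(y)) / np.std(y)
    n = len(x)
    cc = np.correlate(y, x, mode="full") / n      # cc[k] ~ sum_t y(t + k) * x(t)
    lags = np.arange(-(n - 1), n)
    best = np.argmax(cc)
    return lags[best] / fs, cc[best]

# Example: y is a delayed, noisy copy of x, so x leads y by 0.2 s
fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
y = np.roll(x, int(0.2 * fs))
lag, peak = peak_lag(x, y, fs)
print(f"peak correlation {peak:.2f} at lag {lag:.2f} s")
```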

  15. The many voices of Darwin's descendants: reply to Schmitt (2014).

    PubMed

    Eastwick, Paul W; Luchies, Laura B; Finkel, Eli J; Hunt, Lucy L

    2014-05-01

    This article elaborates on evolutionary perspectives relevant to the meta-analytic portion of our recent review (Eastwick, Luchies, Finkel, & Hunt, 2014). We suggested that if men and women evolved sex-differentiated ideals (i.e., mate preferences), then they should exhibit sex-differentiated desires (e.g., romantic attraction) and/or relational outcomes (e.g., relationship satisfaction) with respect to live opposite-sex targets. Our meta-analysis revealed no support for these sex-differentiated desires and relational outcomes in either established relationship or mate selection contexts. With respect to established relationships, Schmitt (2014) has objected to the idea that relationship quality (one of our primarily romantic evaluation dependent measures) has functional relevance. In doing so, he neglects myriad evolutionary perspectives on the adaptive importance of the pair-bond and the wealth of data suggesting that relationship quality predicts the dissolution of pair-bonds. With respect to mate selection, Schmitt (2014) has continued to suggest that sex-differentiated patterns should emerge in these contexts despite the fact that our meta-analysis included this literature and found no sex differences. Schmitt (2014) also generated several novel sex-differentiated predictions with respect to attractiveness and earning prospects, but neither the existing literature nor reanalyses of our meta-analytic data reveal any support for his "proper" function-related hypotheses. In short, there are diverse evolutionary perspectives relevant to mating, including our own synthesis; Schmitt's (2014) conceptual analysis is not the one-and-only evolutionary psychological view, and his alternative explanations for our meta-analytic data remain speculative.

  16. Linking job demands and resources to employee engagement and burnout: a theoretical extension and meta-analytic test.

    PubMed

    Crawford, Eean R; Lepine, Jeffery A; Rich, Bruce Louis

    2010-09-01

    We refine and extend the job demands-resources model with theory regarding appraisal of stressors to account for inconsistencies in relationships between demands and engagement, and we test the revised theory using meta-analytic structural modeling. Results indicate support for the refined and updated theory. First, demands and burnout were positively associated, whereas resources and burnout were negatively associated. Second, whereas relationships among resources and engagement were consistently positive, relationships among demands and engagement were highly dependent on the nature of the demand. Demands that employees tend to appraise as hindrances were negatively associated with engagement, and demands that employees tend to appraise as challenges were positively associated with engagement. Implications for future research are discussed. Copyright 2010 APA, all rights reserved

  17. Thomas-Fermi approximation for a condensate with higher-order interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thoegersen, M.; Jensen, A. S.; Zinner, N. T.

    We consider the ground state of a harmonically trapped Bose-Einstein condensate within the Gross-Pitaevskii theory, including effective-range corrections for a two-body zero-range potential. The resulting nonlinear Schrödinger equation is solved analytically in the Thomas-Fermi approximation, neglecting the kinetic-energy term. We present results for the chemical potential and the condensate profiles, discuss boundary conditions, and compare to the usual Thomas-Fermi approach. We discuss several ways to increase the influence of effective-range corrections in experiments with magnetically tunable interactions. The level of tuning required could be within experimental reach in the near future.
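
    For orientation, the standard Thomas-Fermi result that such treatments generalize, obtained by neglecting the kinetic-energy term in the ordinary Gross-Pitaevskii equation (contact interaction only, no effective-range terms; standard textbook expressions):

```latex
% Standard Thomas-Fermi limit of the Gross-Pitaevskii equation, with contact coupling
% g = 4\pi\hbar^{2}a/m: neglecting the kinetic energy in
%   \mu\,\psi = \left[ V(\mathbf{r}) + g\,|\psi|^{2} \right]\psi
% gives the inverted-parabola profile and chemical potential in an isotropic harmonic trap:
n(\mathbf{r}) =
\begin{cases}
\dfrac{\mu - V(\mathbf{r})}{g}, & V(\mathbf{r}) < \mu,\\[4pt]
0, & \text{otherwise},
\end{cases}
\qquad
\mu = \frac{\hbar\omega}{2}\left(\frac{15\,N a}{a_{\mathrm{ho}}}\right)^{2/5},
\qquad
a_{\mathrm{ho}} = \sqrt{\frac{\hbar}{m\omega}} .
```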

  18. Effects of time ordering in quantum nonlinear optics

    NASA Astrophysics Data System (ADS)

    Quesada, Nicolás; Sipe, J. E.

    2014-12-01

    We study time-ordering corrections to the description of spontaneous parametric down-conversion (SPDC), spontaneous four-wave mixing (SFWM), and frequency conversion using the Magnus expansion. Analytic, unitary approximations to the evolution operator are obtained. They are Gaussian preserving and allow us to understand, order by order, the effects of time ordering. We show that the corrections due to time ordering vanish exactly if the phase-matching function is sufficiently broad. The calculation of the effects of time ordering on the joint spectral amplitude of the photons generated in SPDC and SFWM is reduced to quadrature.
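
    For context, the leading terms of the Magnus expansion are given below in their standard form (ħ = 1); the second term is the first time-ordering correction, and it vanishes whenever the interaction Hamiltonians at different times commute:

```latex
U(t) = \exp\!\big[\Omega(t)\big], \qquad \Omega(t) = \sum_{n \ge 1} \Omega_n(t),
\qquad
\Omega_1(t) = -i \int_0^t \! dt_1 \, H(t_1),
\qquad
\Omega_2(t) = -\tfrac{1}{2} \int_0^t \! dt_1 \int_0^{t_1} \! dt_2 \, \big[ H(t_1),\, H(t_2) \big].
% \Omega_2 and all higher terms vanish when [H(t_1), H(t_2)] = 0 for all times,
% which is the regime in which time-ordering corrections disappear.
```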

  19. [Management of pre-analytical nonconformities].

    PubMed

    Berkane, Z; Dhondt, J L; Drouillard, I; Flourié, F; Giannoli, J M; Houlbert, C; Surgat, P; Szymanowicz, A

    2010-12-01

    The main nonconformities enumerated to facilitate consensual codification. In each case, an action is defined: refusal to realize the examination with request of a new sample, request of information or correction, results cancellation, nurse or physician information. A traceability of the curative, corrective and preventive actions is needed. Then, methodology and indicators are proposed to assess nonconformity and to follow the quality improvements. The laboratory information system can be used instead of dedicated software. Tools for the follow-up of nonconformities scores are proposed. Finally, we propose an organization and some tools allowing the management and control of the nonconformities occurring during the pre-examination phase.

  20. Large-Nc masses of light mesons from QCD sum rules for nonlinear radial Regge trajectories

    NASA Astrophysics Data System (ADS)

    Afonin, S. S.; Solomko, T. D.

    2018-04-01

    The large-Nc masses of light vector, axial, scalar and pseudoscalar mesons are calculated from QCD spectral sum rules for a particular ansatz interpolating the radial Regge trajectories. The ansatz includes a linear part plus exponentially decreasing corrections to the meson masses and residues. The form of the corrections was proposed some time ago for consistency with the analytical structure of the Operator Product Expansion of the two-point correlation functions. We revisit that original analysis and find a second solution of the proposed sum rules. This solution describes the spectrum of vector and axial mesons better.
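
    A schematic version of such an interpolating ansatz, combining a linear radial trajectory with exponentially decreasing corrections, is sketched below (the symbols are illustrative and not necessarily the authors' notation or exact parameterization):

```latex
% Schematic nonlinear radial Regge trajectory for the n-th radial excitation:
% a linear part plus corrections that die off exponentially in n (B, D > 0),
% applied to both the masses M_n and the residues F_n.
M_n^2 = M_0^2 + a\,n + A\,e^{-B n},
\qquad
F_n^2 = F^2 \left( 1 + C\,e^{-D n} \right).
```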

  1. Higher-order binding corrections to the Lamb shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pachucki, K.

    1993-08-15

    In this work a new analytical method for calculating the one-loop self-energy correction to the Lamb shift is presented in detail. The technique relies on division into the low and the high energy parts. The low energy part is calculated using the multipole expansion and the high energy part is calculated by expanding the Dirac-Coulomb propagator in powers of the Coulomb field. The obtained results are in agreement with those previously known, but are more accurate. A new theoretical value of the Lamb shift is also given. 47 refs., 2 figs., 1 tab.
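
    Schematically, the low/high-energy split used in such calculations introduces an intermediate photon-energy scale and relies on the cancellation of its contribution between the two parts (a generic sketch of the structure, not a reproduction of the paper's formulas):

```latex
% Generic structure of the split: \epsilon is an intermediate photon-energy cutoff
% with m\alpha^{2} \ll \epsilon \ll m; the dependence on \epsilon cancels in the sum.
\Delta E_{\mathrm{SE}} = E_{L}(\epsilon) + E_{H}(\epsilon),
\qquad
\frac{\partial}{\partial \epsilon}\Big[ E_{L}(\epsilon) + E_{H}(\epsilon) \Big] = 0 .
```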

  2. Two-loop virtual top-quark effect on Higgs-boson decay to bottom quarks.

    PubMed

    Butenschön, Mathias; Fugel, Frank; Kniehl, Bernd A

    2007-02-16

    In most of the mass range encompassed by the limits from the direct search and the electroweak precision tests, the Higgs boson of the standard model preferentially decays to bottom quarks. We present, in analytic form, the dominant two-loop electroweak correction, of O(G_F^2 m_t^4), to the partial width of this decay. It amplifies the familiar enhancement due to the O(G_F m_t^2) one-loop correction by about +16% and thus more than compensates the screening by about -8% through strong-interaction effects of order O(α_s G_F m_t^2).

  3. 12 CFR 717.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  4. 12 CFR 717.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  5. 12 CFR 717.41 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  6. 12 CFR 222.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate consumer. (b) Direct dispute means a dispute submitted...

  7. 12 CFR 222.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3) Identifies the appropriate consumer. (b) Direct dispute means a dispute submitted...

  8. 12 CFR 334.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... furnisher provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  9. 12 CFR 334.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... furnisher provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  10. 12 CFR 334.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... furnisher provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  11. 12 CFR 334.41 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... furnisher provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  12. 12 CFR 717.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... provides to a consumer reporting agency about an account or other relationship with the consumer correctly: (1) Reflects the terms of and liability for the account or other relationship; (2) Reflects the consumer's performance and other conduct with respect to the account or other relationship; and (3...

  13. Analytical derivatives of the individual state energies in ensemble density functional theory method. I. General formalism

    DOE PAGES

    Filatov, Michael; Liu, Fang; Martínez, Todd J.

    2017-07-21

    The state-averaged (SA) spin restricted ensemble referenced Kohn-Sham (REKS) method and its state interaction (SI) extension, SI-SA-REKS, enable one to correctly describe the shape of the ground and excited potential energy surfaces of molecules undergoing bond-breaking/bond-formation reactions, including features such as conical intersections that are crucial for theoretical modeling of non-adiabatic reactions. Until recently, application of the SA-REKS and SI-SA-REKS methods to modeling the dynamics of such reactions was obstructed by the lack of analytical energy derivatives. Here, the analytical derivatives of the individual SA-REKS and SI-SA-REKS energies are derived. The final analytic gradient expressions are formulated entirely in terms of traces of matrix products and are presented in a form convenient for implementation in traditional quantum chemical codes employing basis-set expansions of the molecular orbitals. The implementation and benchmarking of the derived formalism will be described in a subsequent article of this series.

  14. Generalized analytic solutions and response characteristics of magnetotelluric fields on anisotropic infinite faults

    NASA Astrophysics Data System (ADS)

    Bing, Xue; Yicai, Ji

    2018-06-01

    In order to directly understand and accurately analyze magnetotelluric (MT) data measured over anisotropic infinite faults, the two-dimensional partial differential equations of the MT fields are used to establish a model of anisotropic infinite faults via the Fourier transform method. A multi-fault model is then developed to extend the one-fault model. The transverse electric mode and transverse magnetic mode analytic solutions are derived for two-infinite-fault models, and the infinite integral terms of the quasi-analytic solutions are discussed. The dual-fault model is also computed using the finite element method to verify the correctness of the solutions. The MT responses of isotropic and anisotropic media are calculated to analyze how the response functions depend on different anisotropic conductivity structures, and the influence of the thickness and conductivity of the media on the MT responses is discussed. The underlying analytic principles are also given. The results are significant for understanding MT responses and for interpreting data from complex anisotropic infinite faults.

  15. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to validation of an analytical chemistry process, enabling the detection not only of technical risks but also of risks related to human failure. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies and maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeited tablets are re-interpreted using this probabilistic modification of FMEA. With this modification, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Risk analysis by FMEA as an element of analytical validation.

    PubMed

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs for authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D) and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPNs) = O × D × S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
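
    A minimal sketch of the RPN bookkeeping described above, together with the probabilistic variant proposed in the preceding record (all failure modes, scores and frequencies below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    occurrence: int   # O, categorical score 1-10
    detection: int    # D, categorical score 1-10 (10 = least likely to be detected)
    severity: int     # S, categorical score 1-10

    def rpn(self) -> int:
        """Classical FMEA Risk Priority Number."""
        return self.occurrence * self.detection * self.severity

def undetected_failure_frequency(p_occurrence: float, p_not_detected: float) -> float:
    """Probabilistic variant: estimated relative frequency of a failure mode
    occurring AND slipping through detection (severity stays categorical)."""
    return p_occurrence * p_not_detected

# Illustrative failure modes for an NIR screening procedure (hypothetical scores)
modes = [
    FailureMode("wrong reference spectrum selected", occurrence=3, detection=7, severity=8),
    FailureMode("sample mislabelled by operator",    occurrence=4, detection=6, severity=9),
    FailureMode("instrument not recalibrated",       occurrence=2, detection=3, severity=5),
]
for m in sorted(modes, key=lambda m: m.rpn(), reverse=True):
    print(f"{m.name:40s} RPN = {m.rpn()}")

# Probabilistic scoring of the highest-ranked mode (assumed frequencies)
print(undetected_failure_frequency(p_occurrence=0.02, p_not_detected=0.30))
```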

  17. Arbitrary-order corrections for finite-time drift and diffusion coefficients

    NASA Astrophysics Data System (ADS)

    Anteneodo, C.; Riera, R.

    2009-09-01

    We address a standard class of diffusion processes with linear drift and quadratic diffusion coefficients. These contributions to the dynamic equations can be drawn directly from data time series. However, real data are constrained to finite sampling rates, and it is therefore crucial to establish a suitable mathematical description of the required finite-time corrections. Based on Itô-Taylor expansions, we present the exact corrections to the finite-time drift and diffusion coefficients. These results allow one to reconstruct the real, hidden coefficients from the empirical estimates. We also derive higher-order finite-time expressions for the third and fourth conditional moments, which furnish extra theoretical checks for this class of diffusion models. The analytical predictions are compared with the numerical outcomes of representative artificial time series.
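
    The empirical estimates that such corrections act on are the finite-time conditional moments per unit time; a minimal sketch of extracting them from a sampled series by standard Kramers-Moyal-style binning is shown below (illustrative only, not the authors' correction formulas):

```python
import numpy as np

def finite_time_coefficients(x, dt, n_bins=20):
    """Finite-time drift and diffusion estimates from a sampled time series.

    D1(x) ~ <X(t+dt) - X(t) | X(t)=x> / dt
    D2(x) ~ <(X(t+dt) - X(t))^2 | X(t)=x> / (2 dt)
    These are the raw finite-dt estimates; for coarse sampling they differ
    from the true coefficients and require finite-time corrections.
    """
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.digitize(x[:-1], edges) - 1
    centers, d1, d2 = [], [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.sum() < 10:
            continue
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        d1.append(dx[sel].mean() / dt)
        d2.append((dx[sel] ** 2).mean() / (2 * dt))
    return np.array(centers), np.array(d1), np.array(d2)

# Ornstein-Uhlenbeck test series: true drift -x, true diffusion coefficient 0.5
rng = np.random.default_rng(3)
dt, n = 0.01, 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(2 * 0.5 * dt) * rng.normal()
centers, d1, d2 = finite_time_coefficients(x, dt)
print(np.round(d1[:5], 2), np.round(d2[:5], 2))
```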

  18. Exploring corrections to the Optomechanical Hamiltonian.

    PubMed

    Sala, Kamila; Tufarelli, Tommaso

    2018-06-14

    We compare two approaches for deriving corrections to the "linear model" of cavity optomechanics, in order to describe effects that are beyond first order in the radiation pressure coupling. In the regime where the mechanical frequency is much lower than the cavity one, we compare: (I) a widely used phenomenological Hamiltonian conserving the photon number; (II) a two-mode truncation of C. K. Law's microscopic model, which we take as the "true" system Hamiltonian. While these approaches agree at first order, the latter model does not conserve the photon number, resulting in challenging computations. We find that approach (I) allows for several analytical predictions, and significantly outperforms the linear model in our numerical examples. Yet, we also find that the phenomenological Hamiltonian cannot fully capture all high-order corrections arising from the C. K. Law model.
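
    For reference, the first-order "linear model" that both approaches extend is written below in a common sign convention (ħ = 1; â and b̂ are the cavity and mechanical modes, g₀ the single-photon radiation-pressure coupling):

```latex
% Standard single-mode optomechanical Hamiltonian to first order in g_0;
% the two approaches compared in the paper differ in the higher-order terms added to this.
H_{\mathrm{lin}} = \omega_c\, \hat{a}^{\dagger} \hat{a}
                 + \omega_m\, \hat{b}^{\dagger} \hat{b}
                 - g_0\, \hat{a}^{\dagger} \hat{a} \left( \hat{b} + \hat{b}^{\dagger} \right).
```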

  19. THE CALCULATION OF BURNABLE POISON CORRECTION FACTORS FOR PWR FRESH FUEL ACTIVE COLLAR MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.

    2012-06-19

    Verification of commercial low enriched uranium light water reactor fuel takes place at the fuel fabrication facility as part of the overall international nuclear safeguards solution to the civilian use of nuclear technology. The fissile mass per unit length is determined nondestructively by active neutron coincidence counting using a neutron collar. A collar comprises four slabs of high density polyethylene that surround the assembly. Three of the slabs contain ³He filled proportional counters to detect time correlated fission neutrons induced by an AmLi source placed in the fourth slab. Historically, the response of a particular collar design to a particular fuel assembly type has been established by careful cross-calibration to experimental absolute calibrations. Traceability exists to sources and materials held at Los Alamos National Laboratory for over 35 years. This simple yet powerful approach has ensured consistency of application. Since the 1980's there has been a steady improvement in fuel performance. The trend has been to higher burn up. This requires the use of both higher initial enrichment and greater concentrations of burnable poisons. The original analytical relationships to correct for varying fuel composition are consequently being challenged because the experimental basis for them made use of fuels of lower enrichment and lower poison content than is in use today and is envisioned for use in the near term. Thus a reassessment of the correction factors is needed. Experimental reassessment is expensive and time consuming given the great variation between fuel assemblies in circulation. Fortunately current modeling methods enable relative response functions to be calculated with high accuracy. Hence modeling provides a more convenient and cost effective means to derive correction factors which are fit for purpose with confidence. In this work we use the Monte Carlo code MCNPX with neutron coincidence tallies to calculate the influence of Gd₂O₃ burnable poison on the measurement of fresh pressurized water reactor fuel. To empirically determine the response function over the range of historical and future use we have considered enrichments up to 5 wt% ²³⁵U/U(total) and Gd weight fractions of up to 10% Gd/UO₂. Parameterized correction factors are presented.

  20. CMB weak-lensing beyond the Born approximation: a numerical approach

    NASA Astrophysics Data System (ADS)

    Fabbian, Giulio; Calabrese, Matteo; Carbone, Carmelita

    2018-02-01

    We perform a complete study of the gravitational lensing effect beyond the Born approximation on the Cosmic Microwave Background (CMB) anisotropies using a multiple-lens raytracing technique through cosmological N-body simulations of the DEMNUni suite. The impact of second-order effects accounting for the non-linear evolution of large-scale structures is evaluated by propagating, for the first time, the full CMB lensing Jacobian together with the light-ray trajectories. We carefully investigate the robustness of our approach against several numerical effects in the raytracing procedure and in the N-body simulation itself, and find no evidence of large contaminations. We discuss the impact of beyond-Born corrections on lensed CMB observables, compare our results with recent analytical predictions that appeared in the literature, finding good agreement, and extend these results to smaller angular scales. We measure the gravitationally-induced CMB polarization rotation that appears in the geodesic equation at second order, and compare this result with the latest analytical predictions. We then present the prospects for detecting beyond-Born effects with the future CMB-S4 experiment. We show that corrections to the temperature power spectrum can be measured only if good control of the extragalactic foregrounds is achieved. Conversely, the beyond-Born corrections to the E- and B-mode power spectra will be much more difficult to detect.
